# Baidu ULTR Dataset - Baidu BERT-12l-12h
Query-document vectors and clicks for a subset of the [Baidu Unbiased Learning to Rank
dataset](https://arxiv.org/abs/2207.03051).
This dataset uses Baidu's 12-layer BERT cross-encoder, released in the [official starter-kit](https://github.com/ChuXiaokai/baidu_ultr_dataset/), to compute query-document vectors (768 dims).
## Setup
1. Install Hugging Face [datasets](https://huggingface.co/docs/datasets/installation)
2. Install [pandas](https://github.com/pandas-dev/pandas) and [pyarrow](https://arrow.apache.org/docs/python/index.html): `pip install pandas pyarrow`
3. Optionally, you might need to install a [pyarrow-hotfix](https://github.com/pitrou/pyarrow-hotfix) if you cannot install `pyarrow >= 14.0.1`
4. You can now use the dataset as described below.
## Load train / test click dataset:
```Python
from datasets import load_dataset
dataset = load_dataset(
    "philipphager/baidu-ultr_baidu-mlm-ctr",
    name="clicks",
    split="train",  # ["train", "test"]
    cache_dir="~/.cache/huggingface",
)
dataset.set_format("torch") # [None, "numpy", "torch", "tensorflow", "pandas", "arrow"]
```
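Each row corresponds to a single query. As a quick sanity check, you can inspect a sample; the sketch below assumes the dataset loaded above and uses field names documented under the available features:
```Python
# Inspect the first query of the click dataset (torch format set above)
sample = dataset[0]
print(sample["query_id"])                        # Baidu query id
print(sample["n"])                               # number of documents for this query
print(sample["query_document_embedding"].shape)  # (number of documents, 768)
print(sample["click"])                           # per-document click labels
```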
## Load expert annotations:
```Python
from datasets import load_dataset
dataset = load_dataset(
    "philipphager/baidu-ultr_baidu-mlm-ctr",
    name="annotations",
    split="test",
    cache_dir="~/.cache/huggingface",
)
dataset.set_format("torch") # [None, "numpy", "torch", "tensorflow", "pandas", "arrow"]
```
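For quick exploratory analysis, the annotation split can also be converted to a pandas DataFrame (a minimal sketch; column names follow the features documented below):
```Python
# Materialize the annotation split as a pandas DataFrame
df = dataset.to_pandas()
print(df[["query_id", "frequency_bucket", "n"]].head())
```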
## Available features
Each row of the click / annotation dataset contains the following attributes. Use a custom `collate_fn` to select specific features (see below):
### Click dataset
| name | dtype | description |
|------------------------------|----------------|-------------|
| query_id | string | Baidu query_id |
| query_md5 | string | MD5 hash of query text |
| query | List[int32] | List of query tokens |
| query_length | int32 | Number of query tokens |
| n | int32 | Number of documents for current query, useful for padding |
| url_md5 | List[string] | MD5 hash of document URL, most reliable document identifier |
| text_md5 | List[string] | MD5 hash of document title and abstract |
| title | List[List[int32]] | List of tokens for document titles |
| abstract | List[List[int32]] | List of tokens for document abstracts |
| query_document_embedding | Tensor[Tensor[float16]]| BERT CLS token |
| click | Tensor[int32] | Click / no click on a document |
| position | Tensor[int32] | Position in ranking (does not always match original item position) |
| media_type | Tensor[int32] | Document type (label encoding recommended as IDs do not occupy a continuous integer range) |
| displayed_time | Tensor[float32]| Seconds a document was displayed on the screen |
| serp_height | Tensor[int32] | Pixel height of a document on the screen |
| slipoff_count_after_click | Tensor[int32] | Number of times a document was scrolled off the screen after previously clicking on it |
| bm25 | Tensor[float32] | BM25 score for documents |
| bm25_title | Tensor[float32] | BM25 score for document titles |
| bm25_abstract | Tensor[float32] | BM25 score for document abstracts |
| tf_idf | Tensor[float32] | TF-IDF score for documents |
| tf | Tensor[float32] | Term frequency for documents |
| idf | Tensor[float32] | Inverse document frequency for documents |
| ql_jelinek_mercer_short | Tensor[float32] | Query likelihood score for documents using Jelinek-Mercer smoothing (alpha = 0.1) |
| ql_jelinek_mercer_long | Tensor[float32] | Query likelihood score for documents using Jelinek-Mercer smoothing (alpha = 0.7) |
| ql_dirichlet | Tensor[float32] | Query likelihood score for documents using Dirichlet smoothing (lambda = 128) |
| document_length | Tensor[int32] | Length of documents |
| title_length | Tensor[int32] | Length of document titles |
| abstract_length | Tensor[int32] | Length of document abstracts |
### Expert annotation dataset
| name | dtype | description |
|------------------------------|----------------|-------------|
| query_id | string | Baidu query_id |
| query_md5 | string | MD5 hash of query text |
| query | List[int32] | List of query tokens |
| query_length | int32 | Number of query tokens |
| frequency_bucket | int32 | Monthly frequency of query (bucket) from 0 (high frequency) to 9 (low frequency) |
| n | int32 | Number of documents for current query, useful for padding |
| url_md5 | List[string] | MD5 hash of document URL, most reliable document identifier |
| text_md5 | List[string] | MD5 hash of document title and abstract |
| title | List[List[int32]] | List of tokens for document titles |
| abstract | List[List[int32]] | List of tokens for document abstracts |
| query_document_embedding | Tensor[Tensor[float16]] | BERT CLS token |
| label | Tensor[int32] | Relevance judgments on a scale from 0 (bad) to 4 (excellent) |
| bm25 | Tensor[float32] | BM25 score for documents |
| bm25_title | Tensor[float32] | BM25 score for document titles |
| bm25_abstract | Tensor[float32] | BM25 score for document abstracts |
| tf_idf | Tensor[float32] | TF-IDF score for documents |
| tf | Tensor[float32] | Term frequency for documents |
| idf | Tensor[float32] | Inverse document frequency for documents |
| ql_jelinek_mercer_short | Tensor[float32] | Query likelihood score for documents using Jelinek-Mercer smoothing (alpha = 0.1) |
| ql_jelinek_mercer_long | Tensor[float32] | Query likelihood score for documents using Jelinek-Mercer smoothing (alpha = 0.7) |
| ql_dirichlet | Tensor[float32] | Query likelihood score for documents using Dirichlet smoothing (lambda = 128) |
| document_length | Tensor[int32] | Length of documents |
| title_length | Tensor[int32] | Length of document titles |
| abstract_length | Tensor[int32] | Length of document abstracts |
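The precomputed retrieval scores can, for example, be evaluated against the expert labels. Below is a minimal sketch computing DCG@10 for a single query when ranking its documents by BM25, assuming the torch format set above:
```Python
import torch

def dcg_at_k(labels: torch.Tensor, k: int = 10) -> torch.Tensor:
    # labels: relevance judgments (0-4) in ranked order
    labels = labels[:k].float()
    ranks = torch.arange(2, labels.shape[0] + 2, dtype=torch.float32)
    return ((2 ** labels - 1) / torch.log2(ranks)).sum()

sample = dataset[0]
order = torch.argsort(sample["bm25"], descending=True)  # rank documents by BM25
print(dcg_at_k(sample["label"][order]))
```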
## Example PyTorch collate function
Each sample in the dataset is a single query with multiple documents.
The following example demonstrates how to create a batch containing multiple queries with varying numbers of documents by applying padding:
```Python
import torch
from typing import List
from collections import defaultdict
from torch.nn.utils.rnn import pad_sequence
from torch.utils.data import DataLoader

def collate_clicks(samples: List):
    # Gather per-query tensors into lists, one entry per query in the batch
    batch = defaultdict(list)

    for sample in samples:
        batch["query_document_embedding"].append(sample["query_document_embedding"])
        batch["position"].append(sample["position"])
        batch["click"].append(sample["click"])
        batch["n"].append(sample["n"])

    # Pad variable-length document lists to the longest query in the batch
    return {
        "query_document_embedding": pad_sequence(
            batch["query_document_embedding"], batch_first=True
        ),
        "position": pad_sequence(batch["position"], batch_first=True),
        "click": pad_sequence(batch["click"], batch_first=True),
        "n": torch.tensor(batch["n"]),
    }

loader = DataLoader(dataset, collate_fn=collate_clicks, batch_size=16)
```
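A batch produced by this loader contains per-query tensors padded to the longest query in the batch, for example:
```Python
batch = next(iter(loader))
print(batch["query_document_embedding"].shape)  # (batch_size, max_documents, 768)
print(batch["click"].shape)                     # (batch_size, max_documents)
print(batch["n"])                               # true number of documents per query
```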
# ImageNetVID dataset
## Usage
Run the following commands to reassemble the split archive parts:
```bash
cat ILSVRC2015_VID.tar.gz.a* > ILSVRC2015_VID.tar.gz
cat ILSVRC2017_DET.tar.gz.a* > ILSVRC2017_DET.tar.gz
```
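After reassembling the parts, the archives can be extracted with `tar -xzf` or directly from Python; a minimal sketch using the standard library:
```python
import tarfile

# Extract the reassembled archives into the current directory
for archive in ["ILSVRC2015_VID.tar.gz", "ILSVRC2017_DET.tar.gz"]:
    with tarfile.open(archive, "r:gz") as tar:
        tar.extractall()
```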
# Dataset Card for "mc_training_data_conversations"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
# Synthetic Search Query Parsing Instruction for Instruct Falcon family
This is a version of the [EmbeddingStudio/synthetic-search-queries dataset](https://huggingface.co/datasets/EmbeddingStudio/synthetic-search-queries) formatted to align with the [Falcon-7B-Instruct](https://huggingface.co/tiiuae/falcon-7b-instruct) instruction format.
## Generation details
We used synthetically generated query parsing instructions:
* We generated lists of possible filters for 63 customer categories:
  * [Raw version of filters dataset](https://huggingface.co/datasets/EmbeddingStudio/synthetic-search-filters-raw)
  * [Split by representations](https://huggingface.co/datasets/EmbeddingStudio/synthetic-search-filters)
* We randomly selected up to 150 possible filter combinations (1-3 filters per combination), such that each filter's representation appears at most twice.
* For a given category and combination we [generated](https://huggingface.co/datasets/EmbeddingStudio/synthetic-search-queries) with GPT-4 Turbo:
  * 2 search queries and their parsed versions with unstructured parts.
  * 2 search queries and their parsed versions without unstructured parts.
* Using the filters, queries, and parsed versions, we prepared [72.5k Falcon-format instructions](EmbeddingStudio/query-parsing-instructions-falcon)
**Warning:** The EmbeddingStudio team advises you that the generated queries **have not been thoroughly curated**; they will be curated once we finish our product-market-fit stage.
### Filters generation details
We used GPT-4 Turbo to generate several possible filters for 63 company categories. For each filter we also generated some possible representations. For example, the filter `Date` can be represented as `dd/mm/YYYY`, `YYYY-mm-dd`, as words like `2024 Jan 17`, etc.
### Queries generation details
We also used GPT-4 Turbo to generate the search queries and their parsed versions. The main principles were:
* If the provided schema does not contain a possible filter, neither the query nor that filter is generated.
* If a selected representation combination contains an enumeration, we ask GPT-4 Turbo to map values in the search query and the parsed version onto it.
* If a selected representation combination contains a pattern, we ask GPT-4 Turbo to align with that pattern.
### Instructions generation details
For generating the instructions, we used the following ideas:
1. A zero-shot query parser should be schema agnostic. Naming styles like `snake_case`, `CamelCase`, or `http-headers-like` should not break the generation process.
2. A zero-shot query parser should be insensitive to spelling errors.
3. Training instructions should be in the following order:
   * Category
   * Schema
   * Query

   This way, the LLM can be used by generating the embedding of the category -> schema part once, so inference will be faster.
We take the term `schema agnostic` to mean something broader: being able to work not only with JSON, but also with HTML, Markdown, YAML, etc. We are working on this.
Our approach to achieving these abilities was as follows:
1. For each query, we generated a version with a mistake.
2. We added to each parsed version an additional field, `Correct`, which contains the corrected version of the search query.
3. For each query, we randomly selected and applied a case style for the schema fields and a case style for the filter and representation names.
4. For each query, we additionally generated two instructions:
   * One where we removed one filter from the provided schema and parsed version
   * One where we removed all related filters from the provided schema and parsed version
**Warning:** The EmbeddingStudio team asks you to curate the dataset carefully on your own.
## Instruction format
```markdown
### System: Master in Query Analysis
### Instruction: Organize queries in JSON, adhere to schema, verify spelling.
#### Category: {your_company_category}
#### Schema: ```{filters_schema}```
#### Query: {query}
### Response:
```
The filters schema is a JSON-readable line in the following format (we highly recommend using it):
A list of filters (dicts), each with:
* Name - name of the filter (better to be meaningful).
* Representations - list of possible filter formats (dicts):
  * Name - name of the representation (better to be meaningful).
  * Type - Python base type (int, float, str, bool).
  * Examples - list of examples.
  * Enum - if a representation is an enumeration, provide a list of possible values; the LLM should map the parsed value into this list.
  * Pattern - if a representation is pattern-like (datetime, regexp, etc.), provide the pattern text in any format.
Example:
```json
[{"Name": "Customer_Ratings", "Representations": [{"Name": "Exact_Rating", "Type": "float", "Examples": [4.5, 3.2, 5.0, "4.5", "Unstructured"]}, {"Name": "Minimum_Rating", "Type": "float", "Examples": [4.0, 3.0, 5.0, "4.5"]}, {"Name": "Star_Rating", "Type": "int", "Examples": [4, 3, 5], "Enum": [1, 2, 3, 4, 5]}]}, {"Name": "Date", "Representations": [{"Name": "Day_Month_Year", "Type": "str", "Examples": ["01.01.2024", "15.06.2023", "31.12.2022", "25.12.2021", "20.07.2024", "15.06.2023"], "Pattern": "dd.mm.YYYY"}, {"Name": "Day_Name", "Type": "str", "Examples": ["Monday", "Wednesday", "Friday", "Thursday", "Monday", "Tuesday"], "Enum": ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday", "Saturday", "Sunday"]}]}, {"Name": "Date_Period", "Representations": [{"Name": "Specific_Period", "Type": "str", "Examples": ["01.01.2024 - 31.01.2024", "01.06.2023 - 30.06.2023", "01.12.2022 - 31.12.2022"], "Pattern": "dd.mm.YYYY - dd.mm.YYYY"}, {"Name": "Month", "Type": "str", "Examples": ["January", "June", "December"], "Enum": ["January", "February", "March", "April", "May", "June", "July", "August", "September", "October", "November", "December"]}, {"Name": "Quarter", "Type": "str", "Examples": ["Q1", "Q2", "Q3"], "Enum": ["Q1", "Q2", "Q3", "Q4"]}, {"Name": "Season", "Type": "str", "Examples": ["Winter", "Summer", "Autumn"], "Enum": ["Winter", "Spring", "Summer", "Autumn"]}]}, {"Name": "Destination_Country", "Representations": [{"Name": "Country_Name", "Type": "str", "Examples": ["United States", "Germany", "China"]}, {"Name": "Country_Code", "Type": "str", "Examples": ["US", "DE", "CN"]}, {"Name": "Country_Abbreviation", "Type": "str", "Examples": ["USA", "GER", "CHN"]}]}]
```
As a result, the response will be a JSON-readable line in the format:
```json
[{"Value": "Corrected search phrase", "Name": "Correct"}, {"Name": "filter-name.representation", "Value": "some-value"}]
```
Field and representation names will be aligned with the provided schema. Example:
```json
[{"Value": "Which logistics companies in the US have a perfect 5.0 rating?", "Name": "Correct"}, {"Name": "Customer_Ratings.Exact_Rating", "Value": 5.0}, {"Name": "Destination_Country.Country_Code", "Value": "US"}]
```
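For downstream use, such a response line can be decoded with the standard library. A minimal sketch; the `response` value repeats the example above:
```python
import json

response = '[{"Value": "Which logistics companies in the US have a perfect 5.0 rating?", "Name": "Correct"}, {"Name": "Customer_Ratings.Exact_Rating", "Value": 5.0}]'
parsed = json.loads(response)
corrected = next(item["Value"] for item in parsed if item["Name"] == "Correct")
filters = {item["Name"]: item["Value"] for item in parsed if item["Name"] != "Correct"}
print(corrected)
print(filters)  # {'Customer_Ratings.Exact_Rating': 5.0}
```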
Used for fine-tuning `system` phrases:
```python
[
"Expert at Deconstructing Search Queries",
"Master in Query Analysis",
"Premier Search Query Interpreter",
"Advanced Search Query Decoder",
"Search Query Parsing Genius",
"Search Query Parsing Wizard",
"Unrivaled Query Parsing Mechanism",
"Search Query Parsing Virtuoso",
"Query Parsing Maestro",
"Ace of Search Query Structuring"
]
```
Used for fine-tuning `instruction` phrases:
```python
[
"Convert queries to JSON, align with schema, ensure correct spelling.",
"Analyze and structure queries in JSON, maintain schema, check spelling.",
"Organize queries in JSON, adhere to schema, verify spelling.",
"Decode queries to JSON, follow schema, correct spelling.",
"Parse queries to JSON, match schema, spell correctly.",
"Transform queries to structured JSON, align with schema and spelling.",
"Restructure queries in JSON, comply with schema, accurate spelling.",
"Rearrange queries in JSON, strict schema adherence, maintain spelling.",
"Harmonize queries with JSON schema, ensure spelling accuracy.",
"Efficient JSON conversion of queries, schema compliance, correct spelling."
]
```
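Putting the pieces together, a full prompt can be assembled from one `system` phrase, one `instruction` phrase, and a category, schema, and query following the instruction format above. A minimal sketch; the category, schema, and query values are illustrative:
```python
import json

def build_prompt(system: str, instruction: str, category: str, schema: list, query: str) -> str:
    # Mirrors the instruction format described above
    return (
        f"### System: {system}\n"
        f"### Instruction: {instruction}\n"
        f"#### Category: {category}\n"
        f"#### Schema: ```{json.dumps(schema)}```\n"
        f"#### Query: {query}\n"
        f"### Response:"
    )

# Illustrative schema with a single pattern-like filter representation
schema = [{"Name": "Date", "Representations": [{"Name": "Day_Month_Year", "Type": "str",
           "Examples": ["01.01.2024"], "Pattern": "dd.mm.YYYY"}]}]
print(build_prompt(
    "Master in Query Analysis",
    "Organize queries in JSON, adhere to schema, verify spelling.",
    "Logistics and Supply Chain Management",  # illustrative category
    schema,
    "shipments delivered on 01.01.2024",
))
```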
## Train/test splitting principles
Since we are fine-tuning the LLM to follow zero-shot query parsing instructions, we want to test its:
* Ability to work well with an unseen domain
* Ability to work well with unseen filters
* Ability to work well with unseen queries

For these purposes we:
1. Put 5 categories into the test split, completely separated from train: `Telecommunication Companies, Legal Services, Enterprise Software Development, Artificial Intelligence and Machine Learning, Documentation and Knowledge Sharing`.
2. For each company category appearing in train, put aside / removed one filter and the queries related to it.
3. Selected 5% of the remaining queries and put them into the test split.
## How to use it
```python
from datasets import load_dataset
queries_dataset = load_dataset('EmbeddingStudio/query-parsing-instructions-falcon')
``` | EmbeddingStudio/query-parsing-instructions-falcon | [
"task_categories:text-generation",
"task_categories:text2text-generation",
"size_categories:10K<n<100K",
"language:en",
"license:apache-2.0",
"falcon",
"instuct",
"zero-shot",
"query parsing",
"synthetic",
"search-queries",
"e-commerce",
"online-shops",
"travel-agencies",
"educational-institutions-ai",
"job-recruitment-automation",
"banking-digital-services",
"investment-ai-analysis",
"insurance-tech-innovation",
"financial-advisory-ai",
"credit-services-automation",
"payment-processing-tech",
"mortgage-tech-solutions",
"real-estate-digital-solutions",
"taxation-tech-services",
"risk-management-ai",
"compliance-automation",
"digital-banking-innovation",
"mobile-banking-tech",
"online-retail-tech",
"offline-retail-automation",
"automotive-dealership-tech",
"restaurant-automation-tech",
"food-delivery-ai",
"entertainment-platforms-ai",
"media-platforms-tech",
"government-services-automation",
"travel-tech-innovation",
"consumer-analytics-ai",
"logistics-tech-automation",
"supply-chain-ai",
"customer-support-tech",
"market-research-ai",
"mobile-app-dev-tech",
"game-dev-ai",
"cloud-computing-services",
"data-analytics-ai",
"business-intelligence-ai",
"cybersecurity-software-tech",
"ui-ux-design-ai",
"iot-development-tech",
"project-management-tools-ai",
"version-control-systems-tech",
"ci-cd-automation",
"issue-tracking-ai",
"bug-reporting-automation",
"collaborative-dev-environments",
"team-communication-tech",
"task-time-management-ai",
"customer-feedback-ai",
"cloud-based-dev-tech",
"image-stock-platforms-ai",
"video-hosting-tech",
"social-networks-ai",
"professional-social-networks-ai",
"dating-apps-tech",
"region:us"
] | 2024-01-15T13:06:08+00:00 | {"language": ["en"], "license": "apache-2.0", "size_categories": ["10K<n<100K"], "task_categories": ["text-generation", "text2text-generation"], "pretty_name": "Search Query Parsing Instruction for Instruct Falcon family", "dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 102322804, "num_examples": 53760}, {"name": "test", "num_bytes": 35586933, "num_examples": 18710}], "download_size": 26565908, "dataset_size": 137909737}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "tags": ["falcon", "instuct", "zero-shot", "query parsing", "synthetic", "search-queries", "e-commerce", "online-shops", "travel-agencies", "educational-institutions-ai", "job-recruitment-automation", "banking-digital-services", "investment-ai-analysis", "insurance-tech-innovation", "financial-advisory-ai", "credit-services-automation", "payment-processing-tech", "mortgage-tech-solutions", "real-estate-digital-solutions", "taxation-tech-services", "risk-management-ai", "compliance-automation", "digital-banking-innovation", "mobile-banking-tech", "online-retail-tech", "offline-retail-automation", "automotive-dealership-tech", "restaurant-automation-tech", "food-delivery-ai", "entertainment-platforms-ai", "media-platforms-tech", "government-services-automation", "travel-tech-innovation", "consumer-analytics-ai", "logistics-tech-automation", "supply-chain-ai", "customer-support-tech", "market-research-ai", "mobile-app-dev-tech", "game-dev-ai", "cloud-computing-services", "data-analytics-ai", "business-intelligence-ai", "cybersecurity-software-tech", "ui-ux-design-ai", "iot-development-tech", "project-management-tools-ai", "version-control-systems-tech", "ci-cd-automation", "issue-tracking-ai", "bug-reporting-automation", "collaborative-dev-environments", "team-communication-tech", "task-time-management-ai", "customer-feedback-ai", "cloud-based-dev-tech", "image-stock-platforms-ai", "video-hosting-tech", "social-networks-ai", "professional-social-networks-ai", "dating-apps-tech"]} | 2024-02-01T12:06:20+00:00 | [] | [
"en"
] | TAGS
#task_categories-text-generation #task_categories-text2text-generation #size_categories-10K<n<100K #language-English #license-apache-2.0 #falcon #instuct #zero-shot #query parsing #synthetic #search-queries #e-commerce #online-shops #travel-agencies #educational-institutions-ai #job-recruitment-automation #banking-digital-services #investment-ai-analysis #insurance-tech-innovation #financial-advisory-ai #credit-services-automation #payment-processing-tech #mortgage-tech-solutions #real-estate-digital-solutions #taxation-tech-services #risk-management-ai #compliance-automation #digital-banking-innovation #mobile-banking-tech #online-retail-tech #offline-retail-automation #automotive-dealership-tech #restaurant-automation-tech #food-delivery-ai #entertainment-platforms-ai #media-platforms-tech #government-services-automation #travel-tech-innovation #consumer-analytics-ai #logistics-tech-automation #supply-chain-ai #customer-support-tech #market-research-ai #mobile-app-dev-tech #game-dev-ai #cloud-computing-services #data-analytics-ai #business-intelligence-ai #cybersecurity-software-tech #ui-ux-design-ai #iot-development-tech #project-management-tools-ai #version-control-systems-tech #ci-cd-automation #issue-tracking-ai #bug-reporting-automation #collaborative-dev-environments #team-communication-tech #task-time-management-ai #customer-feedback-ai #cloud-based-dev-tech #image-stock-platforms-ai #video-hosting-tech #social-networks-ai #professional-social-networks-ai #dating-apps-tech #region-us
| # Synthetic Search Query Parsing Instruction for Instruct Falcon family
This is the version of EmbeddingStudio/synthetic-search-queries dataset created the way to be aligned with Falcon-7B-Instruct instruction format.
## Generation details
We used synthetically generated query parsing instructions:
* We generated lists of possible filters for 63 customer categories:
* Raw version of filters dataset
* Split by representations
* Select randomly up-to 150 possible combinations (1-3 filters in each combination) of filters, the way each filter's representation appears maximum twice.
* For a given category and combination we generated with GPT-4 Turbo:
* 2 search queries and theirs parsed version with unstructured parts.
* 2 search queries and theirs parsed version without unstructured part.
* Using filters, queries and parsed version we prepared 72.5k falcon format instruction
Warning: EmbeddingStudio team aware you that generated queries weren't enough curated, and will be curated later once we finish our product market fit stage.
### Filters generation details
We used GPT-4 Turbo to generate several possible filters for 63 company categroies. For each filter we also generated some possible representations. For examples filter 'Date' can be represented as 'dd/mm/YYYY', 'YYYY-mm-dd', as words '2024 Jan 17', etc.
### Queries generation details
We also used GPT-4 Turbo for generation of search queries and theirs parsed version. Main principles were:
* If passed schema doesn't contain possible filter, do not generate query itself or a possible filter
* If a selected representations combination contains enumeration, so we ask to map values in a search query and a parsed version.
* If a selected representations combination contains pattern, so we ask GPT-4 Turbo to be aligned with a pattern
### Instructions generation details
For the generation instructions we used following ideas:
1. Zero-Shot query parser should be schema agnostic. Cases like 'snake_case, CamelCase, http-headers-like' should not ruin generation process.
2. Zero-Shot query parser should be spelling errors insensitive.
3. Training instructions should be in the following order:
* Category
* Schema
* Query
So LLM can be used in the following way: just generate embedding of category -> schema part, so inference will be faster.
We assume, that 'schema agnostic' termin means something wider, like to be able to work not only with JSONs, but also with HTML, Markdown, YAML, etc. We are working on it.
So, what was our approach as an attempt to achieve these abilities:
1. For each query we generated a version with a mistake
2. Passed to each parsed version an additional field 'Correct', which contains a corrected version of a search query.
3. For each query we randomly selected and used a case for schema fields and a case for filter and representation names.
4. For each query we additionally generated two instuctions:
* Where did we remove from a provided schema and parsed version one filter
* Where did we remove from a provided schema and parsed version all related filters
Warning: EmbeddingStudio team ask you to curate datasets on your own precisely.
## Instruction format
{filters_schema}
Filters schema is JSON-readable line in the format (we highly recommend you to use it):
List of filters (dict):
* Name - name of filter (better to be meaningful).
* Representations - list of possible filter formats (dict):
* Name - name of representation (better to be meaningful).
* Type - python base type (int, float, str, bool).
* Examples - list of examples.
* Enum - if a representation is enumeration, provide a list of possible values, LLM should map parsed value into this list.
* Pattern - if a representation is pattern-like (datetime, regexp, etc.) provide a pattern text in any format.
Example:
As the result, response will be JSON-readable line in the format:
Field and representation names will be aligned with the provided schema. Example:
Used for fine-tuning 'system' phrases:
Used for fine-tuning 'instruction' phrases:
## Train/test splitting principles
As we are trying to fine-tune LLM to follow zero-shot query parsing instructions, so we want to test:
* Ability to work well with unseen domain
* Ability to work well with unseen filters
* Ability to work well with unseen queries
For these purposes we:
1. We put into test split 5 categories, completely separared from train: 'Telecommunication Companies, Legal Services, Enterprise Software Development, Artificial Intelligence and Machine Learning, Documentation and Knowledge Sharing'.
2. Also out of each appearing in train company categories, we put aside / removed one filter and queries related to it.
3. Selected 5% of other queries and put it into test.
## How to use it
| [
"# Synthetic Search Query Parsing Instruction for Instruct Falcon family\n\nThis is the version of EmbeddingStudio/synthetic-search-queries dataset created the way to be aligned with Falcon-7B-Instruct instruction format.",
"## Generation details\n\nWe used synthetically generated query parsing instructions:\n* We generated lists of possible filters for 63 customer categories: \n * Raw version of filters dataset\n * Split by representations\n* Select randomly up-to 150 possible combinations (1-3 filters in each combination) of filters, the way each filter's representation appears maximum twice.\n* For a given category and combination we generated with GPT-4 Turbo:\n * 2 search queries and theirs parsed version with unstructured parts.\n * 2 search queries and theirs parsed version without unstructured part. \n* Using filters, queries and parsed version we prepared 72.5k falcon format instruction\n\nWarning: EmbeddingStudio team aware you that generated queries weren't enough curated, and will be curated later once we finish our product market fit stage.",
"### Filters generation details\n\nWe used GPT-4 Turbo to generate several possible filters for 63 company categroies. For each filter we also generated some possible representations. For examples filter 'Date' can be represented as 'dd/mm/YYYY', 'YYYY-mm-dd', as words '2024 Jan 17', etc.",
"### Queries generation details\n\nWe also used GPT-4 Turbo for generation of search queries and theirs parsed version. Main principles were: \n* If passed schema doesn't contain possible filter, do not generate query itself or a possible filter \n* If a selected representations combination contains enumeration, so we ask to map values in a search query and a parsed version.\n* If a selected representations combination contains pattern, so we ask GPT-4 Turbo to be aligned with a pattern",
"### Instructions generation details\n\nFor the generation instructions we used following ideas:\n1. Zero-Shot query parser should be schema agnostic. Cases like 'snake_case, CamelCase, http-headers-like' should not ruin generation process. \n2. Zero-Shot query parser should be spelling errors insensitive.\n3. Training instructions should be in the following order:\n * Category\n * Schema\n * Query\n \n So LLM can be used in the following way: just generate embedding of category -> schema part, so inference will be faster.\n\nWe assume, that 'schema agnostic' termin means something wider, like to be able to work not only with JSONs, but also with HTML, Markdown, YAML, etc. We are working on it.\n\nSo, what was our approach as an attempt to achieve these abilities:\n1. For each query we generated a version with a mistake\n2. Passed to each parsed version an additional field 'Correct', which contains a corrected version of a search query.\n3. For each query we randomly selected and used a case for schema fields and a case for filter and representation names.\n4. For each query we additionally generated two instuctions:\n * Where did we remove from a provided schema and parsed version one filter\n * Where did we remove from a provided schema and parsed version all related filters\n\nWarning: EmbeddingStudio team ask you to curate datasets on your own precisely.",
"## Instruction format\n\n{filters_schema}\n\nFilters schema is JSON-readable line in the format (we highly recommend you to use it):\nList of filters (dict):\n* Name - name of filter (better to be meaningful).\n* Representations - list of possible filter formats (dict):\n * Name - name of representation (better to be meaningful).\n * Type - python base type (int, float, str, bool).\n * Examples - list of examples.\n * Enum - if a representation is enumeration, provide a list of possible values, LLM should map parsed value into this list.\n * Pattern - if a representation is pattern-like (datetime, regexp, etc.) provide a pattern text in any format.\n\nExample:\n\n\nAs the result, response will be JSON-readable line in the format:\n\n\nField and representation names will be aligned with the provided schema. Example:\n\n\n\nUsed for fine-tuning 'system' phrases:\n\n\nUsed for fine-tuning 'instruction' phrases:",
"## Train/test splitting principles\n\nAs we are trying to fine-tune LLM to follow zero-shot query parsing instructions, so we want to test:\n* Ability to work well with unseen domain\n* Ability to work well with unseen filters\n* Ability to work well with unseen queries\n\nFor these purposes we:\n1. We put into test split 5 categories, completely separared from train: 'Telecommunication Companies, Legal Services, Enterprise Software Development, Artificial Intelligence and Machine Learning, Documentation and Knowledge Sharing'.\n2. Also out of each appearing in train company categories, we put aside / removed one filter and queries related to it.\n3. Selected 5% of other queries and put it into test.",
"## How to use it"
] | [
"TAGS\n#task_categories-text-generation #task_categories-text2text-generation #size_categories-10K<n<100K #language-English #license-apache-2.0 #falcon #instuct #zero-shot #query parsing #synthetic #search-queries #e-commerce #online-shops #travel-agencies #educational-institutions-ai #job-recruitment-automation #banking-digital-services #investment-ai-analysis #insurance-tech-innovation #financial-advisory-ai #credit-services-automation #payment-processing-tech #mortgage-tech-solutions #real-estate-digital-solutions #taxation-tech-services #risk-management-ai #compliance-automation #digital-banking-innovation #mobile-banking-tech #online-retail-tech #offline-retail-automation #automotive-dealership-tech #restaurant-automation-tech #food-delivery-ai #entertainment-platforms-ai #media-platforms-tech #government-services-automation #travel-tech-innovation #consumer-analytics-ai #logistics-tech-automation #supply-chain-ai #customer-support-tech #market-research-ai #mobile-app-dev-tech #game-dev-ai #cloud-computing-services #data-analytics-ai #business-intelligence-ai #cybersecurity-software-tech #ui-ux-design-ai #iot-development-tech #project-management-tools-ai #version-control-systems-tech #ci-cd-automation #issue-tracking-ai #bug-reporting-automation #collaborative-dev-environments #team-communication-tech #task-time-management-ai #customer-feedback-ai #cloud-based-dev-tech #image-stock-platforms-ai #video-hosting-tech #social-networks-ai #professional-social-networks-ai #dating-apps-tech #region-us \n",
"# Synthetic Search Query Parsing Instruction for Instruct Falcon family\n\nThis is the version of EmbeddingStudio/synthetic-search-queries dataset created the way to be aligned with Falcon-7B-Instruct instruction format.",
"## Generation details\n\nWe used synthetically generated query parsing instructions:\n* We generated lists of possible filters for 63 customer categories: \n * Raw version of filters dataset\n * Split by representations\n* Select randomly up-to 150 possible combinations (1-3 filters in each combination) of filters, the way each filter's representation appears maximum twice.\n* For a given category and combination we generated with GPT-4 Turbo:\n * 2 search queries and theirs parsed version with unstructured parts.\n * 2 search queries and theirs parsed version without unstructured part. \n* Using filters, queries and parsed version we prepared 72.5k falcon format instruction\n\nWarning: EmbeddingStudio team aware you that generated queries weren't enough curated, and will be curated later once we finish our product market fit stage.",
"### Filters generation details\n\nWe used GPT-4 Turbo to generate several possible filters for 63 company categroies. For each filter we also generated some possible representations. For examples filter 'Date' can be represented as 'dd/mm/YYYY', 'YYYY-mm-dd', as words '2024 Jan 17', etc.",
"### Queries generation details\n\nWe also used GPT-4 Turbo for generation of search queries and theirs parsed version. Main principles were: \n* If passed schema doesn't contain possible filter, do not generate query itself or a possible filter \n* If a selected representations combination contains enumeration, so we ask to map values in a search query and a parsed version.\n* If a selected representations combination contains pattern, so we ask GPT-4 Turbo to be aligned with a pattern",
"### Instructions generation details\n\nFor the generation instructions we used following ideas:\n1. Zero-Shot query parser should be schema agnostic. Cases like 'snake_case, CamelCase, http-headers-like' should not ruin generation process. \n2. Zero-Shot query parser should be spelling errors insensitive.\n3. Training instructions should be in the following order:\n * Category\n * Schema\n * Query\n \n So LLM can be used in the following way: just generate embedding of category -> schema part, so inference will be faster.\n\nWe assume, that 'schema agnostic' termin means something wider, like to be able to work not only with JSONs, but also with HTML, Markdown, YAML, etc. We are working on it.\n\nSo, what was our approach as an attempt to achieve these abilities:\n1. For each query we generated a version with a mistake\n2. Passed to each parsed version an additional field 'Correct', which contains a corrected version of a search query.\n3. For each query we randomly selected and used a case for schema fields and a case for filter and representation names.\n4. For each query we additionally generated two instuctions:\n * Where did we remove from a provided schema and parsed version one filter\n * Where did we remove from a provided schema and parsed version all related filters\n\nWarning: EmbeddingStudio team ask you to curate datasets on your own precisely.",
"## Instruction format\n\n{filters_schema}\n\nFilters schema is JSON-readable line in the format (we highly recommend you to use it):\nList of filters (dict):\n* Name - name of filter (better to be meaningful).\n* Representations - list of possible filter formats (dict):\n * Name - name of representation (better to be meaningful).\n * Type - python base type (int, float, str, bool).\n * Examples - list of examples.\n * Enum - if a representation is enumeration, provide a list of possible values, LLM should map parsed value into this list.\n * Pattern - if a representation is pattern-like (datetime, regexp, etc.) provide a pattern text in any format.\n\nExample:\n\n\nAs the result, response will be JSON-readable line in the format:\n\n\nField and representation names will be aligned with the provided schema. Example:\n\n\n\nUsed for fine-tuning 'system' phrases:\n\n\nUsed for fine-tuning 'instruction' phrases:",
"## Train/test splitting principles\n\nAs we are trying to fine-tune LLM to follow zero-shot query parsing instructions, so we want to test:\n* Ability to work well with unseen domain\n* Ability to work well with unseen filters\n* Ability to work well with unseen queries\n\nFor these purposes we:\n1. We put into test split 5 categories, completely separared from train: 'Telecommunication Companies, Legal Services, Enterprise Software Development, Artificial Intelligence and Machine Learning, Documentation and Knowledge Sharing'.\n2. Also out of each appearing in train company categories, we put aside / removed one filter and queries related to it.\n3. Selected 5% of other queries and put it into test.",
"## How to use it"
] | [
504,
55,
191,
78,
106,
327,
232,
159,
5
] | [
"passage: TAGS\n#task_categories-text-generation #task_categories-text2text-generation #size_categories-10K<n<100K #language-English #license-apache-2.0 #falcon #instuct #zero-shot #query parsing #synthetic #search-queries #e-commerce #online-shops #travel-agencies #educational-institutions-ai #job-recruitment-automation #banking-digital-services #investment-ai-analysis #insurance-tech-innovation #financial-advisory-ai #credit-services-automation #payment-processing-tech #mortgage-tech-solutions #real-estate-digital-solutions #taxation-tech-services #risk-management-ai #compliance-automation #digital-banking-innovation #mobile-banking-tech #online-retail-tech #offline-retail-automation #automotive-dealership-tech #restaurant-automation-tech #food-delivery-ai #entertainment-platforms-ai #media-platforms-tech #government-services-automation #travel-tech-innovation #consumer-analytics-ai #logistics-tech-automation #supply-chain-ai #customer-support-tech #market-research-ai #mobile-app-dev-tech #game-dev-ai #cloud-computing-services #data-analytics-ai #business-intelligence-ai #cybersecurity-software-tech #ui-ux-design-ai #iot-development-tech #project-management-tools-ai #version-control-systems-tech #ci-cd-automation #issue-tracking-ai #bug-reporting-automation #collaborative-dev-environments #team-communication-tech #task-time-management-ai #customer-feedback-ai #cloud-based-dev-tech #image-stock-platforms-ai #video-hosting-tech #social-networks-ai #professional-social-networks-ai #dating-apps-tech #region-us \n",
"passage: # Synthetic Search Query Parsing Instruction for Instruct Falcon family\n\nThis is the version of EmbeddingStudio/synthetic-search-queries dataset created the way to be aligned with Falcon-7B-Instruct instruction format.## Generation details\n\nWe used synthetically generated query parsing instructions:\n* We generated lists of possible filters for 63 customer categories: \n * Raw version of filters dataset\n * Split by representations\n* Select randomly up-to 150 possible combinations (1-3 filters in each combination) of filters, the way each filter's representation appears maximum twice.\n* For a given category and combination we generated with GPT-4 Turbo:\n * 2 search queries and theirs parsed version with unstructured parts.\n * 2 search queries and theirs parsed version without unstructured part. \n* Using filters, queries and parsed version we prepared 72.5k falcon format instruction\n\nWarning: EmbeddingStudio team aware you that generated queries weren't enough curated, and will be curated later once we finish our product market fit stage.### Filters generation details\n\nWe used GPT-4 Turbo to generate several possible filters for 63 company categroies. For each filter we also generated some possible representations. For examples filter 'Date' can be represented as 'dd/mm/YYYY', 'YYYY-mm-dd', as words '2024 Jan 17', etc.### Queries generation details\n\nWe also used GPT-4 Turbo for generation of search queries and theirs parsed version. Main principles were: \n* If passed schema doesn't contain possible filter, do not generate query itself or a possible filter \n* If a selected representations combination contains enumeration, so we ask to map values in a search query and a parsed version.\n* If a selected representations combination contains pattern, so we ask GPT-4 Turbo to be aligned with a pattern"
] |
410e300643a2fbf8ee7b64ed3a64838f5f130ab9 |
# Dataset Card for fgan-annotate-dataset
This dataset has been created with [Argilla](https://docs.argilla.io).
As shown in the sections below, this dataset can be loaded into Argilla as explained in [Load with Argilla](#load-with-argilla), or used directly with the `datasets` library in [Load with `datasets`](#load-with-datasets).
## Dataset Description
- **Homepage:** https://argilla.io
- **Repository:** https://github.com/argilla-io/argilla
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset contains:
* A dataset configuration file conforming to the Argilla dataset format named `argilla.yaml`. This configuration file will be used to configure the dataset when using the `FeedbackDataset.from_huggingface` method in Argilla.
* Dataset records in a format compatible with HuggingFace `datasets`. These records will be loaded automatically when using `FeedbackDataset.from_huggingface` and can be loaded independently using the `datasets` library via `load_dataset`.
* The [annotation guidelines](#annotation-guidelines) that have been used for building and curating the dataset, if they've been defined in Argilla.
### Load with Argilla
To load with Argilla, you'll just need to install Argilla as `pip install argilla --upgrade` and then use the following code:
```python
import argilla as rg
ds = rg.FeedbackDataset.from_huggingface("aaronemmanuel/fgan-annotate-dataset")
```
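If you also want to push the loaded records to a running Argilla server for annotation, a minimal sketch along these lines should work (the server URL, API key, and workspace below are placeholder assumptions; use your own instance's settings):

```python
import argilla as rg

# Placeholder connection settings; point these at your own Argilla instance.
rg.init(api_url="http://localhost:6900", api_key="owner.apikey")

ds = rg.FeedbackDataset.from_huggingface("aaronemmanuel/fgan-annotate-dataset")
# Create a remote copy of the dataset on the server so annotators can work on it.
remote_ds = ds.push_to_argilla(name="fgan-annotate-dataset", workspace="admin")
```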
### Load with `datasets`
To load this dataset with `datasets`, you'll just need to install `datasets` as `pip install datasets --upgrade` and then use the following code:
```python
from datasets import load_dataset
ds = load_dataset("aaronemmanuel/fgan-annotate-dataset")
```
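For a quick look at the records outside of Argilla, you can convert the split to pandas and inspect the text fields described in the [Dataset Structure section](#dataset-structure) (a minimal sketch, assuming pandas is installed):

```python
from datasets import load_dataset

ds = load_dataset("aaronemmanuel/fgan-annotate-dataset", split="train")

# Convert to a pandas DataFrame and inspect the three text fields.
df = ds.to_pandas()
print(df[["background", "prompt", "response"]].head())
```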
### Supported Tasks and Leaderboards
This dataset can contain [multiple fields, questions and responses](https://docs.argilla.io/en/latest/conceptual_guides/data_model.html#feedback-dataset) so it can be used for different NLP tasks, depending on the configuration. The dataset structure is described in the [Dataset Structure section](#dataset-structure).
There are no leaderboards associated with this dataset.
### Languages
[More Information Needed]
## Dataset Structure
### Data in Argilla
The dataset is created in Argilla with: **fields**, **questions**, **suggestions**, **metadata**, **vectors**, and **guidelines**.
The **fields** are the dataset records themselves, for the moment just text fields are supported. These are the ones that will be used to provide responses to the questions.
| Field Name | Title | Type | Required | Markdown |
| ---------- | ----- | ---- | -------- | -------- |
| background | Background | text | True | False |
| prompt | Prompt | text | True | False |
| response | Final Response | text | True | False |
The **questions** are the questions that will be asked to the annotators. They can be of different types, such as rating, text, label_selection, multi_label_selection, or ranking.
| Question Name | Title | Type | Required | Description | Values/Labels |
| ------------- | ----- | ---- | -------- | ----------- | ------------- |
| response_correction | Response_correction | text | True | N/A | N/A |
The **suggestions** are human- or machine-generated recommendations for each question that assist the annotator during the annotation process. They are always linked to the existing questions and named by appending "-suggestion" and "-suggestion-metadata" to those, containing the value(s) of the suggestion and its metadata, respectively. Accordingly, the possible values are the same as in the table above, but the column name is appended with "-suggestion" and the metadata is appended with "-suggestion-metadata".
The **metadata** is a dictionary that can be used to provide additional information about the dataset record. This can be useful to provide additional context to the annotators, or to provide additional information about the dataset record itself. For example, you can use this to provide a link to the original source of the dataset record, or to provide additional information about the dataset record itself, such as the author, the date, or the source. The metadata is always optional, and can be potentially linked to the `metadata_properties` defined in the dataset configuration file in `argilla.yaml`.
| Metadata Name | Title | Type | Values | Visible for Annotators |
| ------------- | ----- | ---- | ------ | ---------------------- |
The **guidelines** are optional as well, and are just a plain string that can be used to provide instructions to the annotators. Find those in the [annotation guidelines](#annotation-guidelines) section.
### Data Instances
An example of a dataset instance in Argilla looks as follows:
```json
{
"external_id": null,
"fields": {
"background": "Background: ITU has published Use cases for Autonomous Networks. ITU focus group on autonomous networks studies the use cases and ITU-T SG13 has published many use cases on autonomous networks. These use cases are categorised into two main categories depending on whether they are related to application of autonomous networks or related to the core concepts of autonomous networks.",
"prompt": "\u003chuman\u003e: who publishes use cases for autonomous networks?",
"response": "\u003cbot\u003e: ITU publishes use cases for autonomous networks based on the work of ITU focus group on autonomous networks and ITU-T SG13."
},
"metadata": {},
"responses": [],
"suggestions": [],
"vectors": {}
}
```
While the same record in HuggingFace `datasets` looks as follows:
```json
{
"background": "Background: ITU has published Use cases for Autonomous Networks. ITU focus group on autonomous networks studies the use cases and ITU-T SG13 has published many use cases on autonomous networks. These use cases are categorised into two main categories depending on whether they are related to application of autonomous networks or related to the core concepts of autonomous networks.",
"external_id": null,
"metadata": "{}",
"prompt": "\u003chuman\u003e: who publishes use cases for autonomous networks?",
"response": "\u003cbot\u003e: ITU publishes use cases for autonomous networks based on the work of ITU focus group on autonomous networks and ITU-T SG13.",
"response_correction": [],
"response_correction-suggestion": null,
"response_correction-suggestion-metadata": {
"agent": null,
"score": null,
"type": null
}
}
```
### Data Fields
Among the dataset fields, we differentiate between the following:
* **Fields:** These are the dataset records themselves, for the moment just text fields are supported. These are the ones that will be used to provide responses to the questions.
* **background** is of type `text`.
* **prompt** is of type `text`.
* **response** is of type `text`.
* **Questions:** These are the questions that will be asked to the annotators. They can be of different types, such as `RatingQuestion`, `TextQuestion`, `LabelQuestion`, `MultiLabelQuestion`, and `RankingQuestion`.
* **response_correction** is of type `text`.
* **Suggestions:** As of Argilla 1.13.0, the suggestions have been included to provide the annotators with suggestions to ease or assist during the annotation process. Suggestions are linked to the existing questions, are always optional, and contain not just the suggestion itself, but also the metadata linked to it, if applicable.
* (optional) **response_correction-suggestion** is of type `text`.
Additionally, we also have two more fields that are optional and are the following:
* **metadata:** This is an optional field that can be used to provide additional information about the dataset record. This can be useful to provide additional context to the annotators, or to provide additional information about the dataset record itself. For example, you can use this to provide a link to the original source of the dataset record, or to provide additional information about the dataset record itself, such as the author, the date, or the source. The metadata is always optional, and can be potentially linked to the `metadata_properties` defined in the dataset configuration file in `argilla.yaml`.
* **external_id:** This is an optional field that can be used to provide an external ID for the dataset record. This can be useful if you want to link the dataset record to an external resource, such as a database or a file.
### Data Splits
The dataset contains a single split, which is `train`.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation guidelines
Please read the question carefully and try to answer it as accurately as possible.
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | aaronemmanuel/fgan-annotate-dataset | [
"size_categories:n<1K",
"rlfh",
"argilla",
"human-feedback",
"region:us"
] | 2024-01-15T13:07:13+00:00 | {"size_categories": "n<1K", "tags": ["rlfh", "argilla", "human-feedback"]} | 2024-01-15T13:42:02+00:00 | [] | [] | TAGS
#size_categories-n<1K #rlfh #argilla #human-feedback #region-us
6c8ecfd766a0c54a0c7368f30bfce1e8dd7461ef |
# Dataset Card for Dataset Name
<!-- Provide a quick summary of the dataset. -->
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | Zuylele/reddit-posts | [
"region:us"
] | 2024-01-15T13:32:30+00:00 | {} | 2024-01-19T07:42:33+00:00 | [] | [] | TAGS
#region-us
e66321c18329d82defb90bc26a4bf9c01e41a814 |
# Dataset Card for Dataset Name
<!-- Provide a quick summary of the dataset. -->
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | beelzeebuub/FJ-flagging | [
"region:us"
] | 2024-01-15T13:33:43+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data.csv"}]}]} | 2024-01-15T13:49:42+00:00 | [] | [] | TAGS
#region-us
cdf63f36dbe1b7d43e78d6bcd0f1452d9cc16407 |
# Japanese Wikipedia Human Retrieval dataset
This is a Japanese question answering dataset with retrieval over Wikipedia articles
by trained human workers.
## About the dataset
Each entry represents a single QA session:
given a question sentence, the responsible worker searched for the appropriate
information on Wikipedia using the search box and/or internal hyperlinks, and constructed
the answer paragraphs from the search results.
The whole retrieval process was recorded manually by the same worker.

All sessions were processed between 2023-12-04 and 2023-12-25 with access to Japanese Wikipedia
(http://ja.wikipedia.org/).
The dataset consists of the following data:
* A question sentence
* The final answer paragraph (the whole paragraph, and sentence chunks with citations)
* A list of references, each with either an extracted paragraph or a summarization from a Wikipedia article
## Target situation and limitation
We designed this dataset to ensure that the answers reflect only the exact information written in the cited references,
and do not reflect any external information or implicit knowledge.
This design is useful for measuring/investigating QA tasks with accurate retrieval from a given data source.
Please keep in mind that the dataset is not designed to provide factually correct answers to the questions.

We strictly required the workers to answer the questions based only on explicit citations from Wikipedia.
That means the workers had to write answers that may differ from their implicit knowledge,
and had to leave the answer empty if they couldn't find any supporting information on Wikipedia,
even if they knew something that would answer the question.
## Dataset chunks
In addition to successful sessions with answer paragraphs, we also recorded failed sessions,
in which the worker failed to construct an answer from the search results.
In these cases we recorded at least the retrieval process, despite the lack of an answer.
We release this version of the dataset with the following dataset chunks:
* "answered" chunk (838 examples): question, answer, and retrieval process
* "not_answered" chunk (433 examples): question and retrieval process (no answer)
## Data structure
Each entry has the following schema:
```js
{
"id": number, // Entry ID
"question": string, // Question sentence
// Answer section
// Absence of this field means that the worker failed to answer the question.
"answer": {
"text": string, // Answer paragraph
// Answer sentences
// These sentences are written by the workers based on the cited references.
// The above answer paragraph is generated by joining all texts in this list.
"sentences": [
{
"text": string, // Answer sentence
"citations": number[], // List of reference IDs for citation
}
],
},
// Reference list
"references": [
{
// Either "search" or "link" field exists.
// Information for direct search (search box on Wikipedia)
"search": {
"keywords": string[], // List of words input into the search box
},
// Information for hyperlinks
"link": {
"referrer": number, // The reference ID at which the worker clicked the hyperlink
}
// Either "page" or "not_found" field exists.
// Extracted content
"page": {
"title": string, // Title of the Wikipedia article
"url": string, // URL of the Wikipedia article
// Either "quote" or "summary" field exists.
// Absence of both fields means that the page doesn't contain appropriate data.
// Information for direct quotation
// There could be multiple "page" fields with "quote" subfield if multiple
// sentences are extracted from distant positions in the same page.
"quote": {
"text": string, // Consecutive texts extracted from the article
},
// Information for summarization
"summary": {
"text": string, // Summary text about the page written by the worker.
"method": string, // Description about how the worker wrote the summary.
}
}
// Search result (not found)
"not_found": {
"url": string, // URL of the Wikipedia search results
}
},
],
}
```
Example ("answered" data ID=1):
```json
{
"id": 1,
"question": "経済産業省の役割について知りたい。",
"answer": {
"text": "経済産業省は、日本の行政機関のひとつです。経済および産業の発展ならびに鉱物資源およびエネルギー資源の供給に関する行政を所管しています。民間の経済活力の向上及び対外経済関係の円滑な発展を中心とする経済及び産業の発展並びに鉱物資源及びエネルギー資源の安定的かつ効率的な供給の確保を図るために、マクロ経済政策、産業政策、通商政策、貿易管理業務、産業技術政策、流通政策、エネルギー政策などを所管しています。",
"sentences": [
{
"text": "経済産業省は、日本の行政機関のひとつです。",
"citations": [
0
]
},
{
"text": "経済および産業の発展ならびに鉱物資源およびエネルギー資源の供給に関する行政を所管しています。",
"citations": [
0
]
},
{
"text": "民間の経済活力の向上及び対外経済関係の円滑な発展を中心とする経済及び産業の発展並びに鉱物資源及びエネルギー資源の安定的かつ効率的な供給の確保を図るために、マクロ経済政策、産業政策、通商政策、貿易管理業務、産業技術政策、流通政策、エネルギー政策などを所管しています。",
"citations": [
1
]
}
]
},
"references": [
{
"search": {
"keywords": [
"経済産業省"
]
},
"page": {
"title": "経済産業省",
"url": "https://ja.wikipedia.org/wiki/%E7%B5%8C%E6%B8%88%E7%94%A3%E6%A5%AD%E7%9C%81",
"quote": {
"text": "経済産業省(けいざいさんぎょうしょう、英: Ministry of Economy, Trade and Industry、略称: METI)は、日本の行政機関のひとつ[4]。経済および産業の発展ならびに鉱物資源およびエネルギー資源の供給に関する行政を所管する[注釈 1]。"
}
}
},
{
"search": {
"keywords": [
"経済産業省"
]
},
"page": {
"title": "経済産業省",
"url": "https://ja.wikipedia.org/wiki/%E7%B5%8C%E6%B8%88%E7%94%A3%E6%A5%AD%E7%9C%81",
"quote": {
"text": "経済産業省設置法第3条の定める任務である「民間の経済活力の向上及び対外経済関係の円滑な発展を中心とする経済及び産業の発展並びに鉱物資源及びエネルギー資源の安定的かつ効率的な供給の確保を図ること」を達成するため、マクロ経済政策、産業政策、通商政策、貿易管理業務、産業技術政策、流通政策、エネルギー政策などを所管する。"
}
}
}
]
}
```
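As a sketch of how entries like the one above can be consumed, the following walks one entry and prints each answer sentence together with the titles of its cited references; all field names follow the schema above:

```python
def print_answer_with_citations(entry: dict) -> None:
    # Failed sessions carry no "answer" field (see the schema above).
    answer = entry.get("answer")
    if answer is None:
        print("(not answered)")
        return

    for sentence in answer["sentences"]:
        titles = []
        # Citation IDs index into the entry's reference list.
        for ref_id in sentence["citations"]:
            page = entry["references"][ref_id].get("page")
            titles.append(page["title"] if page else "(not found)")
        print(sentence["text"], "->", titles)
```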
## Corresponding author
* Yusuke Oda (@odashi on GitHub) | baobab-trees/wikipedia-human-retrieval-ja | [
"task_categories:question-answering",
"size_categories:1K<n<10K",
"language:ja",
"license:apache-2.0",
"region:us"
] | 2024-01-15T13:52:30+00:00 | {"language": ["ja"], "license": "apache-2.0", "size_categories": ["1K<n<10K"], "task_categories": ["question-answering"]} | 2024-01-18T05:22:23+00:00 | [] | [
"ja"
] |
0c4c6da19abfab58280d186dbb2dfc8e67aa1972 |
# Synthetic Search Queries
This dataset contains synthetic search queries generated with GPT-4 Turbo, based on [the given filters schema](https://huggingface.co/datasets/EmbeddingStudio/synthetic-search-filters-raw) for the following business/service categories:
```
Educational Institutions, Job Recruitment Agencies, Banking Services, Investment Services, Insurance Services, Financial Planning and Advisory, Credit Services, Payment Processing, Mortgage and Real Estate Services, Taxation Services, Risk Management and Compliance, Digital and Mobile Banking, Retail Stores (Online and Offline), Automotive Dealerships, Restaurants and Food Delivery Services, Entertainment and Media Platforms, Government Services, Travelers and Consumers, Logistics and Supply Chain Management, Customer Support Services, Market Research Firms, Mobile App Development, Game Development, Cloud Computing Services, Data Analytics and Business Intelligence, Cybersecurity Software, User Interface/User Experience Design, Internet of Things (IoT) Development, Project Management Tools, Version Control Systems, Continuous Integration/Continuous Deployment, Issue Tracking and Bug Reporting, Collaborative Development Environments, Team Communication and Chat Tools, Task and Time Management, Customer Support and Feedback, Cloud-based Development Environments, Image Stock Platforms, Video Hosting and Portals, Social Networks, Professional Social Networks, Dating Apps, Telecommunication Companies, Legal Services Enterprise Software Development, Artificial Intelligence and Machine Learning, Documentation and Knowledge Sharing
```
## Column descriptions
* Query (type: str) - the generated search query.
* category (type: str) - the name of the related business / service category.
* Parsed (type: List[str]) - a list of JSON-readable parsed values:
  * Name (type: str) - the name of a representation from the provided filters schema.
  * Type (type: str) - a Python-like type.
  * Value (type: Union[str, float, int]) - the parsed value itself; it may not appear verbatim in the query if the related filter is an enumeration.
## Generation strategy
We used synthetically generated query parsing instructions:
* We generated lists of possible filters for 63 customer categories:
* [Raw version of filters dataset](https://huggingface.co/datasets/EmbeddingStudio/synthetic-search-filters-raw)
* [Split by representations](https://huggingface.co/datasets/EmbeddingStudio/synthetic-search-filters)
* We randomly selected up to 150 combinations of filters (1-3 filters per combination), such that each filter's representation appears at most twice.
* For a given category and combination we [generated](https://huggingface.co/datasets/EmbeddingStudio/synthetic-search-queries) with GPT-4 Turbo:
  * 2 search queries and their parsed versions with unstructured parts.
  * 2 search queries and their parsed versions without an unstructured part.
* Using the filters, queries, and parsed versions, we prepared [72.5k Falcon-format instructions](EmbeddingStudio/query-parsing-instructions-falcon).
**Warning:** The EmbeddingStudio team advises that the generated queries **were not thoroughly curated**; they will be curated once we finish our product-market-fit stage.
We also used GPT-4 Turbo to generate the search queries and their parsed versions. The main principles were:

* If the passed schema doesn't contain a possible filter, neither the query itself nor a possible filter is generated.
* If a selected combination of representations contains an enumeration, we ask the model to map values in the search query and the parsed version.
* If a selected combination of representations contains a pattern, we ask GPT-4 Turbo to stay aligned with that pattern.
## Train / test splitting principles
As we are trying to fine-tune an LLM to follow zero-shot query parsing instructions, we want to test:
* Ability to work well with unseen domain
* Ability to work well with unseen filters
* Ability to work well with unseen queries
For these purposes we:
1. We put 5 categories into the test split, completely separated from train: `Telecommunication Companies, Legal Services, Enterprise Software Development, Artificial Intelligence and Machine Learning, Documentation and Knowledge Sharing`.
2. From each category appearing in train, we also put aside / removed one filter and the queries related to it.
3. We selected 5% of the remaining queries and put them into the test split.
## How to use it
```python
from datasets import load_dataset
search_queries = load_dataset('EmbeddingStudio/synthetic-search-queries')
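# The "Parsed" column is a sequence of JSON strings (see "Column descriptions"
# above). A minimal decoding sketch; the split name "train_queries" comes from
# this dataset's split configuration.
import json

row = search_queries["train_queries"][0]
parsed = [json.loads(value) for value in row["Parsed"]]
print(row["Query"], parsed)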
```
| EmbeddingStudio/synthetic-search-queries | [
"task_categories:token-classification",
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:en",
"license:apache-2.0",
"synthetic",
"search-queries",
"e-commerce",
"online-shops",
"travel-agencies",
"educational-institutions-ai",
"job-recruitment-automation",
"banking-digital-services",
"investment-ai-analysis",
"insurance-tech-innovation",
"financial-advisory-ai",
"credit-services-automation",
"payment-processing-tech",
"mortgage-tech-solutions",
"real-estate-digital-solutions",
"taxation-tech-services",
"risk-management-ai",
"compliance-automation",
"digital-banking-innovation",
"mobile-banking-tech",
"online-retail-tech",
"offline-retail-automation",
"automotive-dealership-tech",
"restaurant-automation-tech",
"food-delivery-ai",
"entertainment-platforms-ai",
"media-platforms-tech",
"government-services-automation",
"travel-tech-innovation",
"consumer-analytics-ai",
"logistics-tech-automation",
"supply-chain-ai",
"customer-support-tech",
"market-research-ai",
"mobile-app-dev-tech",
"game-dev-ai",
"cloud-computing-services",
"data-analytics-ai",
"business-intelligence-ai",
"cybersecurity-software-tech",
"ui-ux-design-ai",
"iot-development-tech",
"project-management-tools-ai",
"version-control-systems-tech",
"ci-cd-automation",
"issue-tracking-ai",
"bug-reporting-automation",
"collaborative-dev-environments",
"team-communication-tech",
"task-time-management-ai",
"customer-feedback-ai",
"cloud-based-dev-tech",
"image-stock-platforms-ai",
"video-hosting-tech",
"social-networks-ai",
"professional-social-networks-ai",
"dating-apps-tech",
"region:us"
] | 2024-01-15T13:54:06+00:00 | {"language": ["en"], "license": "apache-2.0", "size_categories": ["10K<n<100K"], "task_categories": ["token-classification", "text-generation"], "pretty_name": "Synthetic Search Queries", "dataset_info": {"features": [{"name": "Query", "dtype": "string"}, {"name": "category", "dtype": "string"}, {"name": "Parsed", "sequence": "string"}], "splits": [{"name": "train_queries", "num_bytes": 2061432, "num_examples": 10700}, {"name": "test_queries", "num_bytes": 737413, "num_examples": 3608}], "download_size": 741810, "dataset_size": 2798845}, "configs": [{"config_name": "default", "data_files": [{"split": "train_queries", "path": "data/train_queries-*"}, {"split": "test_queries", "path": "data/test_queries-*"}]}], "tags": ["synthetic", "search-queries", "e-commerce", "online-shops", "travel-agencies", "educational-institutions-ai", "job-recruitment-automation", "banking-digital-services", "investment-ai-analysis", "insurance-tech-innovation", "financial-advisory-ai", "credit-services-automation", "payment-processing-tech", "mortgage-tech-solutions", "real-estate-digital-solutions", "taxation-tech-services", "risk-management-ai", "compliance-automation", "digital-banking-innovation", "mobile-banking-tech", "online-retail-tech", "offline-retail-automation", "automotive-dealership-tech", "restaurant-automation-tech", "food-delivery-ai", "entertainment-platforms-ai", "media-platforms-tech", "government-services-automation", "travel-tech-innovation", "consumer-analytics-ai", "logistics-tech-automation", "supply-chain-ai", "customer-support-tech", "market-research-ai", "mobile-app-dev-tech", "game-dev-ai", "cloud-computing-services", "data-analytics-ai", "business-intelligence-ai", "cybersecurity-software-tech", "ui-ux-design-ai", "iot-development-tech", "project-management-tools-ai", "version-control-systems-tech", "ci-cd-automation", "issue-tracking-ai", "bug-reporting-automation", "collaborative-dev-environments", "team-communication-tech", "task-time-management-ai", "customer-feedback-ai", "cloud-based-dev-tech", "image-stock-platforms-ai", "video-hosting-tech", "social-networks-ai", "professional-social-networks-ai", "dating-apps-tech"]} | 2024-02-01T11:45:56+00:00 | [] | [
"en"
] |
a9249c5d502b8ee93d6b4c260e76ff52c7443d88 |
# Baidu ULTR Dataset - Tencent BERT-12l-12h
Query-document vectors and clicks for a subset of the [Baidu Unbiased Learning to Rank](https://arxiv.org/abs/2207.03051) dataset.
This dataset uses the pretrained [BERT cross-encoder (Bert_Layer12_Head12) from Tencent](https://github.com/lixsh6/Tencent_wsdm_cup2023/tree/main/pytorch_unbias) published as part of the WSDM cup 2023 to compute query-document vectors (768 dims).
## Setup
1. Install huggingface [datasets](https://huggingface.co/docs/datasets/installation)
2. Install [pandas](https://github.com/pandas-dev/pandas) and [pyarrow](https://arrow.apache.org/docs/python/index.html): `pip install pandas pyarrow`
3. Optionally, you might need to install a [pyarrow-hotfix](https://github.com/pitrou/pyarrow-hotfix) if you cannot install `pyarrow >= 14.0.1`
4. You can now use the dataset as described below.
## Load train / test click dataset:
```Python
from datasets import load_dataset
dataset = load_dataset(
"philipphager/baidu-ultr_tencent-mlm-ctr",
name="clicks",
split="train", # ["train", "test"]
cache_dir="~/.cache/huggingface",
)
dataset.set_format("torch") # [None, "numpy", "torch", "tensorflow", "pandas", "arrow"]
```
## Load expert annotations:
```Python
from datasets import load_dataset
dataset = load_dataset(
"philipphager/baidu-ultr_tencent-mlm-ctr",
name="annotations",
split="test",
cache_dir="~/.cache/huggingface",
)
dataset.set_format("torch") # [None, "numpy", "torch", "tensorflow", "pandas", "arrow"]
```
## Available features
Each row of the click / annotation dataset contains the following attributes. Use a custom `collate_fn` to select specific features (see below):
### Click dataset
| name | dtype | description |
|------------------------------|----------------|-------------|
| query_id | string | Baidu query_id |
| query_md5 | string | MD5 hash of query text |
| url_md5                      | List[string]   | MD5 hash of document URL, most reliable document identifier |
| text_md5 | List[string] | MD5 hash of document title and abstract |
| query_document_embedding | Tensor[float16]| BERT CLS token |
| click | Tensor[int32] | Click / no click on a document |
| n | int32 | Number of documents for current query, useful for padding |
| position | Tensor[int32] | Position in ranking (does not always match original item position) |
| media_type                   | Tensor[int32]  | Document type (label encoding recommended as IDs do not occupy a continuous integer range) |
| displayed_time               | Tensor[float32]| Seconds a document was displayed on the screen |
| serp_height                  | Tensor[int32]  | Pixel height of a document on the screen |
| slipoff_count_after_click    | Tensor[int32]  | Number of times a document was scrolled off the screen after previously clicking on it |
### Expert annotation dataset
| name | dtype | description |
|------------------------------|----------------|-------------|
| query_id | string | Baidu query_id |
| query_md5 | string | MD5 hash of query text |
| text_md5 | List[string] | MD5 hash of document title and abstract |
| query_document_embedding | Tensor[float16]| BERT CLS token |
| label | Tensor[int32] | Relevance judgment on a scale from 0 (bad) to 4 (excellent) |
| n | int32 | Number of documents for current query, useful for padding |
| frequency_bucket | int32 | Monthly frequency of query (bucket) from 0 (high frequency) to 9 (low frequency) |
## Example PyTorch collate function
Each sample in the dataset is a single query with multiple documents.
The following example demonstrates how to create a batch containing multiple queries with varying numbers of documents by applying padding:
```Python
import torch
from typing import List
from collections import defaultdict
from torch.nn.utils.rnn import pad_sequence
from torch.utils.data import DataLoader
def collate_clicks(samples: List):
    # Collect each feature across all samples in the batch.
    batch = defaultdict(list)

    for sample in samples:
        batch["query_document_embedding"].append(sample["query_document_embedding"])
        batch["position"].append(sample["position"])
        batch["click"].append(sample["click"])
        batch["n"].append(sample["n"])

    # Pad ragged per-query tensors to the longest query in the batch.
    return {
        "query_document_embedding": pad_sequence(
            batch["query_document_embedding"], batch_first=True
        ),
        "position": pad_sequence(batch["position"], batch_first=True),
        "click": pad_sequence(batch["click"], batch_first=True),
        "n": torch.tensor(batch["n"]),
    }
loader = DataLoader(dataset, collate_fn=collate_clicks, batch_size=16)
```
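The same padding pattern carries over to the annotation config. A minimal sketch, reusing the imports above and assuming the feature names from the expert annotation table:

```python
def collate_annotations(samples: List):
    batch = defaultdict(list)

    for sample in samples:
        batch["query_document_embedding"].append(sample["query_document_embedding"])
        batch["label"].append(sample["label"])
        batch["n"].append(sample["n"])

    return {
        "query_document_embedding": pad_sequence(
            batch["query_document_embedding"], batch_first=True
        ),
        "label": pad_sequence(batch["label"], batch_first=True),
        "n": torch.tensor(batch["n"]),
    }
```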
| philipphager/baidu-ultr_tencent-mlm-ctr | [
"license:cc-by-nc-4.0",
"arxiv:2207.03051",
"region:us"
] | 2024-01-15T13:57:12+00:00 | {"license": "cc-by-nc-4.0", "viewer": false} | 2024-01-21T14:28:19+00:00 | [
"2207.03051"
] | [] |
bf8ca8ddc5b7da4f7004720ddf99bbe0483480e6 |
https://www.kaggle.com/datasets/kevinmorgado/spanish-news-classification | jinaai/spanish_news_clustering | [
"region:eu"
] | 2024-01-15T13:59:22+00:00 | {"dataset_info": {"features": [{"name": "sentences", "sequence": "string"}, {"name": "labels", "sequence": "string"}], "splits": [{"name": "test", "num_bytes": 3985186, "num_examples": 1}], "download_size": 2194770, "dataset_size": 3985186}, "configs": [{"config_name": "default", "data_files": [{"split": "test", "path": "data/test-*"}]}]} | 2024-01-17T08:06:52+00:00 | [] | [] |
628c6361566794e87db9931abeb8d4a1046af5a5 |
The year, volume, part, and page fields are taken from the XML metadata and refer to the catalog publication. Each entry contains a registration number (`regNum`) and a registration date (`regDate`), which were copied verbatim; the XML entry also standardizes these into the `regnum` and `date` attributes (see the illustrative entry below). | baber/NYPL_entries | [
"license:cc0-1.0",
"region:us"
] | 2024-01-15T14:17:33+00:00 | {"license": "cc0-1.0"} | 2024-01-15T22:16:59+00:00 | [] | [] |
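To illustrate the field layout described above, a hypothetical entry might look as follows; all values are invented for illustration:

```python
entry = {
    "year": 1923, "volume": 20, "part": 1, "page": 42,  # catalog publication
    "regNum": "A759023",        # registration number, copied verbatim
    "regDate": "Jun. 4, 1923",  # registration date, copied verbatim
    "regnum": "A759023",        # standardized by the XML entry
    "date": "1923-06-04",       # standardized by the XML entry
}
```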
b01404bc9befa32ec61363ecf1916ffe823a9a0d |
# Economicus dataset
## Introduction
Economicus is a synthetic dataset of graduate-level economics questions and answers generated by Mixtral 8x7B. It is a single-turn dataset of almost 13 million tokens.
## Dataset structure
```
{
"id": 27,
"source_id": "modern_economic_growth",
"method": "grounded_qa_generation",
"conversations": [
{
"from": "human",
"value": "How do institutional differences affect economic growth?"
},
{
"from": "gpt",
"value": "Institutional differences affect economic growth by shaping the incentives individuals and firms have for accumulation, investment, and trade. Institutions can either promote or hinder economic growth, depending on the laws and regulations under which individuals and firms function."
}
]
}
```
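Since the `conversations` field follows the ShareGPT convention shown above, an entry can be flattened into (question, answer) pairs; the role names `human`/`gpt` come from the structure example:

```python
def to_qa_pairs(entry: dict) -> list[tuple[str, str]]:
    # This dataset is single turn, so each entry yields exactly one pair.
    turns = entry["conversations"]
    return [
        (turns[i]["value"], turns[i + 1]["value"])
        for i in range(0, len(turns) - 1, 2)
        if turns[i]["from"] == "human" and turns[i + 1]["from"] == "gpt"
    ]
```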
## Creation process
Economicus Q-A pairs are based on text extracts from these sources:
* Economic Growth 2nd ed (Barro & Sala-i-Martin)
* Recursive Macroeconomic Theory (Sargent & Ljungqvist)
* Advanced International Trade: Theory and Evidence (Feenstra)
* Advanced Macroeconomics 5th ed (Romer)
* Microeconomic Foundations: Choice and Competitive Markets (Kreps)
* Mostly Harmless Econometrics: An Empiricist's Companion (Angrist & Pischke)
* Microeconomic Theory (Mas-Colell, Whinston & Green)
* Introduction to Modern Economic Growth (Acemoglu)
* Econometric Analysis of Cross Section and Panel Data (Wooldridge)
* Econometrics (Hayashi)
* The Economics of Growth (Aghion & Howitt)
* Interest and Prices (Woodford)
* Labor Markets and Business Cycles (Shimer)
* Monetary Theory and Policy (Walsh)
* Open Economy Macroeconomics (Uribe & Schmitt-Grohé)
* Mathematical Methods and Models for Economists (de la Fuente)
* A Course in Game Theory (Osborne & Rubinstein)
* A First Course in Optimization Theory (Sundaram)
* Lectures on Macroeconomics (Blanchard & Fischer)
* Fundamental Methods in Mathematical Economics (Chiang & Wainwright)
* Dynamic Economics: Quantitative Methods and Applications (Adda & Cooper)
They were parsed using [marker](https://github.com/VikParuchuri/marker), which produces nicely formatted sections. These sections were joined to form groups of at least 2048 tokens (minus a 15% tolerance).
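A sketch of that grouping step under stated assumptions: sections are joined greedily until the token budget is met, and a whitespace split stands in for whatever tokenizer was actually used:

```python
def group_sections(sections: list[str], min_tokens: int = 2048, tolerance: float = 0.15) -> list[str]:
    threshold = min_tokens * (1 - tolerance)  # the 15% tolerance described above
    groups, current, count = [], [], 0

    for section in sections:
        current.append(section)
        count += len(section.split())  # crude token proxy, an assumption
        if count >= threshold:
            groups.append("\n\n".join(current))
            current, count = [], 0

    if current:  # keep any trailing sections as a final group
        groups.append("\n\n".join(current))
    return groups
```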
Each of these groups was used in the following prompt.
```
You are building a compendium of statements or questions for Economics PhD students to solve that will be used in tests and exams.
Generate up to {n_questions} diverse questions. Use the book extract provided at the end of this prompt as a reference.
Make questions appropriate for graduate-level students. Be varied with question formats.
Students answering these questions will not have access to the book that contains the extract, so do not mention anything like page numbers, section numbers or titles, chapter numbers or titles, equation numbers, theorem numbers, proposition numbers or exercise numbers.
Conform to this JSON schema:
[{{"input": "a statement or question"}}, {{"input": "a statement or question"}}, ...]
You can only output valid JSON. The only valid key is "input".
### Extract (from {title} by {author}):
{extract}
```
Then, each question is sent to the model along with the extract from where it was generated:
```
Below is a statement or question for an economics PhD student. Please provide a detailed and complete answer to the question. The answer should be long and elaborate, and should include as much information as possible relating to the input, including your own knowledge. Use LaTeX notation for equations and symbols.
Do not mention anything specific to the extract. Do not talk about the extract. Do not mention anything like page numbers, section numbers or titles, chapter numbers or titles, equation numbers, theorem numbers, proposition numbers or exercise numbers.
The question is based on the following extract from the book {title} by {author}:
### Extract:
{extract}
### Question:
{question}
```
The dataset is preprocessed to remove as many specific references to the extract as possible ("Explain figure 7.1" for example), because my prompt-fu is not great. Also, instances of `"the text"` (like "Explain the model A in the text.") are replaced by `"{title} by {author}"`. | rxavier/economicus | [
"task_categories:text-generation",
"task_categories:question-answering",
"size_categories:10K<n<100K",
"language:en",
"license:mit",
"economics",
"economy",
"sharegpt",
"region:us"
] | 2024-01-15T14:17:54+00:00 | {"language": ["en"], "license": "mit", "size_categories": ["10K<n<100K"], "task_categories": ["text-generation", "question-answering"], "pretty_name": "economicus", "tags": ["economics", "economy", "sharegpt"]} | 2024-02-12T05:35:07+00:00 | [] | [
"en"
] | TAGS
#task_categories-text-generation #task_categories-question-answering #size_categories-10K<n<100K #language-English #license-mit #economics #economy #sharegpt #region-us
|
# Economicus dataset
## Introduction
Economicus is a synthetic dataset of graduate-level economics questions and answers generated by Mixtral 8x7B. This is a single-turn dataset of almost 13 million tokens.
## Dataset structure
## Creation process
Economicus q-a pairs are based on text extracts from these sources:
* Economic Growth 2nd ed (Barro & Sala-i-Martin)
* Recursive Macroeconomic Theory (Sargent & Ljungqvist)
* Advanced International Trade: Theory and Evidence (Feenstra)
* Advanced Macroeconomics 5th ed (Romer)
* Microeconomic Foundations: Choice and Competitive Markets (Kreps)
* Mostly Harmless Econometrics: An Empiricist's Companion (Angrist & Pischke)
* Microeconomic Theory (Mas-Colell, Whinston & Green)
* Introduction to Modern Economic Growth (Acemoglu)
* Econometric Analysis of Cross Section and Panel Data (Wooldridge)
* Econometrics (Hayashi)
* The Economics of Growth (Aghion & Howitt)
* Interest and Prices (Woodford)
* Labor Markets and Business Cycles (Shimer)
* Monetary Theory and Policy (Walsh)
* Open Economy Macroeconomics (Uribe & Schmitt-Grohé)
* Mathematical Methods and Models for Economists (de la Fuente)
* A Course in Game Theory (Osborne & Rubinstein)
* A First Course in Optimization Theory (Sundaram)
* Lectures on Macroeconomics (Blanchard & Fischer)
* Fundamental Methods in Mathematical Economics (Chiang & Wainwright)
* Dynamic Economics: Quantitative Methods and Applications (Adda & Cooper)
They were parsed using marker, which creates nicely formatted sections. These sections were joined to form groups of minimum 2048 tokens (minus a 15% tolerance).
Each of these groups was used in the following prompt.
Then, each question is sent to the model along with the extract from where it was generated:
The dataset is preprocessed to remove as many specific references to the extract as possible ("Explain figure 7.1" for example), because my prompt-fu is not great. Also, instances of '"the text"' (like "Explain the model A in the text.") are replaced by '"{title} by {author}"'. | [
"# Economicus dataset",
"## Introduction\n\nEconomicus is a synthetic dataset of graduate-level economics questions and answers generated by Mixtral 8x7B. This is a single turn dataset of almost 13 million tokens.",
"## Dataset structure",
"## Creation process\n\nEconomicus q-a pairs are based on text extracts from these sources:\n\n* Economic Growth 2nd ed (Barro & Sala-i-Martin)\n* Recursive Macroeconomic Theory (Sargent & Ljungqvist)\n* Advanced International Trade: Theory and Evidence (Feenstra)\n* Advanced Macroeconomics 5th ed (Romer)\n* Microeconomic Foundations: Choice and Competitive Markets (Kreps)\n* Mostly Harmless Econometrics: An Empiricist's Companion (Angrist & Pischke)\n* Microeconomic Theory (Mas-Colell, Whinston & Green)\n* Introduction to Modern Economic Growth (Acemoglu)\n* Econometric Analysis of Cross Section and Panel Data (Wooldridge)\n* Econometrics (Hayashi)\n* The Economics of Growth (Aghion & Howitt)\n* Interest and Prices (Woodford)\n* Labor Markets and Business Cycles (Shimer)\n* Monetary Theory and Policy (Walsh)\n* Open Economy Macroeconomics (Uribe & Schmitt-Grohé)\n* Mathematical Methods and Models for Economists (de la Fuente)\n* A Course in Game Theory (Osborne & Rubinstein)\n* A First Course in Optimization Theory (Sundaram)\n* Lectures on Macroeconomics (Blanchard & Fischer)\n* Fundamental Methods in Mathematical Economics (Chiang & Wainwright)\n* Dynamic Economics: Quantitative Methods and Applications (Adda & Cooper)\n \nThey were parsed using marker, which creates nicely formatted sections. These sections were joined to form groups of minimum 2048 tokens (minus a 15% tolerance).\n\nEach of these groups was used in the following prompt.\n\n\n\nThen, each question is sent to the model along with the extract from where it was generated:\n\n\n\nThe dataset is preprocessed to remove as many specific references to the extract as possible (\"Explain figure 7.1\" for example), because my prompt-fu is not great. Also, instances of '\"the text\"' (like \"Explain the model A in the text.\") instances are replaced by '\"{title} by {author}\"'."
] | [
"TAGS\n#task_categories-text-generation #task_categories-question-answering #size_categories-10K<n<100K #language-English #license-mit #economics #economy #sharegpt #region-us \n",
"# Economicus dataset",
"## Introduction\n\nEconomicus is a synthetic dataset of graduate-level economics questions and answers generated by Mixtral 8x7B. This is a single turn dataset of almost 13 million tokens.",
"## Dataset structure",
"## Creation process\n\nEconomicus q-a pairs are based on text extracts from these sources:\n\n* Economic Growth 2nd ed (Barro & Sala-i-Martin)\n* Recursive Macroeconomic Theory (Sargent & Ljungqvist)\n* Advanced International Trade: Theory and Evidence (Feenstra)\n* Advanced Macroeconomics 5th ed (Romer)\n* Microeconomic Foundations: Choice and Competitive Markets (Kreps)\n* Mostly Harmless Econometrics: An Empiricist's Companion (Angrist & Pischke)\n* Microeconomic Theory (Mas-Colell, Whinston & Green)\n* Introduction to Modern Economic Growth (Acemoglu)\n* Econometric Analysis of Cross Section and Panel Data (Wooldridge)\n* Econometrics (Hayashi)\n* The Economics of Growth (Aghion & Howitt)\n* Interest and Prices (Woodford)\n* Labor Markets and Business Cycles (Shimer)\n* Monetary Theory and Policy (Walsh)\n* Open Economy Macroeconomics (Uribe & Schmitt-Grohé)\n* Mathematical Methods and Models for Economists (de la Fuente)\n* A Course in Game Theory (Osborne & Rubinstein)\n* A First Course in Optimization Theory (Sundaram)\n* Lectures on Macroeconomics (Blanchard & Fischer)\n* Fundamental Methods in Mathematical Economics (Chiang & Wainwright)\n* Dynamic Economics: Quantitative Methods and Applications (Adda & Cooper)\n \nThey were parsed using marker, which creates nicely formatted sections. These sections were joined to form groups of minimum 2048 tokens (minus a 15% tolerance).\n\nEach of these groups was used in the following prompt.\n\n\n\nThen, each question is sent to the model along with the extract from where it was generated:\n\n\n\nThe dataset is preprocessed to remove as many specific references to the extract as possible (\"Explain figure 7.1\" for example), because my prompt-fu is not great. Also, instances of '\"the text\"' (like \"Explain the model A in the text.\") instances are replaced by '\"{title} by {author}\"'."
] | [
60,
5,
46,
4,
508
] | [
"passage: TAGS\n#task_categories-text-generation #task_categories-question-answering #size_categories-10K<n<100K #language-English #license-mit #economics #economy #sharegpt #region-us \n# Economicus dataset## Introduction\n\nEconomicus is a synthetic dataset of graduate-level economics questions and answers generated by Mixtral 8x7B. This is a single turn dataset of almost 13 million tokens.## Dataset structure"
] |
5753fd13b0ac0336ac494ef9c1d88a34a8b51428 |
This dataset offers a French translation of the 12k [Intel/orca_dpo_pairs](https://huggingface.co/datasets/Intel/orca_dpo_pairs) DPO pairs made from [Open-Orca/OpenOrca](https://huggingface.co/datasets/Open-Orca/OpenOrca). | ntnq/french_orca_dpo_pairs | [
"size_categories:10K<n<100K",
"language:fr",
"license:apache-2.0",
"region:us"
] | 2024-01-15T14:22:18+00:00 | {"language": ["fr"], "license": "apache-2.0", "size_categories": ["10K<n<100K"]} | 2024-01-25T20:50:34+00:00 | [] | [
"fr"
] | TAGS
#size_categories-10K<n<100K #language-French #license-apache-2.0 #region-us
|
This dataset offers a French translation of the 12k Intel/orca_dpo_pairs DPO pairs made from Open-Orca/OpenOrca. | [] | [
"TAGS\n#size_categories-10K<n<100K #language-French #license-apache-2.0 #region-us \n"
] | [
32
] | [
"passage: TAGS\n#size_categories-10K<n<100K #language-French #license-apache-2.0 #region-us \n"
] |
d396584219056c40bd20069aae2de0e5f6637709 |
The text of all the articles from Logic Magazine issues 1-18, in a prompt/response format, in JSONL. | bentarnoff/logic_magazine_jsonl | [
"region:us"
] | 2024-01-15T14:35:29+00:00 | {"dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "completion", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1929516.5, "num_examples": 1058}, {"name": "test", "num_bytes": 1929516.5, "num_examples": 1058}], "download_size": 2521650, "dataset_size": 3859033.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}]} | 2024-01-16T15:58:09+00:00 | [] | [] | TAGS
#region-us
|
The text of all the articles from Logic Magazine issues 1-18, in a prompt/response format, in JSONL. | [] | [
"TAGS\n#region-us \n"
] | [
6
] | [
"passage: TAGS\n#region-us \n"
] |
d00d7e9a1bf6a005edd85fd27e0fd5a74020d0bf |
# RePro: A Benchmark Dataset for Opinion Mining for Brazilian Portuguese
RePro, which stands for "REview of PROducts," is a benchmark dataset for opinion mining in Brazilian Portuguese. It consists of 10,000 human-annotated e-commerce product reviews, each labeled with sentiment and topic information. The dataset was created based on data from one of the largest Brazilian e-commerce platforms, which produced the B2W-Reviews01 dataset (https://github.com/americanas-tech/b2w-reviews01). The RePro dataset aims to provide a valuable resource for tasks related to sentiment analysis and topic modeling in the context of Brazilian Portuguese e-commerce product reviews. It is designed to serve as a benchmark for future research in natural language processing and related fields. | lucasnil/repro | [
"license:cc-by-4.0",
"region:us"
] | 2024-01-15T15:03:28+00:00 | {"license": "cc-by-4.0"} | 2024-01-15T15:06:31+00:00 | [] | [] | TAGS
#license-cc-by-4.0 #region-us
|
# RePro: A Benchmark Dataset for Opinion Mining for Brazilian Portuguese
RePro, which stands for "REview of PROducts," is a benchmark dataset for opinion mining in Brazilian Portuguese. It consists of 10,000 human-annotated e-commerce product reviews, each labeled with sentiment and topic information. The dataset was created based on data from one of the largest Brazilian e-commerce platforms, which produced the B2W-Reviews01 dataset (URL The RePro dataset aims to provide a valuable resource for tasks related to sentiment analysis and topic modeling in the context of Brazilian Portuguese e-commerce product reviews. It is designed to serve as a benchmark for future research in natural language processing and related fields. | [
"# RePro: A Benchmark Dataset for Opinion Mining for Brazilian Portuguese\n\nRePro, which stands for \"REview of PROducts,\" is a benchmark dataset for opinion mining in Brazilian Portuguese. It consists of 10,000 humanly annotated e-commerce product reviews, each labeled with sentiment and topic information. The dataset was created based on data from one of the largest Brazilian e-commerce platforms, which produced the B2W-Reviews01 dataset (URL The RePro dataset aims to provide a valuable resource for tasks related to sentiment analysis and topic modeling in the context of Brazilian Portuguese e-commerce product reviews. It is designed to serve as a benchmark for future research in natural language processing and related fields."
] | [
"TAGS\n#license-cc-by-4.0 #region-us \n",
"# RePro: A Benchmark Dataset for Opinion Mining for Brazilian Portuguese\n\nRePro, which stands for \"REview of PROducts,\" is a benchmark dataset for opinion mining in Brazilian Portuguese. It consists of 10,000 humanly annotated e-commerce product reviews, each labeled with sentiment and topic information. The dataset was created based on data from one of the largest Brazilian e-commerce platforms, which produced the B2W-Reviews01 dataset (URL The RePro dataset aims to provide a valuable resource for tasks related to sentiment analysis and topic modeling in the context of Brazilian Portuguese e-commerce product reviews. It is designed to serve as a benchmark for future research in natural language processing and related fields."
] | [
15,
171
] | [
"passage: TAGS\n#license-cc-by-4.0 #region-us \n# RePro: A Benchmark Dataset for Opinion Mining for Brazilian Portuguese\n\nRePro, which stands for \"REview of PROducts,\" is a benchmark dataset for opinion mining in Brazilian Portuguese. It consists of 10,000 humanly annotated e-commerce product reviews, each labeled with sentiment and topic information. The dataset was created based on data from one of the largest Brazilian e-commerce platforms, which produced the B2W-Reviews01 dataset (URL The RePro dataset aims to provide a valuable resource for tasks related to sentiment analysis and topic modeling in the context of Brazilian Portuguese e-commerce product reviews. It is designed to serve as a benchmark for future research in natural language processing and related fields."
] |
af457b044f09558a8ab18d3c5479e13c9e83f21a |
## Summary
A very small dataset of input recipes and output recipe gantt charts in TSV format where each column represents a method step and each row represents a single ingredient. Cells of the output TSV are populated with X if that ingredient is used in that step.
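As a purely hypothetical illustration of that output format (tab-separated, with method steps as columns and ingredients as rows; not an actual record from the dataset):

```
	chop vegetables	simmer sauce	combine and serve
onion	X	X	
tomatoes		X	X
pasta			X
```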
It was used to fine-tune [pocasrocas/recipe-gantt-v0.1](https://huggingface.co/pocasrocas/recipe-gantt-v0.1).
## Format
It follows the [alpaca](https://github.com/tatsu-lab/stanford_alpaca?tab=readme-ov-file#data-release) instruction/input/response format, shared here in .jsonl format for easy use with libraries such as [axolotl](https://github.com/OpenAccess-AI-Collective/axolotl).
## Development process
1. Used the [openrecipes](https://github.com/fictivekin/openrecipes) dataset to get a few hundred recipe URLs
1. Used [recipe-scrapers](https://github.com/hhursev/recipe-scrapers) library to extract the ingredients and method steps when given a recipe URL ([code](https://github.com/jbremz/recipe-gantt/blob/1c37b115b155a128e0765040197c5783b5a91ff3/notebooks/001-get-data/02-save-recipes.ipynb)).
1. A custom GPT Assistant was written to generate the desired gantt charts as TSV files (albeit slowly and expensively) from simplified Ingredients, Method formatted recipes ([code](https://github.com/jbremz/recipe-gantt/blob/1c37b115b155a128e0765040197c5783b5a91ff3/notebooks/001-get-data/03-query-gpt4.ipynb)). A publicly accessible GPT version of the same assistant is [here](https://chat.openai.com/g/g-VG5s6fStY-recipe-gantt).
1. Did a small amount of manual tweaking of the outputs to improve data quality before I lost my mind and moved on ([code](https://github.com/jbremz/recipe-gantt/blob/1c37b115b155a128e0765040197c5783b5a91ff3/notebooks/001-get-data/04-check-results.ipynb)).
Full details of dataset creation can be found [here](https://github.com/jbremz/recipe-gantt/tree/3f153a23f5aed15236631e322064d56c737b151c/notebooks/001-get-data).
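As a rough sketch of step 2 (not the project's exact code; the URL is a placeholder), extracting ingredients and method steps with `recipe-scrapers` looks roughly like this:

```python
from recipe_scrapers import scrape_me

# Scrape one recipe page; the URL here is a placeholder.
scraper = scrape_me("https://example.com/some-recipe")

ingredients = scraper.ingredients()  # list of ingredient strings
# instructions() returns newline-separated text; split it into method steps.
steps = [s for s in scraper.instructions().split("\n") if s.strip()]

print(len(ingredients), "ingredients across", len(steps), "steps")
```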
## Limitations
- **Size:** I stopped at 288 examples because I didn't want to spend any more money on OpenAI credits (~£20). Otherwise, it would be very straightforward to scale this dataset.
- **Errors:** since the outputs were generated by GPT-4, they contain errors, some of which I found; I only manually checked ~30 examples before deeming that the accuracy was sufficient for my needs.
- You will notice that the Instructions are all identical. I made this decision as the dataset was so small - I was keen to make it as easy as possible for the model to understand the task when finetuning. It is redundant information though and if I had scaled this dataset larger I would have removed the `input` field (as is valid with alpaca) and moved it to the `instruction` field, replacing the boilerplate prompt. | pocasrocas/recipe-gantt | [
"task_categories:text2text-generation",
"language:en",
"license:mit",
"art",
"chemistry",
"food",
"recipes",
"region:us"
] | 2024-01-15T15:09:31+00:00 | {"language": ["en"], "license": "mit", "task_categories": ["text2text-generation"], "tags": ["art", "chemistry", "food", "recipes"]} | 2024-01-15T16:00:55+00:00 | [] | [
"en"
] | TAGS
#task_categories-text2text-generation #language-English #license-mit #art #chemistry #food #recipes #region-us
|
## Summary
A very small dataset of input recipes and output recipe gantt charts in TSV format where each column represents a method step and each row represents a single ingredient. Cells of the output TSV are populated with X if that ingredient is used in that step.
It was used to fine-tune pocasrocas/recipe-gantt-v0.1.
## Format
It follows the alpaca instruction/input/response format, shared here in .jsonl format for easy use with libraries such as axolotl.
## Development process
1. Used the openrecipes dataset to get a few hundred recipe URLs
1. Used recipe-scrapers library to extract the ingredients and method steps when given a recipe URL (code).
1. A custom GPT Assistant was written to generate the desired gantt charts as TSV files (albeit slowly and expensively) from simplified Ingredients, Method formatted recipes (code). A publicly accessible GPT version of the same assistant is here.
1. Did a small amount of manual tweaking of the outputs to improve data quality before I lost my mind and moved on (code).
Full details of dataset creation can be found here.
## Limitations
- Size: I stopped at 288 examples because I didn't want to spend any more money on OpenAI credits (~£20). Otherwise, it would be very straightforward to scale this dataset.
- Errors: since the outputs were generated by GPT-4, they contain errors, some of which I found; I only manually checked ~30 examples before deeming that the accuracy was sufficient for my needs.
- You will notice that the Instructions are all identical. I made this decision as the dataset was so small - I was keen to make it as easy as possible for the model to understand the task when finetuning. It is redundant information though and if I had scaled this dataset larger I would have removed the 'input' field (as is valid with alpaca) and moved it to the 'instruction' field, replacing the boilerplate prompt. | [
"## Summary\n\nA very small dataset of input recipes and output recipe gantt charts in TSV format where each column represents a method step and each row represents a single ingredient. Cells of the output TSV are populated with X if that ingredient is used in that step.\n\nIt was used to fine-tune pocasrocas/recipe-gantt-v0.1.",
"## Format\n\nIt follows the alpaca instruction/input/response format, shared here in .jsonl format for easy use with libraries such as axolotl.",
"## Development process\n\n1. Used the openrecipes dataset to get a few hundred recipe URLs\n1. Used recipe-scrapers library to extract the ingredients and method steps when given a recipe URL (code).\n1. A custom GPT Assistant was written to generate the desired gantt charts as TSV files (albeit slowly and expensively) from simplified Ingredients, Method formatted recipes (code). A publicly accessible GPT version of the same assistant is here.\n1. Did a small amount of manual tweaking of the outputs to improve data quality before I lost my mind and moved on (code).\n\nFull details of dataset creation can be found here.",
"## Limitations\n\n- Size: I stopped at 288 examples because I didn't want to spend any more money on OpenAI credits (~£20). Otherwise, it would be very striaghtforward to scale this dataset.\n- Errors: being generated by GPT-4 there are errors in the outputs that I found, I only manually checked ~30 examples before deeming that the accuracy was sufficient for my needs.\n- You will notice that the Instructions are all identical. I made this decision as the dataset was so small - I was keen to make it as easy as possible for the model to understand the task when finetuning. It is redundant information though and if I had scaled this dataset larger I would have removed the 'input' field (as is valid with alpaca) and moved it to the 'instruction' field, replacing the boilerplate prompt."
] | [
"TAGS\n#task_categories-text2text-generation #language-English #license-mit #art #chemistry #food #recipes #region-us \n",
"## Summary\n\nA very small dataset of input recipes and output recipe gantt charts in TSV format where each column represents a method step and each row represents a single ingredient. Cells of the output TSV are populated with X if that ingredient is used in that step.\n\nIt was used to fine-tune pocasrocas/recipe-gantt-v0.1.",
"## Format\n\nIt follows the alpaca instruction/input/response format, shared here in .jsonl format for easy use with libraries such as axolotl.",
"## Development process\n\n1. Used the openrecipes dataset to get a few hundred recipe URLs\n1. Used recipe-scrapers library to extract the ingredients and method steps when given a recipe URL (code).\n1. A custom GPT Assistant was written to generate the desired gantt charts as TSV files (albeit slowly and expensively) from simplified Ingredients, Method formatted recipes (code). A publicly accessible GPT version of the same assistant is here.\n1. Did a small amount of manual tweaking of the outputs to improve data quality before I lost my mind and moved on (code).\n\nFull details of dataset creation can be found here.",
"## Limitations\n\n- Size: I stopped at 288 examples because I didn't want to spend any more money on OpenAI credits (~£20). Otherwise, it would be very striaghtforward to scale this dataset.\n- Errors: being generated by GPT-4 there are errors in the outputs that I found, I only manually checked ~30 examples before deeming that the accuracy was sufficient for my needs.\n- You will notice that the Instructions are all identical. I made this decision as the dataset was so small - I was keen to make it as easy as possible for the model to understand the task when finetuning. It is redundant information though and if I had scaled this dataset larger I would have removed the 'input' field (as is valid with alpaca) and moved it to the 'instruction' field, replacing the boilerplate prompt."
] | [
40,
85,
41,
147,
205
] | [
"passage: TAGS\n#task_categories-text2text-generation #language-English #license-mit #art #chemistry #food #recipes #region-us \n## Summary\n\nA very small dataset of input recipes and output recipe gantt charts in TSV format where each column represents a method step and each row represents a single ingredient. Cells of the output TSV are populated with X if that ingredient is used in that step.\n\nIt was used to fine-tune pocasrocas/recipe-gantt-v0.1.## Format\n\nIt follows the alpaca instruction/input/response format, shared here in .jsonl format for easy use with libraries such as axolotl.## Development process\n\n1. Used the openrecipes dataset to get a few hundred recipe URLs\n1. Used recipe-scrapers library to extract the ingredients and method steps when given a recipe URL (code).\n1. A custom GPT Assistant was written to generate the desired gantt charts as TSV files (albeit slowly and expensively) from simplified Ingredients, Method formatted recipes (code). A publicly accessible GPT version of the same assistant is here.\n1. Did a small amount of manual tweaking of the outputs to improve data quality before I lost my mind and moved on (code).\n\nFull details of dataset creation can be found here."
] |
1bcb1f01c49edeccea1c47b8cff74bb9763a40d0 |
# Dataset Card for Evaluation run of sumo43/Yi-34b-x2
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [sumo43/Yi-34b-x2](https://huggingface.co/sumo43/Yi-34b-x2) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_sumo43__Yi-34b-x2",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2024-01-15T15:14:57.925776](https://huggingface.co/datasets/open-llm-leaderboard/details_sumo43__Yi-34b-x2/blob/main/results_2024-01-15T15-14-57.925776.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.7619039072175088,
"acc_stderr": 0.028287211728702223,
"acc_norm": 0.7672823242487893,
"acc_norm_stderr": 0.028809880216771097,
"mc1": 0.5740514075887393,
"mc1_stderr": 0.01731047190407654,
"mc2": 0.7210377690522727,
"mc2_stderr": 0.014187472355015407
},
"harness|arc:challenge|25": {
"acc": 0.6979522184300341,
"acc_stderr": 0.013417519144716417,
"acc_norm": 0.7286689419795221,
"acc_norm_stderr": 0.012993807727545796
},
"harness|hellaswag|10": {
"acc": 0.665803624775941,
"acc_stderr": 0.004707447244200623,
"acc_norm": 0.8570005974905397,
"acc_norm_stderr": 0.0034935679140932928
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.46,
"acc_stderr": 0.05009082659620333,
"acc_norm": 0.46,
"acc_norm_stderr": 0.05009082659620333
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.7481481481481481,
"acc_stderr": 0.03749850709174021,
"acc_norm": 0.7481481481481481,
"acc_norm_stderr": 0.03749850709174021
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.868421052631579,
"acc_stderr": 0.027508689533549912,
"acc_norm": 0.868421052631579,
"acc_norm_stderr": 0.027508689533549912
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.77,
"acc_stderr": 0.04229525846816505,
"acc_norm": 0.77,
"acc_norm_stderr": 0.04229525846816505
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.8075471698113208,
"acc_stderr": 0.024262979839372274,
"acc_norm": 0.8075471698113208,
"acc_norm_stderr": 0.024262979839372274
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.8888888888888888,
"acc_stderr": 0.026280550932848052,
"acc_norm": 0.8888888888888888,
"acc_norm_stderr": 0.026280550932848052
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.51,
"acc_stderr": 0.05024183937956912,
"acc_norm": 0.51,
"acc_norm_stderr": 0.05024183937956912
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.64,
"acc_stderr": 0.048241815132442176,
"acc_norm": 0.64,
"acc_norm_stderr": 0.048241815132442176
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.43,
"acc_stderr": 0.049756985195624284,
"acc_norm": 0.43,
"acc_norm_stderr": 0.049756985195624284
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.7572254335260116,
"acc_stderr": 0.0326926380614177,
"acc_norm": 0.7572254335260116,
"acc_norm_stderr": 0.0326926380614177
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.5784313725490197,
"acc_stderr": 0.04913595201274503,
"acc_norm": 0.5784313725490197,
"acc_norm_stderr": 0.04913595201274503
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.81,
"acc_stderr": 0.039427724440366234,
"acc_norm": 0.81,
"acc_norm_stderr": 0.039427724440366234
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.7702127659574468,
"acc_stderr": 0.027501752944412417,
"acc_norm": 0.7702127659574468,
"acc_norm_stderr": 0.027501752944412417
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.5964912280701754,
"acc_stderr": 0.04615186962583707,
"acc_norm": 0.5964912280701754,
"acc_norm_stderr": 0.04615186962583707
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.7241379310344828,
"acc_stderr": 0.037245636197746304,
"acc_norm": 0.7241379310344828,
"acc_norm_stderr": 0.037245636197746304
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.7116402116402116,
"acc_stderr": 0.023330654054535903,
"acc_norm": 0.7116402116402116,
"acc_norm_stderr": 0.023330654054535903
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.6111111111111112,
"acc_stderr": 0.04360314860077459,
"acc_norm": 0.6111111111111112,
"acc_norm_stderr": 0.04360314860077459
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.58,
"acc_stderr": 0.049604496374885836,
"acc_norm": 0.58,
"acc_norm_stderr": 0.049604496374885836
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.9096774193548387,
"acc_stderr": 0.016306570644488313,
"acc_norm": 0.9096774193548387,
"acc_norm_stderr": 0.016306570644488313
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.645320197044335,
"acc_stderr": 0.03366124489051449,
"acc_norm": 0.645320197044335,
"acc_norm_stderr": 0.03366124489051449
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.82,
"acc_stderr": 0.03861229196653694,
"acc_norm": 0.82,
"acc_norm_stderr": 0.03861229196653694
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.8666666666666667,
"acc_stderr": 0.026544435312706456,
"acc_norm": 0.8666666666666667,
"acc_norm_stderr": 0.026544435312706456
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.9191919191919192,
"acc_stderr": 0.019417681889724536,
"acc_norm": 0.9191919191919192,
"acc_norm_stderr": 0.019417681889724536
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.9689119170984456,
"acc_stderr": 0.012525310625527033,
"acc_norm": 0.9689119170984456,
"acc_norm_stderr": 0.012525310625527033
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.8153846153846154,
"acc_stderr": 0.01967163241310029,
"acc_norm": 0.8153846153846154,
"acc_norm_stderr": 0.01967163241310029
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.45185185185185184,
"acc_stderr": 0.030343862998512623,
"acc_norm": 0.45185185185185184,
"acc_norm_stderr": 0.030343862998512623
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.8235294117647058,
"acc_stderr": 0.02476290267805791,
"acc_norm": 0.8235294117647058,
"acc_norm_stderr": 0.02476290267805791
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.5099337748344371,
"acc_stderr": 0.04081677107248437,
"acc_norm": 0.5099337748344371,
"acc_norm_stderr": 0.04081677107248437
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.9137614678899083,
"acc_stderr": 0.012035597300116245,
"acc_norm": 0.9137614678899083,
"acc_norm_stderr": 0.012035597300116245
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.6666666666666666,
"acc_stderr": 0.0321495214780275,
"acc_norm": 0.6666666666666666,
"acc_norm_stderr": 0.0321495214780275
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.9166666666666666,
"acc_stderr": 0.019398452135813905,
"acc_norm": 0.9166666666666666,
"acc_norm_stderr": 0.019398452135813905
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.9071729957805907,
"acc_stderr": 0.01888975055095671,
"acc_norm": 0.9071729957805907,
"acc_norm_stderr": 0.01888975055095671
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.7982062780269058,
"acc_stderr": 0.02693611191280226,
"acc_norm": 0.7982062780269058,
"acc_norm_stderr": 0.02693611191280226
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.8702290076335878,
"acc_stderr": 0.029473649496907065,
"acc_norm": 0.8702290076335878,
"acc_norm_stderr": 0.029473649496907065
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.8925619834710744,
"acc_stderr": 0.028268812192540637,
"acc_norm": 0.8925619834710744,
"acc_norm_stderr": 0.028268812192540637
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.8981481481481481,
"acc_stderr": 0.02923927267563275,
"acc_norm": 0.8981481481481481,
"acc_norm_stderr": 0.02923927267563275
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.8650306748466258,
"acc_stderr": 0.026845765054553838,
"acc_norm": 0.8650306748466258,
"acc_norm_stderr": 0.026845765054553838
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.5178571428571429,
"acc_stderr": 0.047427623612430116,
"acc_norm": 0.5178571428571429,
"acc_norm_stderr": 0.047427623612430116
},
"harness|hendrycksTest-management|5": {
"acc": 0.8737864077669902,
"acc_stderr": 0.03288180278808628,
"acc_norm": 0.8737864077669902,
"acc_norm_stderr": 0.03288180278808628
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.9444444444444444,
"acc_stderr": 0.015006312806446912,
"acc_norm": 0.9444444444444444,
"acc_norm_stderr": 0.015006312806446912
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.9,
"acc_stderr": 0.03015113445777634,
"acc_norm": 0.9,
"acc_norm_stderr": 0.03015113445777634
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.9042145593869731,
"acc_stderr": 0.01052403107905584,
"acc_norm": 0.9042145593869731,
"acc_norm_stderr": 0.01052403107905584
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.8121387283236994,
"acc_stderr": 0.021029269752423203,
"acc_norm": 0.8121387283236994,
"acc_norm_stderr": 0.021029269752423203
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.7966480446927374,
"acc_stderr": 0.013461351487507524,
"acc_norm": 0.7966480446927374,
"acc_norm_stderr": 0.013461351487507524
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.8562091503267973,
"acc_stderr": 0.020091188936043718,
"acc_norm": 0.8562091503267973,
"acc_norm_stderr": 0.020091188936043718
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.8167202572347267,
"acc_stderr": 0.02197419884826582,
"acc_norm": 0.8167202572347267,
"acc_norm_stderr": 0.02197419884826582
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.8672839506172839,
"acc_stderr": 0.018877353839571842,
"acc_norm": 0.8672839506172839,
"acc_norm_stderr": 0.018877353839571842
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.6276595744680851,
"acc_stderr": 0.02883892147125145,
"acc_norm": 0.6276595744680851,
"acc_norm_stderr": 0.02883892147125145
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.5808344198174706,
"acc_stderr": 0.012602244505788228,
"acc_norm": 0.5808344198174706,
"acc_norm_stderr": 0.012602244505788228
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.8272058823529411,
"acc_stderr": 0.022966067585581774,
"acc_norm": 0.8272058823529411,
"acc_norm_stderr": 0.022966067585581774
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.815359477124183,
"acc_stderr": 0.01569702924075778,
"acc_norm": 0.815359477124183,
"acc_norm_stderr": 0.01569702924075778
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.7363636363636363,
"acc_stderr": 0.04220224692971987,
"acc_norm": 0.7363636363636363,
"acc_norm_stderr": 0.04220224692971987
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.8285714285714286,
"acc_stderr": 0.02412746346265015,
"acc_norm": 0.8285714285714286,
"acc_norm_stderr": 0.02412746346265015
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.900497512437811,
"acc_stderr": 0.021166216304659393,
"acc_norm": 0.900497512437811,
"acc_norm_stderr": 0.021166216304659393
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.89,
"acc_stderr": 0.03144660377352203,
"acc_norm": 0.89,
"acc_norm_stderr": 0.03144660377352203
},
"harness|hendrycksTest-virology|5": {
"acc": 0.572289156626506,
"acc_stderr": 0.038515976837185335,
"acc_norm": 0.572289156626506,
"acc_norm_stderr": 0.038515976837185335
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8947368421052632,
"acc_stderr": 0.023537557657892567,
"acc_norm": 0.8947368421052632,
"acc_norm_stderr": 0.023537557657892567
},
"harness|truthfulqa:mc|0": {
"mc1": 0.5740514075887393,
"mc1_stderr": 0.01731047190407654,
"mc2": 0.7210377690522727,
"mc2_stderr": 0.014187472355015407
},
"harness|winogrande|5": {
"acc": 0.8279400157853196,
"acc_stderr": 0.010607731615247001
},
"harness|gsm8k|5": {
"acc": 0.6004548900682335,
"acc_stderr": 0.013491660298815988
}
}
```
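For convenience, a small sketch for pulling headline numbers out of a results dict shaped like the snippet above (task name mapped to its metrics; loading the dict itself is left out):

```python
def summarize(results: dict) -> None:
    # Average accuracy over the MMLU (hendrycksTest) subtasks.
    mmlu = [k for k in results if k.startswith("harness|hendrycksTest-")]
    mmlu_acc = sum(results[k]["acc"] for k in mmlu) / len(mmlu)
    print(f"MMLU acc (avg over {len(mmlu)} subtasks): {mmlu_acc:.4f}")
    print("ARC acc_norm:", results["harness|arc:challenge|25"]["acc_norm"])
    print("HellaSwag acc_norm:", results["harness|hellaswag|10"]["acc_norm"])
    print("GSM8K acc:", results["harness|gsm8k|5"]["acc"])
```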
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | open-llm-leaderboard/details_sumo43__Yi-34b-x2 | [
"region:us"
] | 2024-01-15T15:17:11+00:00 | {"pretty_name": "Evaluation run of sumo43/Yi-34b-x2", "dataset_summary": "Dataset automatically created during the evaluation run of model [sumo43/Yi-34b-x2](https://huggingface.co/sumo43/Yi-34b-x2) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_sumo43__Yi-34b-x2\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2024-01-15T15:14:57.925776](https://huggingface.co/datasets/open-llm-leaderboard/details_sumo43__Yi-34b-x2/blob/main/results_2024-01-15T15-14-57.925776.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.7619039072175088,\n \"acc_stderr\": 0.028287211728702223,\n \"acc_norm\": 0.7672823242487893,\n \"acc_norm_stderr\": 0.028809880216771097,\n \"mc1\": 0.5740514075887393,\n \"mc1_stderr\": 0.01731047190407654,\n \"mc2\": 0.7210377690522727,\n \"mc2_stderr\": 0.014187472355015407\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.6979522184300341,\n \"acc_stderr\": 0.013417519144716417,\n \"acc_norm\": 0.7286689419795221,\n \"acc_norm_stderr\": 0.012993807727545796\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.665803624775941,\n \"acc_stderr\": 0.004707447244200623,\n \"acc_norm\": 0.8570005974905397,\n \"acc_norm_stderr\": 0.0034935679140932928\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.46,\n \"acc_stderr\": 0.05009082659620333,\n \"acc_norm\": 0.46,\n \"acc_norm_stderr\": 0.05009082659620333\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.7481481481481481,\n \"acc_stderr\": 0.03749850709174021,\n \"acc_norm\": 0.7481481481481481,\n \"acc_norm_stderr\": 0.03749850709174021\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.868421052631579,\n \"acc_stderr\": 0.027508689533549912,\n \"acc_norm\": 0.868421052631579,\n \"acc_norm_stderr\": 0.027508689533549912\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.77,\n \"acc_stderr\": 0.04229525846816505,\n \"acc_norm\": 0.77,\n \"acc_norm_stderr\": 0.04229525846816505\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.8075471698113208,\n \"acc_stderr\": 0.024262979839372274,\n \"acc_norm\": 0.8075471698113208,\n \"acc_norm_stderr\": 0.024262979839372274\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.8888888888888888,\n \"acc_stderr\": 0.026280550932848052,\n \"acc_norm\": 0.8888888888888888,\n \"acc_norm_stderr\": 0.026280550932848052\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.51,\n \"acc_stderr\": 0.05024183937956912,\n \"acc_norm\": 0.51,\n 
\"acc_norm_stderr\": 0.05024183937956912\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.64,\n \"acc_stderr\": 0.048241815132442176,\n \"acc_norm\": 0.64,\n \"acc_norm_stderr\": 0.048241815132442176\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.43,\n \"acc_stderr\": 0.049756985195624284,\n \"acc_norm\": 0.43,\n \"acc_norm_stderr\": 0.049756985195624284\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.7572254335260116,\n \"acc_stderr\": 0.0326926380614177,\n \"acc_norm\": 0.7572254335260116,\n \"acc_norm_stderr\": 0.0326926380614177\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.5784313725490197,\n \"acc_stderr\": 0.04913595201274503,\n \"acc_norm\": 0.5784313725490197,\n \"acc_norm_stderr\": 0.04913595201274503\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.81,\n \"acc_stderr\": 0.039427724440366234,\n \"acc_norm\": 0.81,\n \"acc_norm_stderr\": 0.039427724440366234\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.7702127659574468,\n \"acc_stderr\": 0.027501752944412417,\n \"acc_norm\": 0.7702127659574468,\n \"acc_norm_stderr\": 0.027501752944412417\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.5964912280701754,\n \"acc_stderr\": 0.04615186962583707,\n \"acc_norm\": 0.5964912280701754,\n \"acc_norm_stderr\": 0.04615186962583707\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.7241379310344828,\n \"acc_stderr\": 0.037245636197746304,\n \"acc_norm\": 0.7241379310344828,\n \"acc_norm_stderr\": 0.037245636197746304\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.7116402116402116,\n \"acc_stderr\": 0.023330654054535903,\n \"acc_norm\": 0.7116402116402116,\n \"acc_norm_stderr\": 0.023330654054535903\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.6111111111111112,\n \"acc_stderr\": 0.04360314860077459,\n \"acc_norm\": 0.6111111111111112,\n \"acc_norm_stderr\": 0.04360314860077459\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.58,\n \"acc_stderr\": 0.049604496374885836,\n \"acc_norm\": 0.58,\n \"acc_norm_stderr\": 0.049604496374885836\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.9096774193548387,\n \"acc_stderr\": 0.016306570644488313,\n \"acc_norm\": 0.9096774193548387,\n \"acc_norm_stderr\": 0.016306570644488313\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.645320197044335,\n \"acc_stderr\": 0.03366124489051449,\n \"acc_norm\": 0.645320197044335,\n \"acc_norm_stderr\": 0.03366124489051449\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.82,\n \"acc_stderr\": 0.03861229196653694,\n \"acc_norm\": 0.82,\n \"acc_norm_stderr\": 0.03861229196653694\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.8666666666666667,\n \"acc_stderr\": 0.026544435312706456,\n \"acc_norm\": 0.8666666666666667,\n \"acc_norm_stderr\": 0.026544435312706456\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.9191919191919192,\n \"acc_stderr\": 0.019417681889724536,\n \"acc_norm\": 0.9191919191919192,\n \"acc_norm_stderr\": 0.019417681889724536\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.9689119170984456,\n \"acc_stderr\": 0.012525310625527033,\n \"acc_norm\": 0.9689119170984456,\n \"acc_norm_stderr\": 0.012525310625527033\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.8153846153846154,\n 
\"acc_stderr\": 0.01967163241310029,\n \"acc_norm\": 0.8153846153846154,\n \"acc_norm_stderr\": 0.01967163241310029\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.45185185185185184,\n \"acc_stderr\": 0.030343862998512623,\n \"acc_norm\": 0.45185185185185184,\n \"acc_norm_stderr\": 0.030343862998512623\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.8235294117647058,\n \"acc_stderr\": 0.02476290267805791,\n \"acc_norm\": 0.8235294117647058,\n \"acc_norm_stderr\": 0.02476290267805791\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.5099337748344371,\n \"acc_stderr\": 0.04081677107248437,\n \"acc_norm\": 0.5099337748344371,\n \"acc_norm_stderr\": 0.04081677107248437\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.9137614678899083,\n \"acc_stderr\": 0.012035597300116245,\n \"acc_norm\": 0.9137614678899083,\n \"acc_norm_stderr\": 0.012035597300116245\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.6666666666666666,\n \"acc_stderr\": 0.0321495214780275,\n \"acc_norm\": 0.6666666666666666,\n \"acc_norm_stderr\": 0.0321495214780275\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.9166666666666666,\n \"acc_stderr\": 0.019398452135813905,\n \"acc_norm\": 0.9166666666666666,\n \"acc_norm_stderr\": 0.019398452135813905\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.9071729957805907,\n \"acc_stderr\": 0.01888975055095671,\n \"acc_norm\": 0.9071729957805907,\n \"acc_norm_stderr\": 0.01888975055095671\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.7982062780269058,\n \"acc_stderr\": 0.02693611191280226,\n \"acc_norm\": 0.7982062780269058,\n \"acc_norm_stderr\": 0.02693611191280226\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.8702290076335878,\n \"acc_stderr\": 0.029473649496907065,\n \"acc_norm\": 0.8702290076335878,\n \"acc_norm_stderr\": 0.029473649496907065\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.8925619834710744,\n \"acc_stderr\": 0.028268812192540637,\n \"acc_norm\": 0.8925619834710744,\n \"acc_norm_stderr\": 0.028268812192540637\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.8981481481481481,\n \"acc_stderr\": 0.02923927267563275,\n \"acc_norm\": 0.8981481481481481,\n \"acc_norm_stderr\": 0.02923927267563275\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.8650306748466258,\n \"acc_stderr\": 0.026845765054553838,\n \"acc_norm\": 0.8650306748466258,\n \"acc_norm_stderr\": 0.026845765054553838\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.5178571428571429,\n \"acc_stderr\": 0.047427623612430116,\n \"acc_norm\": 0.5178571428571429,\n \"acc_norm_stderr\": 0.047427623612430116\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.8737864077669902,\n \"acc_stderr\": 0.03288180278808628,\n \"acc_norm\": 0.8737864077669902,\n \"acc_norm_stderr\": 0.03288180278808628\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.9444444444444444,\n \"acc_stderr\": 0.015006312806446912,\n \"acc_norm\": 0.9444444444444444,\n \"acc_norm_stderr\": 0.015006312806446912\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.9,\n \"acc_stderr\": 0.03015113445777634,\n \"acc_norm\": 0.9,\n \"acc_norm_stderr\": 0.03015113445777634\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.9042145593869731,\n \"acc_stderr\": 0.01052403107905584,\n \"acc_norm\": 0.9042145593869731,\n 
\"acc_norm_stderr\": 0.01052403107905584\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.8121387283236994,\n \"acc_stderr\": 0.021029269752423203,\n \"acc_norm\": 0.8121387283236994,\n \"acc_norm_stderr\": 0.021029269752423203\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.7966480446927374,\n \"acc_stderr\": 0.013461351487507524,\n \"acc_norm\": 0.7966480446927374,\n \"acc_norm_stderr\": 0.013461351487507524\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.8562091503267973,\n \"acc_stderr\": 0.020091188936043718,\n \"acc_norm\": 0.8562091503267973,\n \"acc_norm_stderr\": 0.020091188936043718\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.8167202572347267,\n \"acc_stderr\": 0.02197419884826582,\n \"acc_norm\": 0.8167202572347267,\n \"acc_norm_stderr\": 0.02197419884826582\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.8672839506172839,\n \"acc_stderr\": 0.018877353839571842,\n \"acc_norm\": 0.8672839506172839,\n \"acc_norm_stderr\": 0.018877353839571842\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.6276595744680851,\n \"acc_stderr\": 0.02883892147125145,\n \"acc_norm\": 0.6276595744680851,\n \"acc_norm_stderr\": 0.02883892147125145\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.5808344198174706,\n \"acc_stderr\": 0.012602244505788228,\n \"acc_norm\": 0.5808344198174706,\n \"acc_norm_stderr\": 0.012602244505788228\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.8272058823529411,\n \"acc_stderr\": 0.022966067585581774,\n \"acc_norm\": 0.8272058823529411,\n \"acc_norm_stderr\": 0.022966067585581774\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.815359477124183,\n \"acc_stderr\": 0.01569702924075778,\n \"acc_norm\": 0.815359477124183,\n \"acc_norm_stderr\": 0.01569702924075778\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.7363636363636363,\n \"acc_stderr\": 0.04220224692971987,\n \"acc_norm\": 0.7363636363636363,\n \"acc_norm_stderr\": 0.04220224692971987\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.8285714285714286,\n \"acc_stderr\": 0.02412746346265015,\n \"acc_norm\": 0.8285714285714286,\n \"acc_norm_stderr\": 0.02412746346265015\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.900497512437811,\n \"acc_stderr\": 0.021166216304659393,\n \"acc_norm\": 0.900497512437811,\n \"acc_norm_stderr\": 0.021166216304659393\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.89,\n \"acc_stderr\": 0.03144660377352203,\n \"acc_norm\": 0.89,\n \"acc_norm_stderr\": 0.03144660377352203\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.572289156626506,\n \"acc_stderr\": 0.038515976837185335,\n \"acc_norm\": 0.572289156626506,\n \"acc_norm_stderr\": 0.038515976837185335\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.8947368421052632,\n \"acc_stderr\": 0.023537557657892567,\n \"acc_norm\": 0.8947368421052632,\n \"acc_norm_stderr\": 0.023537557657892567\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.5740514075887393,\n \"mc1_stderr\": 0.01731047190407654,\n \"mc2\": 0.7210377690522727,\n \"mc2_stderr\": 0.014187472355015407\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.8279400157853196,\n \"acc_stderr\": 0.010607731615247001\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.6004548900682335,\n \"acc_stderr\": 0.013491660298815988\n }\n}\n```", "repo_url": "https://huggingface.co/sumo43/Yi-34b-x2", "leaderboard_url": 
"https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2024_01_15T15_14_57.925776", "path": ["**/details_harness|arc:challenge|25_2024-01-15T15-14-57.925776.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2024-01-15T15-14-57.925776.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2024_01_15T15_14_57.925776", "path": ["**/details_harness|gsm8k|5_2024-01-15T15-14-57.925776.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2024-01-15T15-14-57.925776.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2024_01_15T15_14_57.925776", "path": ["**/details_harness|hellaswag|10_2024-01-15T15-14-57.925776.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2024-01-15T15-14-57.925776.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2024_01_15T15_14_57.925776", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-15T15-14-57.925776.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-15T15-14-57.925776.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-15T15-14-57.925776.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-15T15-14-57.925776.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-15T15-14-57.925776.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-15T15-14-57.925776.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-15T15-14-57.925776.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-15T15-14-57.925776.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-01-15T15-14-57.925776.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-15T15-14-57.925776.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-15T15-14-57.925776.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-15T15-14-57.925776.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-15T15-14-57.925776.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-15T15-14-57.925776.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-15T15-14-57.925776.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-15T15-14-57.925776.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-15T15-14-57.925776.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-15T15-14-57.925776.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-15T15-14-57.925776.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-15T15-14-57.925776.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-15T15-14-57.925776.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-15T15-14-57.925776.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-15T15-14-57.925776.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-15T15-14-57.925776.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-15T15-14-57.925776.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-15T15-14-57.925776.parquet", 
"**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-15T15-14-57.925776.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-15T15-14-57.925776.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-15T15-14-57.925776.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-15T15-14-57.925776.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-15T15-14-57.925776.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-15T15-14-57.925776.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-15T15-14-57.925776.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-15T15-14-57.925776.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-15T15-14-57.925776.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-15T15-14-57.925776.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-15T15-14-57.925776.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-15T15-14-57.925776.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-15T15-14-57.925776.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-15T15-14-57.925776.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-15T15-14-57.925776.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-15T15-14-57.925776.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-15T15-14-57.925776.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-15T15-14-57.925776.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-15T15-14-57.925776.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-15T15-14-57.925776.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-15T15-14-57.925776.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-15T15-14-57.925776.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-15T15-14-57.925776.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-15T15-14-57.925776.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-15T15-14-57.925776.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-15T15-14-57.925776.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-15T15-14-57.925776.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-15T15-14-57.925776.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-15T15-14-57.925776.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-15T15-14-57.925776.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-15T15-14-57.925776.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-15T15-14-57.925776.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-15T15-14-57.925776.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-15T15-14-57.925776.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-15T15-14-57.925776.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-15T15-14-57.925776.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-15T15-14-57.925776.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-15T15-14-57.925776.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-15T15-14-57.925776.parquet", 
"**/details_harness|hendrycksTest-college_mathematics|5_2024-01-15T15-14-57.925776.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-15T15-14-57.925776.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-15T15-14-57.925776.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-15T15-14-57.925776.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-15T15-14-57.925776.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-15T15-14-57.925776.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-15T15-14-57.925776.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-15T15-14-57.925776.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-15T15-14-57.925776.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-15T15-14-57.925776.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-15T15-14-57.925776.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-15T15-14-57.925776.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-15T15-14-57.925776.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-15T15-14-57.925776.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-15T15-14-57.925776.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-15T15-14-57.925776.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-15T15-14-57.925776.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-15T15-14-57.925776.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-15T15-14-57.925776.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-15T15-14-57.925776.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-15T15-14-57.925776.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-15T15-14-57.925776.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-15T15-14-57.925776.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-15T15-14-57.925776.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-15T15-14-57.925776.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-15T15-14-57.925776.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-15T15-14-57.925776.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-15T15-14-57.925776.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-15T15-14-57.925776.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-15T15-14-57.925776.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-15T15-14-57.925776.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-15T15-14-57.925776.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-15T15-14-57.925776.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-15T15-14-57.925776.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-15T15-14-57.925776.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-15T15-14-57.925776.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-15T15-14-57.925776.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-15T15-14-57.925776.parquet", 
"**/details_harness|hendrycksTest-prehistory|5_2024-01-15T15-14-57.925776.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-15T15-14-57.925776.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-15T15-14-57.925776.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-15T15-14-57.925776.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-15T15-14-57.925776.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-15T15-14-57.925776.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-15T15-14-57.925776.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-15T15-14-57.925776.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-15T15-14-57.925776.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-15T15-14-57.925776.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-15T15-14-57.925776.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2024_01_15T15_14_57.925776", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-15T15-14-57.925776.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-15T15-14-57.925776.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2024_01_15T15_14_57.925776", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-15T15-14-57.925776.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-15T15-14-57.925776.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2024_01_15T15_14_57.925776", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-15T15-14-57.925776.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-15T15-14-57.925776.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2024_01_15T15_14_57.925776", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-15T15-14-57.925776.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-15T15-14-57.925776.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2024_01_15T15_14_57.925776", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-15T15-14-57.925776.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-15T15-14-57.925776.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2024_01_15T15_14_57.925776", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-15T15-14-57.925776.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-15T15-14-57.925776.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2024_01_15T15_14_57.925776", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-15T15-14-57.925776.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-15T15-14-57.925776.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2024_01_15T15_14_57.925776", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-15T15-14-57.925776.parquet"]}, 
{"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-15T15-14-57.925776.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2024_01_15T15_14_57.925776", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-15T15-14-57.925776.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-15T15-14-57.925776.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2024_01_15T15_14_57.925776", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-15T15-14-57.925776.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-15T15-14-57.925776.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2024_01_15T15_14_57.925776", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-15T15-14-57.925776.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-15T15-14-57.925776.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2024_01_15T15_14_57.925776", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-15T15-14-57.925776.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-15T15-14-57.925776.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2024_01_15T15_14_57.925776", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-15T15-14-57.925776.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-15T15-14-57.925776.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2024_01_15T15_14_57.925776", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-15T15-14-57.925776.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-15T15-14-57.925776.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2024_01_15T15_14_57.925776", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-15T15-14-57.925776.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-15T15-14-57.925776.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2024_01_15T15_14_57.925776", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-15T15-14-57.925776.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-15T15-14-57.925776.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2024_01_15T15_14_57.925776", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-15T15-14-57.925776.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-15T15-14-57.925776.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2024_01_15T15_14_57.925776", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-15T15-14-57.925776.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-15T15-14-57.925776.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2024_01_15T15_14_57.925776", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-15T15-14-57.925776.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-15T15-14-57.925776.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2024_01_15T15_14_57.925776", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-15T15-14-57.925776.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-15T15-14-57.925776.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2024_01_15T15_14_57.925776", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-15T15-14-57.925776.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-15T15-14-57.925776.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2024_01_15T15_14_57.925776", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-15T15-14-57.925776.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-15T15-14-57.925776.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2024_01_15T15_14_57.925776", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-15T15-14-57.925776.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-15T15-14-57.925776.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2024_01_15T15_14_57.925776", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-15T15-14-57.925776.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-15T15-14-57.925776.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2024_01_15T15_14_57.925776", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-15T15-14-57.925776.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-15T15-14-57.925776.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2024_01_15T15_14_57.925776", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-15T15-14-57.925776.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-15T15-14-57.925776.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2024_01_15T15_14_57.925776", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-15T15-14-57.925776.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-15T15-14-57.925776.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2024_01_15T15_14_57.925776", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-15T15-14-57.925776.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-15T15-14-57.925776.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2024_01_15T15_14_57.925776", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-15T15-14-57.925776.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-15T15-14-57.925776.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2024_01_15T15_14_57.925776", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-15T15-14-57.925776.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-15T15-14-57.925776.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2024_01_15T15_14_57.925776", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-15T15-14-57.925776.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-15T15-14-57.925776.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2024_01_15T15_14_57.925776", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-15T15-14-57.925776.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-15T15-14-57.925776.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2024_01_15T15_14_57.925776", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-15T15-14-57.925776.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-15T15-14-57.925776.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2024_01_15T15_14_57.925776", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-15T15-14-57.925776.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-15T15-14-57.925776.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2024_01_15T15_14_57.925776", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-15T15-14-57.925776.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-15T15-14-57.925776.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2024_01_15T15_14_57.925776", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-15T15-14-57.925776.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-15T15-14-57.925776.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2024_01_15T15_14_57.925776", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-15T15-14-57.925776.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-15T15-14-57.925776.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2024_01_15T15_14_57.925776", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-15T15-14-57.925776.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-15T15-14-57.925776.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2024_01_15T15_14_57.925776", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-15T15-14-57.925776.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-15T15-14-57.925776.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2024_01_15T15_14_57.925776", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-15T15-14-57.925776.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-15T15-14-57.925776.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2024_01_15T15_14_57.925776", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-15T15-14-57.925776.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-15T15-14-57.925776.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2024_01_15T15_14_57.925776", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-15T15-14-57.925776.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-15T15-14-57.925776.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2024_01_15T15_14_57.925776", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-15T15-14-57.925776.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-15T15-14-57.925776.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2024_01_15T15_14_57.925776", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-15T15-14-57.925776.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-15T15-14-57.925776.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2024_01_15T15_14_57.925776", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-15T15-14-57.925776.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-15T15-14-57.925776.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2024_01_15T15_14_57.925776", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-15T15-14-57.925776.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-15T15-14-57.925776.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2024_01_15T15_14_57.925776", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-15T15-14-57.925776.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-15T15-14-57.925776.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2024_01_15T15_14_57.925776", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-15T15-14-57.925776.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-15T15-14-57.925776.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2024_01_15T15_14_57.925776", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-01-15T15-14-57.925776.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-professional_law|5_2024-01-15T15-14-57.925776.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2024_01_15T15_14_57.925776", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-15T15-14-57.925776.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-15T15-14-57.925776.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2024_01_15T15_14_57.925776", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-15T15-14-57.925776.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-15T15-14-57.925776.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2024_01_15T15_14_57.925776", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-15T15-14-57.925776.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-15T15-14-57.925776.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2024_01_15T15_14_57.925776", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-15T15-14-57.925776.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-15T15-14-57.925776.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2024_01_15T15_14_57.925776", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-15T15-14-57.925776.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-15T15-14-57.925776.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2024_01_15T15_14_57.925776", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-15T15-14-57.925776.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-15T15-14-57.925776.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2024_01_15T15_14_57.925776", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-15T15-14-57.925776.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-15T15-14-57.925776.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2024_01_15T15_14_57.925776", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-15T15-14-57.925776.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-15T15-14-57.925776.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2024_01_15T15_14_57.925776", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-15T15-14-57.925776.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-15T15-14-57.925776.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2024_01_15T15_14_57.925776", "path": ["**/details_harness|winogrande|5_2024-01-15T15-14-57.925776.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2024-01-15T15-14-57.925776.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2024_01_15T15_14_57.925776", "path": ["results_2024-01-15T15-14-57.925776.parquet"]}, {"split": "latest", "path": 
["results_2024-01-15T15-14-57.925776.parquet"]}]}]} | 2024-01-15T15:17:38+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of sumo43/Yi-34b-x2
Dataset automatically created during the evaluation run of model sumo43/Yi-34b-x2 on the Open LLM Leaderboard.
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
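The accompanying code block is missing from this copy of the card; below is a minimal sketch, assuming the details repository follows the leaderboard's `details_<org>__<model>` naming convention (so `open-llm-leaderboard/details_sumo43__Yi-34b-x2` here) and using `harness_winogrande_5`, one of the configurations listed in this card's metadata:

```python
from datasets import load_dataset

# Repo id assumed from the leaderboard's details_<org>__<model> convention;
# "harness_winogrande_5" is one of the configurations listed for this run.
data = load_dataset("open-llm-leaderboard/details_sumo43__Yi-34b-x2",
    "harness_winogrande_5",
    split="train")
```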
## Latest results
These are the latest results from run 2024-01-15T15:14:57.925776 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
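The aggregated metrics themselves live in the "results" configuration, whose "latest" split always points to the newest run; a short sketch, under the same repository-id assumption as above:

```python
from datasets import load_dataset

# "results" stores the aggregated metrics of the run; the "latest" split
# always tracks the most recent evaluation (repo id assumed as above).
results = load_dataset("open-llm-leaderboard/details_sumo43__Yi-34b-x2",
    "results",
    split="latest")
```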
## Dataset Details
### Dataset Description
- Curated by:
- Funded by [optional]:
- Shared by [optional]:
- Language(s) (NLP):
- License:
### Dataset Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Out-of-Scope Use
## Dataset Structure
## Dataset Creation
### Curation Rationale
### Source Data
#### Data Collection and Processing
#### Who are the source data producers?
### Annotations [optional]
#### Annotation process
#### Who are the annotators?
#### Personal and Sensitive Information
## Bias, Risks, and Limitations
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Dataset Card Authors [optional]
## Dataset Card Contact
| [
"# Dataset Card for Evaluation run of sumo43/Yi-34b-x2\n\n\n\nDataset automatically created during the evaluation run of model sumo43/Yi-34b-x2 on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2024-01-15T15:14:57.925776(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of sumo43/Yi-34b-x2\n\n\n\nDataset automatically created during the evaluation run of model sumo43/Yi-34b-x2 on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2024-01-15T15:14:57.925776(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] | [
6,
183,
68,
4,
40,
29,
3,
4,
9,
6,
5,
7,
4,
7,
10,
9,
5,
9,
8,
10,
46,
8,
7,
10,
5
] | [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of sumo43/Yi-34b-x2\n\n\n\nDataset automatically created during the evaluation run of model sumo43/Yi-34b-x2 on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2024-01-15T15:14:57.925776(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):## Dataset Details### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Out-of-Scope Use## Dataset Structure## Dataset Creation### Curation Rationale### Source Data#### Data Collection and Processing#### Who are the source data producers?### Annotations [optional]#### Annotation process#### Who are the annotators?#### Personal and Sensitive Information## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Dataset Card Authors [optional]## Dataset Card Contact"
] |
dfe57acff5b62c23732a7b7d3e3fb84ff501708b | # XMarket Category to Product Retrieval Dataset
An e-commerce category-to-product retrieval dataset in multiple languages.
The data comes from:
https://xmrec.github.io/ | jinaai/xmarket_ml | [
"region:eu"
] | 2024-01-15T15:32:57+00:00 | {} | 2024-02-16T14:45:22+00:00 | [] | [] | TAGS
#region-eu
| # XMarket Category to Product Retrieval Dataset
An e-commerce category-to-product retrieval dataset in multiple languages.
The data comes from:
URL | [
"# XMarket Category to Product Retrieval Dataset\n\nAn ecommerce category to product retrieval dataset in multiple languages.\nThe data comes from:\nURL"
] | [
"TAGS\n#region-eu \n",
"# XMarket Category to Product Retrieval Dataset\n\nAn ecommerce category to product retrieval dataset in multiple languages.\nThe data comes from:\nURL"
] | [
6,
33
] | [
"passage: TAGS\n#region-eu \n# XMarket Category to Product Retrieval Dataset\n\nAn ecommerce category to product retrieval dataset in multiple languages.\nThe data comes from:\nURL"
] |
4003abf5653c0f360293932080690046e196efa0 | # Dataset Card for "oasst2_top1_en"
* Top 1% conversations of https://huggingface.co/datasets/OpenAssistant/oasst2
* language-filtered: en
* generated using https://github.com/blancsw/deep_4_all/blob/main/datasets/oasst/convert.py | g-ronimo/oasst2_top1_en | [
"license:apache-2.0",
"region:us"
] | 2024-01-15T15:35:01+00:00 | {"license": "apache-2.0", "dataset_info": {"features": [{"name": "conversation", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 10491824, "num_examples": 5419}], "download_size": 5658552, "dataset_size": 10491824}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2024-01-24T05:42:02+00:00 | [] | [] | TAGS
#license-apache-2.0 #region-us
| # Dataset Card for "oasst2_top1_en"
* Top 1% conversations of URL
* language-filtered: en
* generated using URL | [
"# Dataset Card for \"oasst2_top1_en\"\n\n* Top 1% conversations of URL\n* language-filtered: en\n* generated using URL"
] | [
"TAGS\n#license-apache-2.0 #region-us \n",
"# Dataset Card for \"oasst2_top1_en\"\n\n* Top 1% conversations of URL\n* language-filtered: en\n* generated using URL"
] | [
14,
35
] | [
"passage: TAGS\n#license-apache-2.0 #region-us \n# Dataset Card for \"oasst2_top1_en\"\n\n* Top 1% conversations of URL\n* language-filtered: en\n* generated using URL"
] |
560d18e9b3e4eace177cf2ec212a6792ea636e2e | # Dataset Card for "nft_prediction_all_NFTs"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | hongerzh/nft_prediction_all_NFTs | [
"region:us"
] | 2024-01-15T16:16:28+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}, {"name": "label", "dtype": "float64"}, {"name": "sold_price", "dtype": "float64"}, {"name": "matching_speed", "dtype": "float64"}, {"name": "time", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 12223325706.04, "num_examples": 70256}, {"name": "validation", "num_bytes": 3664228516.045, "num_examples": 10035}, {"name": "test", "num_bytes": 3975494326.881, "num_examples": 20073}], "download_size": 16061854920, "dataset_size": 19863048548.966}} | 2024-01-15T18:26:56+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "nft_prediction_all_NFTs"
More Information needed | [
"# Dataset Card for \"nft_prediction_all_NFTs\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"nft_prediction_all_NFTs\"\n\nMore Information needed"
] | [
6,
22
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"nft_prediction_all_NFTs\"\n\nMore Information needed"
] |
c2c5c2776af79b9d0a831a44195573a5d7213d63 | ## MIRACL Dataset
This dataset is a version of the original [MIRACL dataset](https://huggingface.co/datasets/miracl/miracl),
reformatted into the format expected for MTEB reranking tasks and limited to Spanish only. | jinaai/miracl-es | [
"license:apache-2.0",
"region:eu"
] | 2024-01-15T16:19:19+00:00 | {"license": "apache-2.0"} | 2024-01-16T09:45:37+00:00 | [] | [] | TAGS
#license-apache-2.0 #region-eu
| ## MIRACL Dataset
This dataset is a version of the original MIRACL dataset,
reformatted into the format expected for MTEB reranking tasks and limited to Spanish only. | [
"## MIRACL Dataset\n\n\nThis dataset is a reformatted version of the original MIRACL dataset, \ninto the format expected for MTEB reranking tasks, limiting the language to Spanish only."
] | [
"TAGS\n#license-apache-2.0 #region-eu \n",
"## MIRACL Dataset\n\n\nThis dataset is a reformatted version of the original MIRACL dataset, \ninto the format expected for MTEB reranking tasks, limiting the language to Spanish only."
] | [
14,
45
] | [
"passage: TAGS\n#license-apache-2.0 #region-eu \n## MIRACL Dataset\n\n\nThis dataset is a reformatted version of the original MIRACL dataset, \ninto the format expected for MTEB reranking tasks, limiting the language to Spanish only."
] |
eb8ff375038236458efe76bd48a71ebf70b7b604 |
# Dataset Card for Dataset Name
<!-- Provide a quick summary of the dataset. -->
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | Abd-Dada/coffee | [
"region:us"
] | 2024-01-15T16:25:18+00:00 | {} | 2024-01-15T16:28:55+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Dataset Name
This dataset card aims to be a base template for new datasets. It has been generated using this raw template.
## Dataset Details
### Dataset Description
- Curated by:
- Funded by [optional]:
- Shared by [optional]:
- Language(s) (NLP):
- License:
### Dataset Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Out-of-Scope Use
## Dataset Structure
## Dataset Creation
### Curation Rationale
### Source Data
#### Data Collection and Processing
#### Who are the source data producers?
### Annotations [optional]
#### Annotation process
#### Who are the annotators?
#### Personal and Sensitive Information
## Bias, Risks, and Limitations
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Dataset Card Authors [optional]
## Dataset Card Contact
| [
"# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] | [
6,
34,
4,
40,
29,
3,
4,
9,
6,
5,
7,
4,
7,
10,
9,
5,
9,
8,
10,
46,
8,
7,
10,
5
] | [
"passage: TAGS\n#region-us \n# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.## Dataset Details### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Out-of-Scope Use## Dataset Structure## Dataset Creation### Curation Rationale### Source Data#### Data Collection and Processing#### Who are the source data producers?### Annotations [optional]#### Annotation process#### Who are the annotators?#### Personal and Sensitive Information## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Dataset Card Authors [optional]## Dataset Card Contact"
] |
f28f8ed347c7d5efdcfd81325bd18423a767799f |
# Dataset Card for Evaluation run of shadowml/Mixolar-4x7b
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [shadowml/Mixolar-4x7b](https://huggingface.co/shadowml/Mixolar-4x7b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_shadowml__Mixolar-4x7b",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2024-01-15T16:33:09.510428](https://huggingface.co/datasets/open-llm-leaderboard/details_shadowml__Mixolar-4x7b/blob/main/results_2024-01-15T16-33-09.510428.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.6665388431359645,
"acc_stderr": 0.0316185747707982,
"acc_norm": 0.6674821725362098,
"acc_norm_stderr": 0.03226168085200764,
"mc1": 0.5679314565483476,
"mc1_stderr": 0.017341202394988327,
"mc2": 0.7180637595757683,
"mc2_stderr": 0.01502487134248928
},
"harness|arc:challenge|25": {
"acc": 0.6843003412969283,
"acc_stderr": 0.013582571095815291,
"acc_norm": 0.7107508532423208,
"acc_norm_stderr": 0.013250012579393441
},
"harness|hellaswag|10": {
"acc": 0.7133041226847242,
"acc_stderr": 0.004512940497462742,
"acc_norm": 0.8843855805616411,
"acc_norm_stderr": 0.003191084792793155
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.42,
"acc_stderr": 0.049604496374885836,
"acc_norm": 0.42,
"acc_norm_stderr": 0.049604496374885836
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6148148148148148,
"acc_stderr": 0.04203921040156279,
"acc_norm": 0.6148148148148148,
"acc_norm_stderr": 0.04203921040156279
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.743421052631579,
"acc_stderr": 0.0355418036802569,
"acc_norm": 0.743421052631579,
"acc_norm_stderr": 0.0355418036802569
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.74,
"acc_stderr": 0.0440844002276808,
"acc_norm": 0.74,
"acc_norm_stderr": 0.0440844002276808
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.6792452830188679,
"acc_stderr": 0.02872750295788027,
"acc_norm": 0.6792452830188679,
"acc_norm_stderr": 0.02872750295788027
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7638888888888888,
"acc_stderr": 0.03551446610810826,
"acc_norm": 0.7638888888888888,
"acc_norm_stderr": 0.03551446610810826
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.46,
"acc_stderr": 0.05009082659620333,
"acc_norm": 0.46,
"acc_norm_stderr": 0.05009082659620333
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.52,
"acc_stderr": 0.05021167315686779,
"acc_norm": 0.52,
"acc_norm_stderr": 0.05021167315686779
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.32,
"acc_stderr": 0.046882617226215034,
"acc_norm": 0.32,
"acc_norm_stderr": 0.046882617226215034
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6705202312138728,
"acc_stderr": 0.03583901754736412,
"acc_norm": 0.6705202312138728,
"acc_norm_stderr": 0.03583901754736412
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.38235294117647056,
"acc_stderr": 0.04835503696107223,
"acc_norm": 0.38235294117647056,
"acc_norm_stderr": 0.04835503696107223
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.76,
"acc_stderr": 0.042923469599092816,
"acc_norm": 0.76,
"acc_norm_stderr": 0.042923469599092816
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.6340425531914894,
"acc_stderr": 0.031489558297455304,
"acc_norm": 0.6340425531914894,
"acc_norm_stderr": 0.031489558297455304
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.5,
"acc_stderr": 0.047036043419179864,
"acc_norm": 0.5,
"acc_norm_stderr": 0.047036043419179864
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.6344827586206897,
"acc_stderr": 0.040131241954243856,
"acc_norm": 0.6344827586206897,
"acc_norm_stderr": 0.040131241954243856
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.4947089947089947,
"acc_stderr": 0.02574986828855657,
"acc_norm": 0.4947089947089947,
"acc_norm_stderr": 0.02574986828855657
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.4444444444444444,
"acc_stderr": 0.044444444444444495,
"acc_norm": 0.4444444444444444,
"acc_norm_stderr": 0.044444444444444495
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.34,
"acc_stderr": 0.04760952285695235,
"acc_norm": 0.34,
"acc_norm_stderr": 0.04760952285695235
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.8193548387096774,
"acc_stderr": 0.021886178567172534,
"acc_norm": 0.8193548387096774,
"acc_norm_stderr": 0.021886178567172534
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.5024630541871922,
"acc_stderr": 0.03517945038691063,
"acc_norm": 0.5024630541871922,
"acc_norm_stderr": 0.03517945038691063
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.72,
"acc_stderr": 0.04512608598542128,
"acc_norm": 0.72,
"acc_norm_stderr": 0.04512608598542128
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.806060606060606,
"acc_stderr": 0.03087414513656209,
"acc_norm": 0.806060606060606,
"acc_norm_stderr": 0.03087414513656209
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.8686868686868687,
"acc_stderr": 0.024063156416822516,
"acc_norm": 0.8686868686868687,
"acc_norm_stderr": 0.024063156416822516
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.9015544041450777,
"acc_stderr": 0.02150024957603347,
"acc_norm": 0.9015544041450777,
"acc_norm_stderr": 0.02150024957603347
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6666666666666666,
"acc_stderr": 0.023901157979402538,
"acc_norm": 0.6666666666666666,
"acc_norm_stderr": 0.023901157979402538
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.37037037037037035,
"acc_stderr": 0.02944316932303154,
"acc_norm": 0.37037037037037035,
"acc_norm_stderr": 0.02944316932303154
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.7100840336134454,
"acc_stderr": 0.029472485833136094,
"acc_norm": 0.7100840336134454,
"acc_norm_stderr": 0.029472485833136094
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.36423841059602646,
"acc_stderr": 0.03929111781242741,
"acc_norm": 0.36423841059602646,
"acc_norm_stderr": 0.03929111781242741
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8477064220183487,
"acc_stderr": 0.015405084393157074,
"acc_norm": 0.8477064220183487,
"acc_norm_stderr": 0.015405084393157074
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5740740740740741,
"acc_stderr": 0.033723432716530624,
"acc_norm": 0.5740740740740741,
"acc_norm_stderr": 0.033723432716530624
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.8578431372549019,
"acc_stderr": 0.02450980392156862,
"acc_norm": 0.8578431372549019,
"acc_norm_stderr": 0.02450980392156862
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.8523206751054853,
"acc_stderr": 0.0230943295825957,
"acc_norm": 0.8523206751054853,
"acc_norm_stderr": 0.0230943295825957
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6816143497757847,
"acc_stderr": 0.03126580522513713,
"acc_norm": 0.6816143497757847,
"acc_norm_stderr": 0.03126580522513713
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7480916030534351,
"acc_stderr": 0.03807387116306086,
"acc_norm": 0.7480916030534351,
"acc_norm_stderr": 0.03807387116306086
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.7768595041322314,
"acc_stderr": 0.03800754475228733,
"acc_norm": 0.7768595041322314,
"acc_norm_stderr": 0.03800754475228733
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.8055555555555556,
"acc_stderr": 0.038260763248848646,
"acc_norm": 0.8055555555555556,
"acc_norm_stderr": 0.038260763248848646
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7484662576687117,
"acc_stderr": 0.034089978868575295,
"acc_norm": 0.7484662576687117,
"acc_norm_stderr": 0.034089978868575295
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.4642857142857143,
"acc_stderr": 0.04733667890053756,
"acc_norm": 0.4642857142857143,
"acc_norm_stderr": 0.04733667890053756
},
"harness|hendrycksTest-management|5": {
"acc": 0.8543689320388349,
"acc_stderr": 0.03492606476623791,
"acc_norm": 0.8543689320388349,
"acc_norm_stderr": 0.03492606476623791
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8547008547008547,
"acc_stderr": 0.0230866350868414,
"acc_norm": 0.8547008547008547,
"acc_norm_stderr": 0.0230866350868414
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.69,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.69,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8084291187739464,
"acc_stderr": 0.014072859310451949,
"acc_norm": 0.8084291187739464,
"acc_norm_stderr": 0.014072859310451949
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7601156069364162,
"acc_stderr": 0.022989592543123567,
"acc_norm": 0.7601156069364162,
"acc_norm_stderr": 0.022989592543123567
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.39553072625698327,
"acc_stderr": 0.016353415410075775,
"acc_norm": 0.39553072625698327,
"acc_norm_stderr": 0.016353415410075775
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7581699346405228,
"acc_stderr": 0.024518195641879334,
"acc_norm": 0.7581699346405228,
"acc_norm_stderr": 0.024518195641879334
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.7266881028938906,
"acc_stderr": 0.025311765975426122,
"acc_norm": 0.7266881028938906,
"acc_norm_stderr": 0.025311765975426122
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7808641975308642,
"acc_stderr": 0.023016705640262196,
"acc_norm": 0.7808641975308642,
"acc_norm_stderr": 0.023016705640262196
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.49645390070921985,
"acc_stderr": 0.02982674915328092,
"acc_norm": 0.49645390070921985,
"acc_norm_stderr": 0.02982674915328092
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.4934810951760104,
"acc_stderr": 0.012769150688867503,
"acc_norm": 0.4934810951760104,
"acc_norm_stderr": 0.012769150688867503
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.7426470588235294,
"acc_stderr": 0.02655651947004151,
"acc_norm": 0.7426470588235294,
"acc_norm_stderr": 0.02655651947004151
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6813725490196079,
"acc_stderr": 0.01885008469646872,
"acc_norm": 0.6813725490196079,
"acc_norm_stderr": 0.01885008469646872
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6818181818181818,
"acc_stderr": 0.04461272175910509,
"acc_norm": 0.6818181818181818,
"acc_norm_stderr": 0.04461272175910509
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7346938775510204,
"acc_stderr": 0.028263889943784593,
"acc_norm": 0.7346938775510204,
"acc_norm_stderr": 0.028263889943784593
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8407960199004975,
"acc_stderr": 0.02587064676616913,
"acc_norm": 0.8407960199004975,
"acc_norm_stderr": 0.02587064676616913
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.91,
"acc_stderr": 0.028762349126466125,
"acc_norm": 0.91,
"acc_norm_stderr": 0.028762349126466125
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5843373493975904,
"acc_stderr": 0.03836722176598052,
"acc_norm": 0.5843373493975904,
"acc_norm_stderr": 0.03836722176598052
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.783625730994152,
"acc_stderr": 0.03158149539338733,
"acc_norm": 0.783625730994152,
"acc_norm_stderr": 0.03158149539338733
},
"harness|truthfulqa:mc|0": {
"mc1": 0.5679314565483476,
"mc1_stderr": 0.017341202394988327,
"mc2": 0.7180637595757683,
"mc2_stderr": 0.01502487134248928
},
"harness|winogrande|5": {
"acc": 0.8358326756116812,
"acc_stderr": 0.010410849775222789
},
"harness|gsm8k|5": {
"acc": 0.6391205458680819,
"acc_stderr": 0.013228626753925147
}
}
```
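
The aggregated figures above are also stored in the `results` configuration of this dataset, whose `latest` split always points to the most recent run. A minimal sketch for loading them (assuming the `datasets` library is installed):

```python
from datasets import load_dataset

# The "results" config stores the aggregated metrics shown above;
# the "latest" split always points to the most recent evaluation run.
results = load_dataset(
    "open-llm-leaderboard/details_shadowml__Mixolar-4x7b",
    "results",
    split="latest",
)
```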
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
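
As one illustration of direct use, the per-sample records for any single task can be loaded for error analysis; a minimal sketch (the task name below is one of the configurations listed in this repository):

```python
from datasets import load_dataset

# Per-sample details for a single MMLU subtask; config names follow
# the "harness_<task>_<n_shots>" pattern used by this repository.
details = load_dataset(
    "open-llm-leaderboard/details_shadowml__Mixolar-4x7b",
    "harness_hendrycksTest_abstract_algebra_5",
    split="latest",
)
print(details.column_names)  # inspect the available fields before analysis
```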
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
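
While a full field description is still missing, the repository layout can be inspected programmatically: the dataset exposes one configuration per evaluated task (plus the aggregated `results` configuration), and each configuration has one split per run timestamp alongside a `latest` split. A minimal sketch, assuming the `datasets` library:

```python
from datasets import get_dataset_config_names, get_dataset_split_names

repo = "open-llm-leaderboard/details_shadowml__Mixolar-4x7b"

# One config per evaluated task, plus the aggregated "results" config.
configs = get_dataset_config_names(repo)
print(len(configs), configs[:5])

# Each config has one split per run timestamp plus a "latest" split.
print(get_dataset_split_names(repo, "harness_winogrande_5"))
```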
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | open-llm-leaderboard/details_shadowml__Mixolar-4x7b | [
"region:us"
] | 2024-01-15T16:35:25+00:00 | {"pretty_name": "Evaluation run of shadowml/Mixolar-4x7b", "dataset_summary": "Dataset automatically created during the evaluation run of model [shadowml/Mixolar-4x7b](https://huggingface.co/shadowml/Mixolar-4x7b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_shadowml__Mixolar-4x7b\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2024-01-15T16:33:09.510428](https://huggingface.co/datasets/open-llm-leaderboard/details_shadowml__Mixolar-4x7b/blob/main/results_2024-01-15T16-33-09.510428.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6665388431359645,\n \"acc_stderr\": 0.0316185747707982,\n \"acc_norm\": 0.6674821725362098,\n \"acc_norm_stderr\": 0.03226168085200764,\n \"mc1\": 0.5679314565483476,\n \"mc1_stderr\": 0.017341202394988327,\n \"mc2\": 0.7180637595757683,\n \"mc2_stderr\": 0.01502487134248928\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.6843003412969283,\n \"acc_stderr\": 0.013582571095815291,\n \"acc_norm\": 0.7107508532423208,\n \"acc_norm_stderr\": 0.013250012579393441\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.7133041226847242,\n \"acc_stderr\": 0.004512940497462742,\n \"acc_norm\": 0.8843855805616411,\n \"acc_norm_stderr\": 0.003191084792793155\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.42,\n \"acc_stderr\": 0.049604496374885836,\n \"acc_norm\": 0.42,\n \"acc_norm_stderr\": 0.049604496374885836\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6148148148148148,\n \"acc_stderr\": 0.04203921040156279,\n \"acc_norm\": 0.6148148148148148,\n \"acc_norm_stderr\": 0.04203921040156279\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.743421052631579,\n \"acc_stderr\": 0.0355418036802569,\n \"acc_norm\": 0.743421052631579,\n \"acc_norm_stderr\": 0.0355418036802569\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.74,\n \"acc_stderr\": 0.0440844002276808,\n \"acc_norm\": 0.74,\n \"acc_norm_stderr\": 0.0440844002276808\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.6792452830188679,\n \"acc_stderr\": 0.02872750295788027,\n \"acc_norm\": 0.6792452830188679,\n \"acc_norm_stderr\": 0.02872750295788027\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7638888888888888,\n \"acc_stderr\": 0.03551446610810826,\n \"acc_norm\": 0.7638888888888888,\n \"acc_norm_stderr\": 0.03551446610810826\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.46,\n \"acc_stderr\": 0.05009082659620333,\n \"acc_norm\": 0.46,\n 
\"acc_norm_stderr\": 0.05009082659620333\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.52,\n \"acc_stderr\": 0.05021167315686779,\n \"acc_norm\": 0.52,\n \"acc_norm_stderr\": 0.05021167315686779\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.32,\n \"acc_stderr\": 0.046882617226215034,\n \"acc_norm\": 0.32,\n \"acc_norm_stderr\": 0.046882617226215034\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6705202312138728,\n \"acc_stderr\": 0.03583901754736412,\n \"acc_norm\": 0.6705202312138728,\n \"acc_norm_stderr\": 0.03583901754736412\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.38235294117647056,\n \"acc_stderr\": 0.04835503696107223,\n \"acc_norm\": 0.38235294117647056,\n \"acc_norm_stderr\": 0.04835503696107223\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.76,\n \"acc_stderr\": 0.042923469599092816,\n \"acc_norm\": 0.76,\n \"acc_norm_stderr\": 0.042923469599092816\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.6340425531914894,\n \"acc_stderr\": 0.031489558297455304,\n \"acc_norm\": 0.6340425531914894,\n \"acc_norm_stderr\": 0.031489558297455304\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.5,\n \"acc_stderr\": 0.047036043419179864,\n \"acc_norm\": 0.5,\n \"acc_norm_stderr\": 0.047036043419179864\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.6344827586206897,\n \"acc_stderr\": 0.040131241954243856,\n \"acc_norm\": 0.6344827586206897,\n \"acc_norm_stderr\": 0.040131241954243856\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.4947089947089947,\n \"acc_stderr\": 0.02574986828855657,\n \"acc_norm\": 0.4947089947089947,\n \"acc_norm_stderr\": 0.02574986828855657\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.4444444444444444,\n \"acc_stderr\": 0.044444444444444495,\n \"acc_norm\": 0.4444444444444444,\n \"acc_norm_stderr\": 0.044444444444444495\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.34,\n \"acc_stderr\": 0.04760952285695235,\n \"acc_norm\": 0.34,\n \"acc_norm_stderr\": 0.04760952285695235\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.8193548387096774,\n \"acc_stderr\": 0.021886178567172534,\n \"acc_norm\": 0.8193548387096774,\n \"acc_norm_stderr\": 0.021886178567172534\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.5024630541871922,\n \"acc_stderr\": 0.03517945038691063,\n \"acc_norm\": 0.5024630541871922,\n \"acc_norm_stderr\": 0.03517945038691063\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.72,\n \"acc_stderr\": 0.04512608598542128,\n \"acc_norm\": 0.72,\n \"acc_norm_stderr\": 0.04512608598542128\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.806060606060606,\n \"acc_stderr\": 0.03087414513656209,\n \"acc_norm\": 0.806060606060606,\n \"acc_norm_stderr\": 0.03087414513656209\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.8686868686868687,\n \"acc_stderr\": 0.024063156416822516,\n \"acc_norm\": 0.8686868686868687,\n \"acc_norm_stderr\": 0.024063156416822516\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.9015544041450777,\n \"acc_stderr\": 0.02150024957603347,\n \"acc_norm\": 0.9015544041450777,\n \"acc_norm_stderr\": 0.02150024957603347\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.6666666666666666,\n \"acc_stderr\": 
0.023901157979402538,\n \"acc_norm\": 0.6666666666666666,\n \"acc_norm_stderr\": 0.023901157979402538\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.37037037037037035,\n \"acc_stderr\": 0.02944316932303154,\n \"acc_norm\": 0.37037037037037035,\n \"acc_norm_stderr\": 0.02944316932303154\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.7100840336134454,\n \"acc_stderr\": 0.029472485833136094,\n \"acc_norm\": 0.7100840336134454,\n \"acc_norm_stderr\": 0.029472485833136094\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.36423841059602646,\n \"acc_stderr\": 0.03929111781242741,\n \"acc_norm\": 0.36423841059602646,\n \"acc_norm_stderr\": 0.03929111781242741\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.8477064220183487,\n \"acc_stderr\": 0.015405084393157074,\n \"acc_norm\": 0.8477064220183487,\n \"acc_norm_stderr\": 0.015405084393157074\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.5740740740740741,\n \"acc_stderr\": 0.033723432716530624,\n \"acc_norm\": 0.5740740740740741,\n \"acc_norm_stderr\": 0.033723432716530624\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.8578431372549019,\n \"acc_stderr\": 0.02450980392156862,\n \"acc_norm\": 0.8578431372549019,\n \"acc_norm_stderr\": 0.02450980392156862\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.8523206751054853,\n \"acc_stderr\": 0.0230943295825957,\n \"acc_norm\": 0.8523206751054853,\n \"acc_norm_stderr\": 0.0230943295825957\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6816143497757847,\n \"acc_stderr\": 0.03126580522513713,\n \"acc_norm\": 0.6816143497757847,\n \"acc_norm_stderr\": 0.03126580522513713\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.7480916030534351,\n \"acc_stderr\": 0.03807387116306086,\n \"acc_norm\": 0.7480916030534351,\n \"acc_norm_stderr\": 0.03807387116306086\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.7768595041322314,\n \"acc_stderr\": 0.03800754475228733,\n \"acc_norm\": 0.7768595041322314,\n \"acc_norm_stderr\": 0.03800754475228733\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.8055555555555556,\n \"acc_stderr\": 0.038260763248848646,\n \"acc_norm\": 0.8055555555555556,\n \"acc_norm_stderr\": 0.038260763248848646\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.7484662576687117,\n \"acc_stderr\": 0.034089978868575295,\n \"acc_norm\": 0.7484662576687117,\n \"acc_norm_stderr\": 0.034089978868575295\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.4642857142857143,\n \"acc_stderr\": 0.04733667890053756,\n \"acc_norm\": 0.4642857142857143,\n \"acc_norm_stderr\": 0.04733667890053756\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.8543689320388349,\n \"acc_stderr\": 0.03492606476623791,\n \"acc_norm\": 0.8543689320388349,\n \"acc_norm_stderr\": 0.03492606476623791\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8547008547008547,\n \"acc_stderr\": 0.0230866350868414,\n \"acc_norm\": 0.8547008547008547,\n \"acc_norm_stderr\": 0.0230866350868414\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.69,\n \"acc_stderr\": 0.04648231987117316,\n \"acc_norm\": 0.69,\n \"acc_norm_stderr\": 0.04648231987117316\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8084291187739464,\n \"acc_stderr\": 0.014072859310451949,\n \"acc_norm\": 0.8084291187739464,\n 
\"acc_norm_stderr\": 0.014072859310451949\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.7601156069364162,\n \"acc_stderr\": 0.022989592543123567,\n \"acc_norm\": 0.7601156069364162,\n \"acc_norm_stderr\": 0.022989592543123567\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.39553072625698327,\n \"acc_stderr\": 0.016353415410075775,\n \"acc_norm\": 0.39553072625698327,\n \"acc_norm_stderr\": 0.016353415410075775\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.7581699346405228,\n \"acc_stderr\": 0.024518195641879334,\n \"acc_norm\": 0.7581699346405228,\n \"acc_norm_stderr\": 0.024518195641879334\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7266881028938906,\n \"acc_stderr\": 0.025311765975426122,\n \"acc_norm\": 0.7266881028938906,\n \"acc_norm_stderr\": 0.025311765975426122\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.7808641975308642,\n \"acc_stderr\": 0.023016705640262196,\n \"acc_norm\": 0.7808641975308642,\n \"acc_norm_stderr\": 0.023016705640262196\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.49645390070921985,\n \"acc_stderr\": 0.02982674915328092,\n \"acc_norm\": 0.49645390070921985,\n \"acc_norm_stderr\": 0.02982674915328092\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4934810951760104,\n \"acc_stderr\": 0.012769150688867503,\n \"acc_norm\": 0.4934810951760104,\n \"acc_norm_stderr\": 0.012769150688867503\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.7426470588235294,\n \"acc_stderr\": 0.02655651947004151,\n \"acc_norm\": 0.7426470588235294,\n \"acc_norm_stderr\": 0.02655651947004151\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.6813725490196079,\n \"acc_stderr\": 0.01885008469646872,\n \"acc_norm\": 0.6813725490196079,\n \"acc_norm_stderr\": 0.01885008469646872\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6818181818181818,\n \"acc_stderr\": 0.04461272175910509,\n \"acc_norm\": 0.6818181818181818,\n \"acc_norm_stderr\": 0.04461272175910509\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.7346938775510204,\n \"acc_stderr\": 0.028263889943784593,\n \"acc_norm\": 0.7346938775510204,\n \"acc_norm_stderr\": 0.028263889943784593\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8407960199004975,\n \"acc_stderr\": 0.02587064676616913,\n \"acc_norm\": 0.8407960199004975,\n \"acc_norm_stderr\": 0.02587064676616913\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.91,\n \"acc_stderr\": 0.028762349126466125,\n \"acc_norm\": 0.91,\n \"acc_norm_stderr\": 0.028762349126466125\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5843373493975904,\n \"acc_stderr\": 0.03836722176598052,\n \"acc_norm\": 0.5843373493975904,\n \"acc_norm_stderr\": 0.03836722176598052\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.783625730994152,\n \"acc_stderr\": 0.03158149539338733,\n \"acc_norm\": 0.783625730994152,\n \"acc_norm_stderr\": 0.03158149539338733\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.5679314565483476,\n \"mc1_stderr\": 0.017341202394988327,\n \"mc2\": 0.7180637595757683,\n \"mc2_stderr\": 0.01502487134248928\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.8358326756116812,\n \"acc_stderr\": 0.010410849775222789\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.6391205458680819,\n \"acc_stderr\": 0.013228626753925147\n }\n}\n```", "repo_url": "https://huggingface.co/shadowml/Mixolar-4x7b", "leaderboard_url": 
"https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2024_01_15T16_33_09.510428", "path": ["**/details_harness|arc:challenge|25_2024-01-15T16-33-09.510428.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2024-01-15T16-33-09.510428.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2024_01_15T16_33_09.510428", "path": ["**/details_harness|gsm8k|5_2024-01-15T16-33-09.510428.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2024-01-15T16-33-09.510428.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2024_01_15T16_33_09.510428", "path": ["**/details_harness|hellaswag|10_2024-01-15T16-33-09.510428.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2024-01-15T16-33-09.510428.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2024_01_15T16_33_09.510428", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-15T16-33-09.510428.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-15T16-33-09.510428.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-15T16-33-09.510428.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-15T16-33-09.510428.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-15T16-33-09.510428.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-15T16-33-09.510428.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-15T16-33-09.510428.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-15T16-33-09.510428.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-01-15T16-33-09.510428.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-15T16-33-09.510428.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-15T16-33-09.510428.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-15T16-33-09.510428.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-15T16-33-09.510428.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-15T16-33-09.510428.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-15T16-33-09.510428.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-15T16-33-09.510428.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-15T16-33-09.510428.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-15T16-33-09.510428.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-15T16-33-09.510428.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-15T16-33-09.510428.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-15T16-33-09.510428.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-15T16-33-09.510428.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-15T16-33-09.510428.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-15T16-33-09.510428.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-15T16-33-09.510428.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-15T16-33-09.510428.parquet", 
"**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-15T16-33-09.510428.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-15T16-33-09.510428.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-15T16-33-09.510428.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-15T16-33-09.510428.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-15T16-33-09.510428.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-15T16-33-09.510428.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-15T16-33-09.510428.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-15T16-33-09.510428.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-15T16-33-09.510428.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-15T16-33-09.510428.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-15T16-33-09.510428.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-15T16-33-09.510428.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-15T16-33-09.510428.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-15T16-33-09.510428.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-15T16-33-09.510428.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-15T16-33-09.510428.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-15T16-33-09.510428.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-15T16-33-09.510428.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-15T16-33-09.510428.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-15T16-33-09.510428.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-15T16-33-09.510428.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-15T16-33-09.510428.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-15T16-33-09.510428.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-15T16-33-09.510428.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-15T16-33-09.510428.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-15T16-33-09.510428.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-15T16-33-09.510428.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-15T16-33-09.510428.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-15T16-33-09.510428.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-15T16-33-09.510428.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-15T16-33-09.510428.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-15T16-33-09.510428.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-15T16-33-09.510428.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-15T16-33-09.510428.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-15T16-33-09.510428.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-15T16-33-09.510428.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-15T16-33-09.510428.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-15T16-33-09.510428.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-15T16-33-09.510428.parquet", 
"**/details_harness|hendrycksTest-college_mathematics|5_2024-01-15T16-33-09.510428.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-15T16-33-09.510428.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-15T16-33-09.510428.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-15T16-33-09.510428.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-15T16-33-09.510428.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-15T16-33-09.510428.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-15T16-33-09.510428.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-15T16-33-09.510428.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-15T16-33-09.510428.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-15T16-33-09.510428.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-15T16-33-09.510428.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-15T16-33-09.510428.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-15T16-33-09.510428.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-15T16-33-09.510428.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-15T16-33-09.510428.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-15T16-33-09.510428.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-15T16-33-09.510428.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-15T16-33-09.510428.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-15T16-33-09.510428.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-15T16-33-09.510428.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-15T16-33-09.510428.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-15T16-33-09.510428.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-15T16-33-09.510428.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-15T16-33-09.510428.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-15T16-33-09.510428.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-15T16-33-09.510428.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-15T16-33-09.510428.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-15T16-33-09.510428.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-15T16-33-09.510428.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-15T16-33-09.510428.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-15T16-33-09.510428.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-15T16-33-09.510428.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-15T16-33-09.510428.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-15T16-33-09.510428.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-15T16-33-09.510428.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-15T16-33-09.510428.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-15T16-33-09.510428.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-15T16-33-09.510428.parquet", 
"**/details_harness|hendrycksTest-prehistory|5_2024-01-15T16-33-09.510428.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-15T16-33-09.510428.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-15T16-33-09.510428.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-15T16-33-09.510428.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-15T16-33-09.510428.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-15T16-33-09.510428.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-15T16-33-09.510428.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-15T16-33-09.510428.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-15T16-33-09.510428.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-15T16-33-09.510428.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-15T16-33-09.510428.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2024_01_15T16_33_09.510428", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-15T16-33-09.510428.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-15T16-33-09.510428.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2024_01_15T16_33_09.510428", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-15T16-33-09.510428.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-15T16-33-09.510428.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2024_01_15T16_33_09.510428", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-15T16-33-09.510428.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-15T16-33-09.510428.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2024_01_15T16_33_09.510428", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-15T16-33-09.510428.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-15T16-33-09.510428.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2024_01_15T16_33_09.510428", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-15T16-33-09.510428.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-15T16-33-09.510428.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2024_01_15T16_33_09.510428", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-15T16-33-09.510428.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-15T16-33-09.510428.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2024_01_15T16_33_09.510428", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-15T16-33-09.510428.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-15T16-33-09.510428.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2024_01_15T16_33_09.510428", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-15T16-33-09.510428.parquet"]}, 
{"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-15T16-33-09.510428.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2024_01_15T16_33_09.510428", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-15T16-33-09.510428.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-15T16-33-09.510428.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2024_01_15T16_33_09.510428", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-15T16-33-09.510428.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-15T16-33-09.510428.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2024_01_15T16_33_09.510428", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-15T16-33-09.510428.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-15T16-33-09.510428.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2024_01_15T16_33_09.510428", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-15T16-33-09.510428.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-15T16-33-09.510428.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2024_01_15T16_33_09.510428", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-15T16-33-09.510428.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-15T16-33-09.510428.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2024_01_15T16_33_09.510428", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-15T16-33-09.510428.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-15T16-33-09.510428.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2024_01_15T16_33_09.510428", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-15T16-33-09.510428.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-15T16-33-09.510428.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2024_01_15T16_33_09.510428", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-15T16-33-09.510428.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-15T16-33-09.510428.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2024_01_15T16_33_09.510428", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-15T16-33-09.510428.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-15T16-33-09.510428.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2024_01_15T16_33_09.510428", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-15T16-33-09.510428.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-15T16-33-09.510428.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2024_01_15T16_33_09.510428", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-15T16-33-09.510428.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-15T16-33-09.510428.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2024_01_15T16_33_09.510428", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-15T16-33-09.510428.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-15T16-33-09.510428.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2024_01_15T16_33_09.510428", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-15T16-33-09.510428.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-15T16-33-09.510428.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2024_01_15T16_33_09.510428", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-15T16-33-09.510428.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-15T16-33-09.510428.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2024_01_15T16_33_09.510428", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-15T16-33-09.510428.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-15T16-33-09.510428.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2024_01_15T16_33_09.510428", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-15T16-33-09.510428.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-15T16-33-09.510428.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2024_01_15T16_33_09.510428", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-15T16-33-09.510428.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-15T16-33-09.510428.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2024_01_15T16_33_09.510428", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-15T16-33-09.510428.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-15T16-33-09.510428.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2024_01_15T16_33_09.510428", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-15T16-33-09.510428.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-15T16-33-09.510428.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2024_01_15T16_33_09.510428", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-15T16-33-09.510428.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-15T16-33-09.510428.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2024_01_15T16_33_09.510428", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-15T16-33-09.510428.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-15T16-33-09.510428.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2024_01_15T16_33_09.510428", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-15T16-33-09.510428.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-15T16-33-09.510428.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2024_01_15T16_33_09.510428", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-15T16-33-09.510428.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-15T16-33-09.510428.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2024_01_15T16_33_09.510428", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-15T16-33-09.510428.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-15T16-33-09.510428.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2024_01_15T16_33_09.510428", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-15T16-33-09.510428.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-15T16-33-09.510428.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2024_01_15T16_33_09.510428", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-15T16-33-09.510428.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-15T16-33-09.510428.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2024_01_15T16_33_09.510428", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-15T16-33-09.510428.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-15T16-33-09.510428.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2024_01_15T16_33_09.510428", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-15T16-33-09.510428.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-15T16-33-09.510428.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2024_01_15T16_33_09.510428", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-15T16-33-09.510428.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-15T16-33-09.510428.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2024_01_15T16_33_09.510428", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-15T16-33-09.510428.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-15T16-33-09.510428.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2024_01_15T16_33_09.510428", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-15T16-33-09.510428.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-15T16-33-09.510428.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2024_01_15T16_33_09.510428", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-15T16-33-09.510428.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-15T16-33-09.510428.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2024_01_15T16_33_09.510428", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-15T16-33-09.510428.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-15T16-33-09.510428.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2024_01_15T16_33_09.510428", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-15T16-33-09.510428.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-15T16-33-09.510428.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2024_01_15T16_33_09.510428", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-15T16-33-09.510428.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-15T16-33-09.510428.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2024_01_15T16_33_09.510428", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-15T16-33-09.510428.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-15T16-33-09.510428.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2024_01_15T16_33_09.510428", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-15T16-33-09.510428.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-15T16-33-09.510428.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2024_01_15T16_33_09.510428", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-15T16-33-09.510428.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-15T16-33-09.510428.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2024_01_15T16_33_09.510428", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-15T16-33-09.510428.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-15T16-33-09.510428.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2024_01_15T16_33_09.510428", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-15T16-33-09.510428.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-15T16-33-09.510428.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2024_01_15T16_33_09.510428", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-01-15T16-33-09.510428.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-professional_law|5_2024-01-15T16-33-09.510428.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2024_01_15T16_33_09.510428", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-15T16-33-09.510428.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-15T16-33-09.510428.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2024_01_15T16_33_09.510428", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-15T16-33-09.510428.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-15T16-33-09.510428.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2024_01_15T16_33_09.510428", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-15T16-33-09.510428.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-15T16-33-09.510428.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2024_01_15T16_33_09.510428", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-15T16-33-09.510428.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-15T16-33-09.510428.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2024_01_15T16_33_09.510428", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-15T16-33-09.510428.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-15T16-33-09.510428.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2024_01_15T16_33_09.510428", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-15T16-33-09.510428.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-15T16-33-09.510428.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2024_01_15T16_33_09.510428", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-15T16-33-09.510428.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-15T16-33-09.510428.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2024_01_15T16_33_09.510428", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-15T16-33-09.510428.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-15T16-33-09.510428.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2024_01_15T16_33_09.510428", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-15T16-33-09.510428.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-15T16-33-09.510428.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2024_01_15T16_33_09.510428", "path": ["**/details_harness|winogrande|5_2024-01-15T16-33-09.510428.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2024-01-15T16-33-09.510428.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2024_01_15T16_33_09.510428", "path": ["results_2024-01-15T16-33-09.510428.parquet"]}, {"split": "latest", "path": 
["results_2024-01-15T16-33-09.510428.parquet"]}]}]} | 2024-01-15T16:35:46+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of shadowml/Mixolar-4x7b
Dataset automatically created during the evaluation run of model shadowml/Mixolar-4x7b on the Open LLM Leaderboard.
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
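For example, a minimal sketch: the repository name below follows the leaderboard's `details_<org>__<model>` naming convention for this model, and `harness_winogrande_5` is one of the configurations listed in this card's metadata:
```python
from datasets import load_dataset

# Load one evaluated task (configuration) from this evaluation run.
data = load_dataset("open-llm-leaderboard/details_shadowml__Mixolar-4x7b",
	"harness_winogrande_5",
	split="train")
```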
## Latest results
These are the latest results from run 2024-01-15T16:33:09.510428 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
## Dataset Details
### Dataset Description
- Curated by:
- Funded by [optional]:
- Shared by [optional]:
- Language(s) (NLP):
- License:
### Dataset Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Out-of-Scope Use
## Dataset Structure
## Dataset Creation
### Curation Rationale
### Source Data
#### Data Collection and Processing
#### Who are the source data producers?
### Annotations [optional]
#### Annotation process
#### Who are the annotators?
#### Personal and Sensitive Information
## Bias, Risks, and Limitations
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Dataset Card Authors [optional]
## Dataset Card Contact
299b1a95bf25a953b7af2f71dacb39fa7a5299c5 |
This is the dataset used by MINERVA and LLEMMA (https://huggingface.co/EleutherAI/llemma_7b). It contains the subset of MMLU subjects that MINERVA defines as STEM.
The included subjects are:
- 'abstract_algebra',
- 'anatomy',
- 'astronomy',
- 'college_biology',
- 'college_chemistry',
- 'college_computer_science',
- 'college_mathematics',
- 'college_physics',
- 'computer_security',
- 'conceptual_physics',
- 'electrical_engineering',
- 'elementary_mathematics',
- 'high_school_biology',
- 'high_school_chemistry',
- 'high_school_computer_science',
- 'high_school_mathematics',
- 'high_school_physics',
- 'high_school_statistics',
- 'machine_learning'
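For reference, a minimal loading sketch in the style of the other cards in this collection; the `test` split name follows the usual MMLU layout and is an assumption, not something this card states:
```python
from datasets import load_dataset

# Split name "test" is assumed from the standard MMLU layout.
mmlu_stem = load_dataset("TIGER-Lab/MMLU-STEM", split="test")
print(mmlu_stem[0])
```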
Please cite the original MMLU paper when using it. | TIGER-Lab/MMLU-STEM | [
"license:mit",
"region:us"
] | 2024-01-15T16:45:00+00:00 | {"license": "mit"} | 2024-01-15T16:53:11+00:00 | [] | [] | TAGS
#license-mit #region-us
|
c1cb55e3fb2a08bef749c0de59500ef97ca582b6 |
# Augmented Clinical Notes
The Augmented Clinical Notes dataset is an extension of existing datasets containing 30,000 triplets from different sources:
- **Real clinical notes** (*[PMC-Patients](https://arxiv.org/abs/2202.13876)*): Clinical notes correspond to patient summaries from the PMC-Patients dataset, which are extracted from PubMed Central case studies.
- **Synthetic dialogues** (*[NoteChat](https://arxiv.org/abs/2310.15959)*): Synthetic patient-doctor conversations were generated from clinical notes using GPT-3.5.
- **Structured patient information** (*ours*): From clinical notes, we generate structured patient summaries using GPT-4 and a tailored medical information template (see details below).
This dataset was used to train [**MediNote-7B**](https://huggingface.co/AGBonnet/medinote-7b) and [**MediNote-13B**](https://huggingface.co/AGBonnet/medinote-13b), a set of clinical note generators fine-tuned from the [**MediTron**](https://huggingface.co/epfl-llm/meditron-7b) large language models.
Our full report is available [here](./report.pdf).
## Dataset Details
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** Antoine Bonnet and Paul Boulenger
- **Language(s):** English only
- **Repository:** [EPFL-IC-Make-Team/ClinicalNotes](https://github.com/EPFL-IC-Make-Team/ClinicalNotes)
- **Paper:** *[MediNote: Automated Clinical Notes](report.pdf)*
## Dataset Creation
**Clinical notes**. Our primary source of clinical notes is *[PMC-Patients](https://arxiv.org/abs/2202.13876)*. This large-scale dataset contains 167K patient summaries extracted from open-access case studies published in PubMed Central. Each note encapsulates a detailed case presentation as written by a doctor, presenting a thorough summary encompassing the patient’s visit, medical history, symptoms, administered treatments, as well as the discharge summary and outcome of the intervention. These comprehensive case presentations offer a rich and diverse collection of medical scenarios, forming a robust foundation for our model training and evaluation.
**Synthetic dialogues**. Distribution of confidential patient-doctor conversations is forbidden, so no large-scale dataset is publicly available for training. We circumvent the lack of real dialogue data by building upon [NoteChat](https://huggingface.co/datasets/akemiH/NoteChat), an extension of PMC-Patients with 167K synthetic patient-doctor conversations. Each dialogue transcript within the NoteChat dataset was generated from a clinical note by ChatGPT (version `gpt-3.5-turbo-0613`).
**Patient information**. We augment the PMC-Patients and NoteChat datasets by extracting structured patient information from the 30K longest clinical notes. To do so, we prompt GPT-4 (version `gpt-4-turbo-0613`) with zero-shot instructions, providing clinical notes and a structured template of patient medical information with feature definitions. This template, shown below, encapsulates crucial aspects of a clinical note such as the patient’s admission to a care center, medical history, current symptoms, as well as the doctor’s diagnosis and treatment plan.
The full data pipeline is shown below.
<p align="center">
<img width="70%" src="data_pipeline.pdf" alt="Data pipeline" title="Data pipeline">
</p>
### Medical information template
Here we show the medical template used to structure clinical notes. A JSON version is also available as `template_definitions.json`.
<p align="center">
<img width="70%" src="template.pdf" alt="Medical information template" title="Medical information template">
</p>
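To make the template's shape concrete, below is a purely illustrative sketch written as a Python dict; the field names are assumptions inferred from the description above (admission, medical history, symptoms, diagnosis, treatment), not the actual keys of `template_definitions.json`:
```python
# Illustrative only: the real keys and nesting are defined in
# template_definitions.json shipped with this dataset.
patient_summary_sketch = {
    "visit motivation": "...",                 # reason for the visit
    "admission": {"care center": "...", "duration": "..."},
    "patient medical history": "...",
    "symptoms": ["..."],
    "diagnosis": "...",
    "treatment plan": "...",
}
```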
### Dialogue Quality
The primary aim of synthetic dialogues is to distill comprehensive information from the case presentation, transforming it into a plausible and engaging conversation.
Newer versions of the dataset include higher quality dialogues generated by GPT-4 and NoteChat, a multi-agent dialogue generation pipeline (see the [NoteChat repository](https://github.com/believewhat/Dr.NoteAid) for more information).
Dialogues produced by ChatGPT tend to lack realism and frequently adhere to a pattern where the doctor poses a series of questions mirroring the facts from the original clinical notes, receiving simple ’Yes’ responses from the patient. Nevertheless, we decided to use ChatGPT dialogues as they were the only ones available during the training phase.
Clinical notes within NoteChat were truncated prior to the dialogue generation process. Consequently, the information lost due to truncation from the clinical note is also missing in the resulting dialogue. While complete notes were accessible from PMC-Patients, a conscious decision was made to fine-tune our models using truncated notes. This decision aimed at preventing our fine-tuned models from being inadvertently trained to hallucinate information towards the conclusion of a note. Notably, certain ChatGPT dialogues involving scenarios where a patient passes away and a subsequent dialogue with a family member commences revealed instances of prompt leaks. These leaks manifested as the prompt used for synthetic dialogue generation being inadvertently repeated within the dialogue.
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
Each row of the dataset represents one dialogue-summary-note triplet, and consists of the following dataset fields (all strings):
| Field | Description | Source |
|-|-|-|
| `idx` | Unique identifier, index in the original NoteChat-ChatGPT dataset | NoteChat |
| `note` | Clinical note used by NoteChat (possibly truncated) | NoteChat |
| `full_note` | Full clinical note | PMC-Patients |
| `conversation` | Patient-doctor dialogue | NoteChat |
| `summary`| Patient information summary (JSON) | ours |
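A minimal loading sketch, assuming the default configuration declared in this card's metadata; since the `summary` field is stored as a JSON string, it is parsed with `json.loads`:
```python
import json
from datasets import load_dataset

dataset = load_dataset("AGBonnet/augmented-clinical-notes", split="train")

example = dataset[0]
summary = json.loads(example["summary"])  # structured patient information
print(summary.keys())                     # template fields
print(example["conversation"][:200])      # start of the synthetic dialogue
```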
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
While this dataset was originally used to fine-tune LLMs to extract structured patient information from dialogue, it can also be used for diverse applications in the healthcare domain, such as training models to extract comprehensive tabular patient features from clinical notes.
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
- **Synthetic Data**: NoteChat dialogues were synthetically generated from clinical notes; they are not completely realistic and therefore fail to accurately represent real patient-doctor conversations. Real patient-doctor conversations are of course preferred, but their distribution is forbidden in the US by the [Health Insurance Portability and Accountability Act of 1996](https://www.cdc.gov/phlp/publications/topic/hipaa.html).
- **Representation**: PMC-Patients clinical notes have been extracted from English PubMed Central publications, and therefore over-represent clinical settings from English-speaking countries.
## Acknowledgments
We thank Prof. Mary-Anne Hartley for her advice on the appropriate template for structured medical patient summaries.
<!--
## Citation
If you use the Augmented Clinical Notes dataset, please cite our work:
```
ADD CITATION
```
--> | AGBonnet/augmented-clinical-notes | [
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:en",
"license:mit",
"medical",
"health",
"arxiv:2202.13876",
"arxiv:2310.15959",
"region:us"
] | 2024-01-15T16:45:19+00:00 | {"language": ["en"], "license": "mit", "size_categories": ["10K<n<100K"], "task_categories": ["text-generation"], "pretty_name": "Augmented Clinical Notes", "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "augmented_notes_30K.jsonl"}]}], "tags": ["medical", "health"], "dataset_info": {"features": [{"name": "idx", "dtype": "string"}, {"name": "note", "dtype": "string"}, {"name": "full_note", "dtype": "string"}, {"name": "conversation", "dtype": "string"}, {"name": "summary", "dtype": "string"}]}} | 2024-01-24T10:38:13+00:00 | [
"2202.13876",
"2310.15959"
] | [
"en"
] | TAGS
#task_categories-text-generation #size_categories-10K<n<100K #language-English #license-mit #medical #health #arxiv-2202.13876 #arxiv-2310.15959 #region-us
1e3b7a658152524190258822e8c1ad20168c76f4 | CommitBench: A Benchmark for Commit Message Generation
We provide CommitBench as an open-source, reproducible, privacy- and license-aware benchmark for commit message generation. The dataset is gathered from GitHub repositories with licenses that permit redistribution. We provide six programming languages: Java, Python, Go, JavaScript, PHP, and Ruby. The commit messages in natural language are restricted to English, as it is the working language in many software development projects. The dataset has 1,664,590 examples that were selected using extensive quality-focused filtering techniques (e.g., excluding bot commits). Additionally, we provide a version with longer sequences for benchmarking models with more extended sequence input. | Maxscha/commitbench_long | [
"task_categories:translation",
"task_categories:summarization",
"language:en",
"license:cc",
"region:us"
] | 2024-01-15T16:57:52+00:00 | {"language": ["en"], "license": "cc", "task_categories": ["translation", "summarization"], "pretty_name": "CommitBench"} | 2024-02-14T11:21:07+00:00 | [] | [
"en"
] | TAGS
#task_categories-translation #task_categories-summarization #language-English #license-cc #region-us
9986f96eef682dc39aee8d9cccf4cd481a8aef26 |
# Dataset Card for Wildberries questions
### Dataset Summary
This is a dataset of questions and answers scraped from product pages of the Russian marketplace [Wildberries](https://www.wildberries.ru). The dataset contains all questions and answers, as well as all metadata returned by the API. However, the "productName" field may be empty in some cases because the API does not return the name for old products.
### Languages
The dataset is mostly in Russian, but there may be other languages present.
## Dataset Structure
### Data Fields
This dataset consists of the following fields:
- `imtId` - An identifier for the item (integer)
- `nmId` - A numeric identifier associated with the item (integer)
- `productName` - The name of the product (string, can be empty)
- `supplierArticle` - The article number provided by the supplier (string)
- `supplierId` - The identifier for the supplier (integer)
- `supplierName` - The name of the supplier (string)
- `brandName` - The name of the brand (string)
- `question` - The customer's question regarding the product (string)
- `answer` - The provided answer to the question (string)
### Data Splits
All 7,410,007 examples are in the train split; there is no validation split.
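A minimal loading sketch, following the conventions of similar cards in this collection; the field names come from the list above:
```python
from datasets import load_dataset

dataset = load_dataset("nyuuzyou/wb-questions", split="train")

example = dataset[0]
print(example["productName"], "-", example["brandName"])
print("Q:", example["question"])
print("A:", example["answer"])
```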
## Additional Information
### License
This dataset is dedicated to the public domain under the Creative Commons Zero (CC0) license. This means you can:
* Use it for any purpose, including commercial projects.
* Modify it however you like.
* Distribute it without asking permission.
No attribution is required, but it's always appreciated!
CC0 license: https://creativecommons.org/publicdomain/zero/1.0/deed.en
To learn more about CC0, visit the Creative Commons website: https://creativecommons.org/publicdomain/zero/1.0/
### Dataset Curators
- [nyuuzyou](https://ducks.party) | nyuuzyou/wb-questions | [
"task_categories:text-generation",
"task_categories:question-answering",
"task_ids:language-modeling",
"task_ids:open-domain-qa",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:ru",
"license:cc0-1.0",
"region:us"
] | 2024-01-15T17:13:19+00:00 | {"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "language": ["ru"], "license": ["cc0-1.0"], "multilinguality": ["monolingual"], "size_categories": ["1M<n<10M"], "source_datasets": ["original"], "task_categories": ["text-generation", "question-answering"], "task_ids": ["language-modeling", "open-domain-qa"], "pretty_name": "Wildberries Q&A"} | 2024-01-15T17:30:54+00:00 | [] | [
"ru"
] | TAGS
#task_categories-text-generation #task_categories-question-answering #task_ids-language-modeling #task_ids-open-domain-qa #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-original #language-Russian #license-cc0-1.0 #region-us
|
44effb7f2d983cd25e17c852280eea0c8a0a64e6 |
# Dataset Card for Evaluation run of Weyaxi/HelpSteer-filtered-Solar-Instruct
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [Weyaxi/HelpSteer-filtered-Solar-Instruct](https://huggingface.co/Weyaxi/HelpSteer-filtered-Solar-Instruct) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Weyaxi__HelpSteer-filtered-Solar-Instruct",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2024-01-15T17:25:12.047837](https://huggingface.co/datasets/open-llm-leaderboard/details_Weyaxi__HelpSteer-filtered-Solar-Instruct/blob/main/results_2024-01-15T17-25-12.047837.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.6427219840393912,
"acc_stderr": 0.0319380448227668,
"acc_norm": 0.6461776033755803,
"acc_norm_stderr": 0.032576281401967964,
"mc1": 0.33047735618115054,
"mc1_stderr": 0.0164667696136983,
"mc2": 0.46234588692773587,
"mc2_stderr": 0.015244437978733195
},
"harness|arc:challenge|25": {
"acc": 0.5878839590443686,
"acc_stderr": 0.014383915302225407,
"acc_norm": 0.6313993174061433,
"acc_norm_stderr": 0.0140978106780422
},
"harness|hellaswag|10": {
"acc": 0.6395140410276837,
"acc_stderr": 0.0047916019756127646,
"acc_norm": 0.8305118502290381,
"acc_norm_stderr": 0.0037441574425365583
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.33,
"acc_stderr": 0.04725815626252605,
"acc_norm": 0.33,
"acc_norm_stderr": 0.04725815626252605
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.5777777777777777,
"acc_stderr": 0.04266763404099582,
"acc_norm": 0.5777777777777777,
"acc_norm_stderr": 0.04266763404099582
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.6973684210526315,
"acc_stderr": 0.03738520676119668,
"acc_norm": 0.6973684210526315,
"acc_norm_stderr": 0.03738520676119668
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.65,
"acc_stderr": 0.0479372485441102,
"acc_norm": 0.65,
"acc_norm_stderr": 0.0479372485441102
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.6792452830188679,
"acc_stderr": 0.028727502957880274,
"acc_norm": 0.6792452830188679,
"acc_norm_stderr": 0.028727502957880274
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7361111111111112,
"acc_stderr": 0.03685651095897532,
"acc_norm": 0.7361111111111112,
"acc_norm_stderr": 0.03685651095897532
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.44,
"acc_stderr": 0.04988876515698589,
"acc_norm": 0.44,
"acc_norm_stderr": 0.04988876515698589
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.52,
"acc_stderr": 0.050211673156867795,
"acc_norm": 0.52,
"acc_norm_stderr": 0.050211673156867795
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.38,
"acc_stderr": 0.048783173121456316,
"acc_norm": 0.38,
"acc_norm_stderr": 0.048783173121456316
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6242774566473989,
"acc_stderr": 0.036928207672648664,
"acc_norm": 0.6242774566473989,
"acc_norm_stderr": 0.036928207672648664
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.30392156862745096,
"acc_stderr": 0.04576665403207763,
"acc_norm": 0.30392156862745096,
"acc_norm_stderr": 0.04576665403207763
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.79,
"acc_stderr": 0.040936018074033256,
"acc_norm": 0.79,
"acc_norm_stderr": 0.040936018074033256
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5872340425531914,
"acc_stderr": 0.03218471141400351,
"acc_norm": 0.5872340425531914,
"acc_norm_stderr": 0.03218471141400351
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.4473684210526316,
"acc_stderr": 0.046774730044912,
"acc_norm": 0.4473684210526316,
"acc_norm_stderr": 0.046774730044912
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.6137931034482759,
"acc_stderr": 0.04057324734419035,
"acc_norm": 0.6137931034482759,
"acc_norm_stderr": 0.04057324734419035
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.42592592592592593,
"acc_stderr": 0.025467149045469536,
"acc_norm": 0.42592592592592593,
"acc_norm_stderr": 0.025467149045469536
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.4126984126984127,
"acc_stderr": 0.04403438954768177,
"acc_norm": 0.4126984126984127,
"acc_norm_stderr": 0.04403438954768177
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.4,
"acc_stderr": 0.04923659639173309,
"acc_norm": 0.4,
"acc_norm_stderr": 0.04923659639173309
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7806451612903226,
"acc_stderr": 0.023540799358723295,
"acc_norm": 0.7806451612903226,
"acc_norm_stderr": 0.023540799358723295
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.43349753694581283,
"acc_stderr": 0.03486731727419872,
"acc_norm": 0.43349753694581283,
"acc_norm_stderr": 0.03486731727419872
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.67,
"acc_stderr": 0.047258156262526094,
"acc_norm": 0.67,
"acc_norm_stderr": 0.047258156262526094
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7878787878787878,
"acc_stderr": 0.031922715695482995,
"acc_norm": 0.7878787878787878,
"acc_norm_stderr": 0.031922715695482995
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.8535353535353535,
"acc_stderr": 0.025190921114603915,
"acc_norm": 0.8535353535353535,
"acc_norm_stderr": 0.025190921114603915
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8963730569948186,
"acc_stderr": 0.021995311963644234,
"acc_norm": 0.8963730569948186,
"acc_norm_stderr": 0.021995311963644234
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6666666666666666,
"acc_stderr": 0.023901157979402534,
"acc_norm": 0.6666666666666666,
"acc_norm_stderr": 0.023901157979402534
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.3592592592592593,
"acc_stderr": 0.02925290592725198,
"acc_norm": 0.3592592592592593,
"acc_norm_stderr": 0.02925290592725198
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6890756302521008,
"acc_stderr": 0.030066761582977934,
"acc_norm": 0.6890756302521008,
"acc_norm_stderr": 0.030066761582977934
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.3509933774834437,
"acc_stderr": 0.03896981964257375,
"acc_norm": 0.3509933774834437,
"acc_norm_stderr": 0.03896981964257375
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8440366972477065,
"acc_stderr": 0.015555802713590175,
"acc_norm": 0.8440366972477065,
"acc_norm_stderr": 0.015555802713590175
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5509259259259259,
"acc_stderr": 0.03392238405321617,
"acc_norm": 0.5509259259259259,
"acc_norm_stderr": 0.03392238405321617
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.8578431372549019,
"acc_stderr": 0.0245098039215686,
"acc_norm": 0.8578431372549019,
"acc_norm_stderr": 0.0245098039215686
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.8270042194092827,
"acc_stderr": 0.02462156286676842,
"acc_norm": 0.8270042194092827,
"acc_norm_stderr": 0.02462156286676842
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.7130044843049327,
"acc_stderr": 0.030360379710291936,
"acc_norm": 0.7130044843049327,
"acc_norm_stderr": 0.030360379710291936
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.732824427480916,
"acc_stderr": 0.03880848301082396,
"acc_norm": 0.732824427480916,
"acc_norm_stderr": 0.03880848301082396
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.8099173553719008,
"acc_stderr": 0.03581796951709282,
"acc_norm": 0.8099173553719008,
"acc_norm_stderr": 0.03581796951709282
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7592592592592593,
"acc_stderr": 0.04133119440243839,
"acc_norm": 0.7592592592592593,
"acc_norm_stderr": 0.04133119440243839
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.754601226993865,
"acc_stderr": 0.03380939813943354,
"acc_norm": 0.754601226993865,
"acc_norm_stderr": 0.03380939813943354
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.45535714285714285,
"acc_stderr": 0.04726835553719099,
"acc_norm": 0.45535714285714285,
"acc_norm_stderr": 0.04726835553719099
},
"harness|hendrycksTest-management|5": {
"acc": 0.8349514563106796,
"acc_stderr": 0.036756688322331886,
"acc_norm": 0.8349514563106796,
"acc_norm_stderr": 0.036756688322331886
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8717948717948718,
"acc_stderr": 0.02190190511507333,
"acc_norm": 0.8717948717948718,
"acc_norm_stderr": 0.02190190511507333
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.72,
"acc_stderr": 0.04512608598542128,
"acc_norm": 0.72,
"acc_norm_stderr": 0.04512608598542128
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8148148148148148,
"acc_stderr": 0.01389086216287617,
"acc_norm": 0.8148148148148148,
"acc_norm_stderr": 0.01389086216287617
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7341040462427746,
"acc_stderr": 0.023786203255508287,
"acc_norm": 0.7341040462427746,
"acc_norm_stderr": 0.023786203255508287
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.21452513966480447,
"acc_stderr": 0.013728923407828829,
"acc_norm": 0.21452513966480447,
"acc_norm_stderr": 0.013728923407828829
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7450980392156863,
"acc_stderr": 0.02495418432487991,
"acc_norm": 0.7450980392156863,
"acc_norm_stderr": 0.02495418432487991
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.7170418006430869,
"acc_stderr": 0.025583062489984813,
"acc_norm": 0.7170418006430869,
"acc_norm_stderr": 0.025583062489984813
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7685185185185185,
"acc_stderr": 0.023468429832451145,
"acc_norm": 0.7685185185185185,
"acc_norm_stderr": 0.023468429832451145
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.5106382978723404,
"acc_stderr": 0.02982074719142244,
"acc_norm": 0.5106382978723404,
"acc_norm_stderr": 0.02982074719142244
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.485006518904824,
"acc_stderr": 0.012764493202193253,
"acc_norm": 0.485006518904824,
"acc_norm_stderr": 0.012764493202193253
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.7169117647058824,
"acc_stderr": 0.027365861131513812,
"acc_norm": 0.7169117647058824,
"acc_norm_stderr": 0.027365861131513812
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6666666666666666,
"acc_stderr": 0.019070985589687492,
"acc_norm": 0.6666666666666666,
"acc_norm_stderr": 0.019070985589687492
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6636363636363637,
"acc_stderr": 0.04525393596302506,
"acc_norm": 0.6636363636363637,
"acc_norm_stderr": 0.04525393596302506
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7428571428571429,
"acc_stderr": 0.02797982353874455,
"acc_norm": 0.7428571428571429,
"acc_norm_stderr": 0.02797982353874455
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8407960199004975,
"acc_stderr": 0.025870646766169136,
"acc_norm": 0.8407960199004975,
"acc_norm_stderr": 0.025870646766169136
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.87,
"acc_stderr": 0.033799766898963086,
"acc_norm": 0.87,
"acc_norm_stderr": 0.033799766898963086
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5180722891566265,
"acc_stderr": 0.03889951252827216,
"acc_norm": 0.5180722891566265,
"acc_norm_stderr": 0.03889951252827216
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8187134502923976,
"acc_stderr": 0.02954774168764004,
"acc_norm": 0.8187134502923976,
"acc_norm_stderr": 0.02954774168764004
},
"harness|truthfulqa:mc|0": {
"mc1": 0.33047735618115054,
"mc1_stderr": 0.0164667696136983,
"mc2": 0.46234588692773587,
"mc2_stderr": 0.015244437978733195
},
"harness|winogrande|5": {
"acc": 0.8058405682715075,
"acc_stderr": 0.011116983392392664
},
"harness|gsm8k|5": {
"acc": 0.510235026535254,
"acc_stderr": 0.013769598923012384
}
}
```
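To inspect these aggregated numbers programmatically, one option is to load the "results" configuration mentioned above; the `latest` split name is the one these cards use for the most recent run (a sketch, not an official recipe):
```python
from datasets import load_dataset

# "results" aggregates all benchmark scores for this run; "latest"
# points to the most recent evaluation, per the card's description.
results = load_dataset(
    "open-llm-leaderboard/details_Weyaxi__HelpSteer-filtered-Solar-Instruct",
    "results",
    split="latest",
)
print(results)
```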
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed]
# Dataset Card for Evaluation run of sethuiyer/MedleyMD
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [sethuiyer/MedleyMD](https://huggingface.co/sethuiyer/MedleyMD) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset

# Load the per-sample details of the 5-shot Winogrande eval; any other
# task configuration listed in this card's metadata can be substituted.
data = load_dataset("open-llm-leaderboard/details_sethuiyer__MedleyMD",
	"harness_winogrande_5",
	split="train")
```
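The aggregated metrics live in a separate configuration. Here is a minimal sketch, assuming the `results` configuration and the `latest` split listed in this card's configuration table:

```python
from datasets import load_dataset

# A minimal sketch: load the aggregated "results" configuration.
# The "latest" split points to the most recent run; timestamped splits
# (e.g. "2024_01_15T17_31_54.252170") address a specific run.
results = load_dataset(
    "open-llm-leaderboard/details_sethuiyer__MedleyMD",
    "results",
    split="latest",
)
print(results[0])  # one row holding the aggregated metrics of that run
```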
## Latest results
These are the [latest results from run 2024-01-15T17:31:54.252170](https://huggingface.co/datasets/open-llm-leaderboard/details_sethuiyer__MedleyMD/blob/main/results_2024-01-15T17-31-54.252170.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each one in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.6540922017849786,
"acc_stderr": 0.03190059639989504,
"acc_norm": 0.6547986122363706,
"acc_norm_stderr": 0.032550018643590924,
"mc1": 0.3635250917992656,
"mc1_stderr": 0.016838862883965834,
"mc2": 0.5246301216405119,
"mc2_stderr": 0.015222279804788318
},
"harness|arc:challenge|25": {
"acc": 0.6279863481228669,
"acc_stderr": 0.014124597881844461,
"acc_norm": 0.6646757679180887,
"acc_norm_stderr": 0.013796182947785562
},
"harness|hellaswag|10": {
"acc": 0.6711810396335391,
"acc_stderr": 0.004688239419302074,
"acc_norm": 0.8605855407289384,
"acc_norm_stderr": 0.003456706038054756
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.3,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.3,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6074074074074074,
"acc_stderr": 0.0421850621536888,
"acc_norm": 0.6074074074074074,
"acc_norm_stderr": 0.0421850621536888
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.6578947368421053,
"acc_stderr": 0.03860731599316092,
"acc_norm": 0.6578947368421053,
"acc_norm_stderr": 0.03860731599316092
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.64,
"acc_stderr": 0.04824181513244218,
"acc_norm": 0.64,
"acc_norm_stderr": 0.04824181513244218
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.720754716981132,
"acc_stderr": 0.027611163402399715,
"acc_norm": 0.720754716981132,
"acc_norm_stderr": 0.027611163402399715
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7291666666666666,
"acc_stderr": 0.037161774375660185,
"acc_norm": 0.7291666666666666,
"acc_norm_stderr": 0.037161774375660185
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.5,
"acc_stderr": 0.050251890762960605,
"acc_norm": 0.5,
"acc_norm_stderr": 0.050251890762960605
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.53,
"acc_stderr": 0.05016135580465919,
"acc_norm": 0.53,
"acc_norm_stderr": 0.05016135580465919
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.31,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.31,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6589595375722543,
"acc_stderr": 0.036146654241808254,
"acc_norm": 0.6589595375722543,
"acc_norm_stderr": 0.036146654241808254
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.38235294117647056,
"acc_stderr": 0.04835503696107223,
"acc_norm": 0.38235294117647056,
"acc_norm_stderr": 0.04835503696107223
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.76,
"acc_stderr": 0.04292346959909284,
"acc_norm": 0.76,
"acc_norm_stderr": 0.04292346959909284
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.6042553191489362,
"acc_stderr": 0.031967586978353627,
"acc_norm": 0.6042553191489362,
"acc_norm_stderr": 0.031967586978353627
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.5350877192982456,
"acc_stderr": 0.046920083813689104,
"acc_norm": 0.5350877192982456,
"acc_norm_stderr": 0.046920083813689104
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5586206896551724,
"acc_stderr": 0.04137931034482757,
"acc_norm": 0.5586206896551724,
"acc_norm_stderr": 0.04137931034482757
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.41534391534391535,
"acc_stderr": 0.0253795249107784,
"acc_norm": 0.41534391534391535,
"acc_norm_stderr": 0.0253795249107784
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.46825396825396826,
"acc_stderr": 0.04463112720677172,
"acc_norm": 0.46825396825396826,
"acc_norm_stderr": 0.04463112720677172
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.38,
"acc_stderr": 0.04878317312145633,
"acc_norm": 0.38,
"acc_norm_stderr": 0.04878317312145633
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.8032258064516129,
"acc_stderr": 0.022616409420742025,
"acc_norm": 0.8032258064516129,
"acc_norm_stderr": 0.022616409420742025
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.5073891625615764,
"acc_stderr": 0.035176035403610105,
"acc_norm": 0.5073891625615764,
"acc_norm_stderr": 0.035176035403610105
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.7,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.7,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7696969696969697,
"acc_stderr": 0.032876667586034906,
"acc_norm": 0.7696969696969697,
"acc_norm_stderr": 0.032876667586034906
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7777777777777778,
"acc_stderr": 0.02962022787479049,
"acc_norm": 0.7777777777777778,
"acc_norm_stderr": 0.02962022787479049
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8963730569948186,
"acc_stderr": 0.02199531196364424,
"acc_norm": 0.8963730569948186,
"acc_norm_stderr": 0.02199531196364424
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6743589743589744,
"acc_stderr": 0.02375966576741229,
"acc_norm": 0.6743589743589744,
"acc_norm_stderr": 0.02375966576741229
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.37777777777777777,
"acc_stderr": 0.029560707392465715,
"acc_norm": 0.37777777777777777,
"acc_norm_stderr": 0.029560707392465715
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6722689075630253,
"acc_stderr": 0.03048991141767323,
"acc_norm": 0.6722689075630253,
"acc_norm_stderr": 0.03048991141767323
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.2980132450331126,
"acc_stderr": 0.037345356767871984,
"acc_norm": 0.2980132450331126,
"acc_norm_stderr": 0.037345356767871984
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8513761467889909,
"acc_stderr": 0.015251253773660834,
"acc_norm": 0.8513761467889909,
"acc_norm_stderr": 0.015251253773660834
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5138888888888888,
"acc_stderr": 0.03408655867977749,
"acc_norm": 0.5138888888888888,
"acc_norm_stderr": 0.03408655867977749
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.8333333333333334,
"acc_stderr": 0.026156867523931045,
"acc_norm": 0.8333333333333334,
"acc_norm_stderr": 0.026156867523931045
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.8227848101265823,
"acc_stderr": 0.024856364184503217,
"acc_norm": 0.8227848101265823,
"acc_norm_stderr": 0.024856364184503217
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6995515695067265,
"acc_stderr": 0.030769352008229136,
"acc_norm": 0.6995515695067265,
"acc_norm_stderr": 0.030769352008229136
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.8015267175572519,
"acc_stderr": 0.034981493854624734,
"acc_norm": 0.8015267175572519,
"acc_norm_stderr": 0.034981493854624734
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.8264462809917356,
"acc_stderr": 0.03457272836917671,
"acc_norm": 0.8264462809917356,
"acc_norm_stderr": 0.03457272836917671
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7685185185185185,
"acc_stderr": 0.04077494709252627,
"acc_norm": 0.7685185185185185,
"acc_norm_stderr": 0.04077494709252627
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7914110429447853,
"acc_stderr": 0.031921934489347235,
"acc_norm": 0.7914110429447853,
"acc_norm_stderr": 0.031921934489347235
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.5178571428571429,
"acc_stderr": 0.047427623612430116,
"acc_norm": 0.5178571428571429,
"acc_norm_stderr": 0.047427623612430116
},
"harness|hendrycksTest-management|5": {
"acc": 0.8252427184466019,
"acc_stderr": 0.03760178006026621,
"acc_norm": 0.8252427184466019,
"acc_norm_stderr": 0.03760178006026621
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8888888888888888,
"acc_stderr": 0.020588491316092375,
"acc_norm": 0.8888888888888888,
"acc_norm_stderr": 0.020588491316092375
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.71,
"acc_stderr": 0.045604802157206845,
"acc_norm": 0.71,
"acc_norm_stderr": 0.045604802157206845
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8237547892720306,
"acc_stderr": 0.013625556907993457,
"acc_norm": 0.8237547892720306,
"acc_norm_stderr": 0.013625556907993457
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7514450867052023,
"acc_stderr": 0.023267528432100174,
"acc_norm": 0.7514450867052023,
"acc_norm_stderr": 0.023267528432100174
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.40893854748603353,
"acc_stderr": 0.016442830654715544,
"acc_norm": 0.40893854748603353,
"acc_norm_stderr": 0.016442830654715544
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7450980392156863,
"acc_stderr": 0.02495418432487991,
"acc_norm": 0.7450980392156863,
"acc_norm_stderr": 0.02495418432487991
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.707395498392283,
"acc_stderr": 0.02583989833487798,
"acc_norm": 0.707395498392283,
"acc_norm_stderr": 0.02583989833487798
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7438271604938271,
"acc_stderr": 0.024288533637726095,
"acc_norm": 0.7438271604938271,
"acc_norm_stderr": 0.024288533637726095
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.4929078014184397,
"acc_stderr": 0.02982449855912901,
"acc_norm": 0.4929078014184397,
"acc_norm_stderr": 0.02982449855912901
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.4771838331160365,
"acc_stderr": 0.0127569333828237,
"acc_norm": 0.4771838331160365,
"acc_norm_stderr": 0.0127569333828237
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6838235294117647,
"acc_stderr": 0.028245687391462927,
"acc_norm": 0.6838235294117647,
"acc_norm_stderr": 0.028245687391462927
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6748366013071896,
"acc_stderr": 0.018950886770806315,
"acc_norm": 0.6748366013071896,
"acc_norm_stderr": 0.018950886770806315
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6818181818181818,
"acc_stderr": 0.04461272175910509,
"acc_norm": 0.6818181818181818,
"acc_norm_stderr": 0.04461272175910509
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.726530612244898,
"acc_stderr": 0.028535560337128438,
"acc_norm": 0.726530612244898,
"acc_norm_stderr": 0.028535560337128438
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8507462686567164,
"acc_stderr": 0.025196929874827072,
"acc_norm": 0.8507462686567164,
"acc_norm_stderr": 0.025196929874827072
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.87,
"acc_stderr": 0.03379976689896309,
"acc_norm": 0.87,
"acc_norm_stderr": 0.03379976689896309
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5240963855421686,
"acc_stderr": 0.03887971849597264,
"acc_norm": 0.5240963855421686,
"acc_norm_stderr": 0.03887971849597264
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8596491228070176,
"acc_stderr": 0.026640582539133196,
"acc_norm": 0.8596491228070176,
"acc_norm_stderr": 0.026640582539133196
},
"harness|truthfulqa:mc|0": {
"mc1": 0.3635250917992656,
"mc1_stderr": 0.016838862883965834,
"mc2": 0.5246301216405119,
"mc2_stderr": 0.015222279804788318
},
"harness|winogrande|5": {
"acc": 0.8026835043409629,
"acc_stderr": 0.011185026389050372
},
"harness|gsm8k|5": {
"acc": 0.6899166034874905,
"acc_stderr": 0.01274030571737627
}
}
```
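To work with these numbers programmatically, here is a minimal sketch that averages the 5-shot accuracy over the MMLU (`hendrycksTest`) subtasks, assuming the dictionary above has been saved locally as `results.json` (a hypothetical filename):

```python
import json

# A minimal sketch: average the 5-shot accuracy across the MMLU
# ("hendrycksTest") subtasks from the results dictionary shown above,
# assuming it was saved locally as results.json (hypothetical filename).
with open("results.json") as f:
    results = json.load(f)

mmlu = {k: v for k, v in results.items() if k.startswith("harness|hendrycksTest")}
mean_acc = sum(task["acc"] for task in mmlu.values()) / len(mmlu)
print(f"{len(mmlu)} MMLU subtasks, mean acc = {mean_acc:.4f}")
```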
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases, and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | open-llm-leaderboard/details_sethuiyer__MedleyMD | [
"region:us"
] | 2024-01-15T17:34:10+00:00 | {"pretty_name": "Evaluation run of sethuiyer/MedleyMD", "dataset_summary": "Dataset automatically created during the evaluation run of model [sethuiyer/MedleyMD](https://huggingface.co/sethuiyer/MedleyMD) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_sethuiyer__MedleyMD\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2024-01-15T17:31:54.252170](https://huggingface.co/datasets/open-llm-leaderboard/details_sethuiyer__MedleyMD/blob/main/results_2024-01-15T17-31-54.252170.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6540922017849786,\n \"acc_stderr\": 0.03190059639989504,\n \"acc_norm\": 0.6547986122363706,\n \"acc_norm_stderr\": 0.032550018643590924,\n \"mc1\": 0.3635250917992656,\n \"mc1_stderr\": 0.016838862883965834,\n \"mc2\": 0.5246301216405119,\n \"mc2_stderr\": 0.015222279804788318\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.6279863481228669,\n \"acc_stderr\": 0.014124597881844461,\n \"acc_norm\": 0.6646757679180887,\n \"acc_norm_stderr\": 0.013796182947785562\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6711810396335391,\n \"acc_stderr\": 0.004688239419302074,\n \"acc_norm\": 0.8605855407289384,\n \"acc_norm_stderr\": 0.003456706038054756\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.3,\n \"acc_stderr\": 0.046056618647183814,\n \"acc_norm\": 0.3,\n \"acc_norm_stderr\": 0.046056618647183814\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6074074074074074,\n \"acc_stderr\": 0.0421850621536888,\n \"acc_norm\": 0.6074074074074074,\n \"acc_norm_stderr\": 0.0421850621536888\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.6578947368421053,\n \"acc_stderr\": 0.03860731599316092,\n \"acc_norm\": 0.6578947368421053,\n \"acc_norm_stderr\": 0.03860731599316092\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.64,\n \"acc_stderr\": 0.04824181513244218,\n \"acc_norm\": 0.64,\n \"acc_norm_stderr\": 0.04824181513244218\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.720754716981132,\n \"acc_stderr\": 0.027611163402399715,\n \"acc_norm\": 0.720754716981132,\n \"acc_norm_stderr\": 0.027611163402399715\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7291666666666666,\n \"acc_stderr\": 0.037161774375660185,\n \"acc_norm\": 0.7291666666666666,\n \"acc_norm_stderr\": 0.037161774375660185\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.5,\n \"acc_stderr\": 0.050251890762960605,\n \"acc_norm\": 0.5,\n 
\"acc_norm_stderr\": 0.050251890762960605\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.53,\n \"acc_stderr\": 0.05016135580465919,\n \"acc_norm\": 0.53,\n \"acc_norm_stderr\": 0.05016135580465919\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.31,\n \"acc_stderr\": 0.04648231987117316,\n \"acc_norm\": 0.31,\n \"acc_norm_stderr\": 0.04648231987117316\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6589595375722543,\n \"acc_stderr\": 0.036146654241808254,\n \"acc_norm\": 0.6589595375722543,\n \"acc_norm_stderr\": 0.036146654241808254\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.38235294117647056,\n \"acc_stderr\": 0.04835503696107223,\n \"acc_norm\": 0.38235294117647056,\n \"acc_norm_stderr\": 0.04835503696107223\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.76,\n \"acc_stderr\": 0.04292346959909284,\n \"acc_norm\": 0.76,\n \"acc_norm_stderr\": 0.04292346959909284\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.6042553191489362,\n \"acc_stderr\": 0.031967586978353627,\n \"acc_norm\": 0.6042553191489362,\n \"acc_norm_stderr\": 0.031967586978353627\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.5350877192982456,\n \"acc_stderr\": 0.046920083813689104,\n \"acc_norm\": 0.5350877192982456,\n \"acc_norm_stderr\": 0.046920083813689104\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.5586206896551724,\n \"acc_stderr\": 0.04137931034482757,\n \"acc_norm\": 0.5586206896551724,\n \"acc_norm_stderr\": 0.04137931034482757\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.41534391534391535,\n \"acc_stderr\": 0.0253795249107784,\n \"acc_norm\": 0.41534391534391535,\n \"acc_norm_stderr\": 0.0253795249107784\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.46825396825396826,\n \"acc_stderr\": 0.04463112720677172,\n \"acc_norm\": 0.46825396825396826,\n \"acc_norm_stderr\": 0.04463112720677172\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.38,\n \"acc_stderr\": 0.04878317312145633,\n \"acc_norm\": 0.38,\n \"acc_norm_stderr\": 0.04878317312145633\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.8032258064516129,\n \"acc_stderr\": 0.022616409420742025,\n \"acc_norm\": 0.8032258064516129,\n \"acc_norm_stderr\": 0.022616409420742025\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.5073891625615764,\n \"acc_stderr\": 0.035176035403610105,\n \"acc_norm\": 0.5073891625615764,\n \"acc_norm_stderr\": 0.035176035403610105\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.7,\n \"acc_stderr\": 0.046056618647183814,\n \"acc_norm\": 0.7,\n \"acc_norm_stderr\": 0.046056618647183814\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.7696969696969697,\n \"acc_stderr\": 0.032876667586034906,\n \"acc_norm\": 0.7696969696969697,\n \"acc_norm_stderr\": 0.032876667586034906\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.7777777777777778,\n \"acc_stderr\": 0.02962022787479049,\n \"acc_norm\": 0.7777777777777778,\n \"acc_norm_stderr\": 0.02962022787479049\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.8963730569948186,\n \"acc_stderr\": 0.02199531196364424,\n \"acc_norm\": 0.8963730569948186,\n \"acc_norm_stderr\": 0.02199531196364424\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.6743589743589744,\n 
\"acc_stderr\": 0.02375966576741229,\n \"acc_norm\": 0.6743589743589744,\n \"acc_norm_stderr\": 0.02375966576741229\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.37777777777777777,\n \"acc_stderr\": 0.029560707392465715,\n \"acc_norm\": 0.37777777777777777,\n \"acc_norm_stderr\": 0.029560707392465715\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.6722689075630253,\n \"acc_stderr\": 0.03048991141767323,\n \"acc_norm\": 0.6722689075630253,\n \"acc_norm_stderr\": 0.03048991141767323\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.2980132450331126,\n \"acc_stderr\": 0.037345356767871984,\n \"acc_norm\": 0.2980132450331126,\n \"acc_norm_stderr\": 0.037345356767871984\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.8513761467889909,\n \"acc_stderr\": 0.015251253773660834,\n \"acc_norm\": 0.8513761467889909,\n \"acc_norm_stderr\": 0.015251253773660834\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.5138888888888888,\n \"acc_stderr\": 0.03408655867977749,\n \"acc_norm\": 0.5138888888888888,\n \"acc_norm_stderr\": 0.03408655867977749\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.8333333333333334,\n \"acc_stderr\": 0.026156867523931045,\n \"acc_norm\": 0.8333333333333334,\n \"acc_norm_stderr\": 0.026156867523931045\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.8227848101265823,\n \"acc_stderr\": 0.024856364184503217,\n \"acc_norm\": 0.8227848101265823,\n \"acc_norm_stderr\": 0.024856364184503217\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6995515695067265,\n \"acc_stderr\": 0.030769352008229136,\n \"acc_norm\": 0.6995515695067265,\n \"acc_norm_stderr\": 0.030769352008229136\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.8015267175572519,\n \"acc_stderr\": 0.034981493854624734,\n \"acc_norm\": 0.8015267175572519,\n \"acc_norm_stderr\": 0.034981493854624734\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.8264462809917356,\n \"acc_stderr\": 0.03457272836917671,\n \"acc_norm\": 0.8264462809917356,\n \"acc_norm_stderr\": 0.03457272836917671\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7685185185185185,\n \"acc_stderr\": 0.04077494709252627,\n \"acc_norm\": 0.7685185185185185,\n \"acc_norm_stderr\": 0.04077494709252627\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.7914110429447853,\n \"acc_stderr\": 0.031921934489347235,\n \"acc_norm\": 0.7914110429447853,\n \"acc_norm_stderr\": 0.031921934489347235\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.5178571428571429,\n \"acc_stderr\": 0.047427623612430116,\n \"acc_norm\": 0.5178571428571429,\n \"acc_norm_stderr\": 0.047427623612430116\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.8252427184466019,\n \"acc_stderr\": 0.03760178006026621,\n \"acc_norm\": 0.8252427184466019,\n \"acc_norm_stderr\": 0.03760178006026621\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8888888888888888,\n \"acc_stderr\": 0.020588491316092375,\n \"acc_norm\": 0.8888888888888888,\n \"acc_norm_stderr\": 0.020588491316092375\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.71,\n \"acc_stderr\": 0.045604802157206845,\n \"acc_norm\": 0.71,\n \"acc_norm_stderr\": 0.045604802157206845\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8237547892720306,\n \"acc_stderr\": 0.013625556907993457,\n \"acc_norm\": 
0.8237547892720306,\n \"acc_norm_stderr\": 0.013625556907993457\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.7514450867052023,\n \"acc_stderr\": 0.023267528432100174,\n \"acc_norm\": 0.7514450867052023,\n \"acc_norm_stderr\": 0.023267528432100174\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.40893854748603353,\n \"acc_stderr\": 0.016442830654715544,\n \"acc_norm\": 0.40893854748603353,\n \"acc_norm_stderr\": 0.016442830654715544\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.7450980392156863,\n \"acc_stderr\": 0.02495418432487991,\n \"acc_norm\": 0.7450980392156863,\n \"acc_norm_stderr\": 0.02495418432487991\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.707395498392283,\n \"acc_stderr\": 0.02583989833487798,\n \"acc_norm\": 0.707395498392283,\n \"acc_norm_stderr\": 0.02583989833487798\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.7438271604938271,\n \"acc_stderr\": 0.024288533637726095,\n \"acc_norm\": 0.7438271604938271,\n \"acc_norm_stderr\": 0.024288533637726095\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.4929078014184397,\n \"acc_stderr\": 0.02982449855912901,\n \"acc_norm\": 0.4929078014184397,\n \"acc_norm_stderr\": 0.02982449855912901\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4771838331160365,\n \"acc_stderr\": 0.0127569333828237,\n \"acc_norm\": 0.4771838331160365,\n \"acc_norm_stderr\": 0.0127569333828237\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.6838235294117647,\n \"acc_stderr\": 0.028245687391462927,\n \"acc_norm\": 0.6838235294117647,\n \"acc_norm_stderr\": 0.028245687391462927\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.6748366013071896,\n \"acc_stderr\": 0.018950886770806315,\n \"acc_norm\": 0.6748366013071896,\n \"acc_norm_stderr\": 0.018950886770806315\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6818181818181818,\n \"acc_stderr\": 0.04461272175910509,\n \"acc_norm\": 0.6818181818181818,\n \"acc_norm_stderr\": 0.04461272175910509\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.726530612244898,\n \"acc_stderr\": 0.028535560337128438,\n \"acc_norm\": 0.726530612244898,\n \"acc_norm_stderr\": 0.028535560337128438\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8507462686567164,\n \"acc_stderr\": 0.025196929874827072,\n \"acc_norm\": 0.8507462686567164,\n \"acc_norm_stderr\": 0.025196929874827072\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.87,\n \"acc_stderr\": 0.03379976689896309,\n \"acc_norm\": 0.87,\n \"acc_norm_stderr\": 0.03379976689896309\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5240963855421686,\n \"acc_stderr\": 0.03887971849597264,\n \"acc_norm\": 0.5240963855421686,\n \"acc_norm_stderr\": 0.03887971849597264\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.8596491228070176,\n \"acc_stderr\": 0.026640582539133196,\n \"acc_norm\": 0.8596491228070176,\n \"acc_norm_stderr\": 0.026640582539133196\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.3635250917992656,\n \"mc1_stderr\": 0.016838862883965834,\n \"mc2\": 0.5246301216405119,\n \"mc2_stderr\": 0.015222279804788318\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.8026835043409629,\n \"acc_stderr\": 0.011185026389050372\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.6899166034874905,\n \"acc_stderr\": 0.01274030571737627\n }\n}\n```", "repo_url": "https://huggingface.co/sethuiyer/MedleyMD", 
"leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2024_01_15T17_31_54.252170", "path": ["**/details_harness|arc:challenge|25_2024-01-15T17-31-54.252170.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2024-01-15T17-31-54.252170.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2024_01_15T17_31_54.252170", "path": ["**/details_harness|gsm8k|5_2024-01-15T17-31-54.252170.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2024-01-15T17-31-54.252170.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2024_01_15T17_31_54.252170", "path": ["**/details_harness|hellaswag|10_2024-01-15T17-31-54.252170.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2024-01-15T17-31-54.252170.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2024_01_15T17_31_54.252170", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-15T17-31-54.252170.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-15T17-31-54.252170.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-15T17-31-54.252170.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-15T17-31-54.252170.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-15T17-31-54.252170.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-15T17-31-54.252170.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-15T17-31-54.252170.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-15T17-31-54.252170.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-01-15T17-31-54.252170.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-15T17-31-54.252170.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-15T17-31-54.252170.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-15T17-31-54.252170.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-15T17-31-54.252170.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-15T17-31-54.252170.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-15T17-31-54.252170.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-15T17-31-54.252170.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-15T17-31-54.252170.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-15T17-31-54.252170.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-15T17-31-54.252170.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-15T17-31-54.252170.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-15T17-31-54.252170.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-15T17-31-54.252170.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-15T17-31-54.252170.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-15T17-31-54.252170.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-15T17-31-54.252170.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-15T17-31-54.252170.parquet", 
"**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-15T17-31-54.252170.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-15T17-31-54.252170.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-15T17-31-54.252170.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-15T17-31-54.252170.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-15T17-31-54.252170.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-15T17-31-54.252170.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-15T17-31-54.252170.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-15T17-31-54.252170.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-15T17-31-54.252170.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-15T17-31-54.252170.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-15T17-31-54.252170.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-15T17-31-54.252170.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-15T17-31-54.252170.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-15T17-31-54.252170.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-15T17-31-54.252170.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-15T17-31-54.252170.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-15T17-31-54.252170.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-15T17-31-54.252170.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-15T17-31-54.252170.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-15T17-31-54.252170.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-15T17-31-54.252170.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-15T17-31-54.252170.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-15T17-31-54.252170.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-15T17-31-54.252170.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-15T17-31-54.252170.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-15T17-31-54.252170.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-15T17-31-54.252170.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-15T17-31-54.252170.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-15T17-31-54.252170.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-15T17-31-54.252170.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-15T17-31-54.252170.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-15T17-31-54.252170.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-15T17-31-54.252170.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-15T17-31-54.252170.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-15T17-31-54.252170.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-15T17-31-54.252170.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-15T17-31-54.252170.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-15T17-31-54.252170.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-15T17-31-54.252170.parquet", 
"**/details_harness|hendrycksTest-college_mathematics|5_2024-01-15T17-31-54.252170.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-15T17-31-54.252170.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-15T17-31-54.252170.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-15T17-31-54.252170.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-15T17-31-54.252170.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-15T17-31-54.252170.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-15T17-31-54.252170.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-15T17-31-54.252170.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-15T17-31-54.252170.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-15T17-31-54.252170.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-15T17-31-54.252170.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-15T17-31-54.252170.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-15T17-31-54.252170.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-15T17-31-54.252170.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-15T17-31-54.252170.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-15T17-31-54.252170.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-15T17-31-54.252170.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-15T17-31-54.252170.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-15T17-31-54.252170.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-15T17-31-54.252170.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-15T17-31-54.252170.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-15T17-31-54.252170.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-15T17-31-54.252170.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-15T17-31-54.252170.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-15T17-31-54.252170.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-15T17-31-54.252170.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-15T17-31-54.252170.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-15T17-31-54.252170.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-15T17-31-54.252170.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-15T17-31-54.252170.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-15T17-31-54.252170.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-15T17-31-54.252170.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-15T17-31-54.252170.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-15T17-31-54.252170.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-15T17-31-54.252170.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-15T17-31-54.252170.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-15T17-31-54.252170.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-15T17-31-54.252170.parquet", 
"**/details_harness|hendrycksTest-prehistory|5_2024-01-15T17-31-54.252170.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-15T17-31-54.252170.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-15T17-31-54.252170.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-15T17-31-54.252170.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-15T17-31-54.252170.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-15T17-31-54.252170.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-15T17-31-54.252170.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-15T17-31-54.252170.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-15T17-31-54.252170.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-15T17-31-54.252170.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-15T17-31-54.252170.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2024_01_15T17_31_54.252170", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-15T17-31-54.252170.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-15T17-31-54.252170.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2024_01_15T17_31_54.252170", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-15T17-31-54.252170.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-15T17-31-54.252170.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2024_01_15T17_31_54.252170", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-15T17-31-54.252170.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-15T17-31-54.252170.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2024_01_15T17_31_54.252170", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-15T17-31-54.252170.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-15T17-31-54.252170.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2024_01_15T17_31_54.252170", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-15T17-31-54.252170.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-15T17-31-54.252170.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2024_01_15T17_31_54.252170", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-15T17-31-54.252170.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-15T17-31-54.252170.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2024_01_15T17_31_54.252170", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-15T17-31-54.252170.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-15T17-31-54.252170.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2024_01_15T17_31_54.252170", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-15T17-31-54.252170.parquet"]}, 
{"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-15T17-31-54.252170.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2024_01_15T17_31_54.252170", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-15T17-31-54.252170.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-15T17-31-54.252170.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2024_01_15T17_31_54.252170", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-15T17-31-54.252170.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-15T17-31-54.252170.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2024_01_15T17_31_54.252170", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-15T17-31-54.252170.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-15T17-31-54.252170.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2024_01_15T17_31_54.252170", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-15T17-31-54.252170.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-15T17-31-54.252170.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2024_01_15T17_31_54.252170", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-15T17-31-54.252170.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-15T17-31-54.252170.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2024_01_15T17_31_54.252170", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-15T17-31-54.252170.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-15T17-31-54.252170.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2024_01_15T17_31_54.252170", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-15T17-31-54.252170.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-15T17-31-54.252170.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2024_01_15T17_31_54.252170", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-15T17-31-54.252170.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-15T17-31-54.252170.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2024_01_15T17_31_54.252170", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-15T17-31-54.252170.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-15T17-31-54.252170.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2024_01_15T17_31_54.252170", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-15T17-31-54.252170.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-15T17-31-54.252170.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2024_01_15T17_31_54.252170", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-15T17-31-54.252170.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-15T17-31-54.252170.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2024_01_15T17_31_54.252170", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-15T17-31-54.252170.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-15T17-31-54.252170.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2024_01_15T17_31_54.252170", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-15T17-31-54.252170.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-15T17-31-54.252170.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2024_01_15T17_31_54.252170", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-15T17-31-54.252170.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-15T17-31-54.252170.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2024_01_15T17_31_54.252170", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-15T17-31-54.252170.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-15T17-31-54.252170.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2024_01_15T17_31_54.252170", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-15T17-31-54.252170.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-15T17-31-54.252170.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2024_01_15T17_31_54.252170", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-15T17-31-54.252170.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-15T17-31-54.252170.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2024_01_15T17_31_54.252170", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-15T17-31-54.252170.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-15T17-31-54.252170.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2024_01_15T17_31_54.252170", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-15T17-31-54.252170.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-15T17-31-54.252170.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2024_01_15T17_31_54.252170", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-15T17-31-54.252170.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-15T17-31-54.252170.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2024_01_15T17_31_54.252170", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-15T17-31-54.252170.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-15T17-31-54.252170.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2024_01_15T17_31_54.252170", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-15T17-31-54.252170.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-15T17-31-54.252170.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2024_01_15T17_31_54.252170", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-15T17-31-54.252170.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-15T17-31-54.252170.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2024_01_15T17_31_54.252170", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-15T17-31-54.252170.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-15T17-31-54.252170.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2024_01_15T17_31_54.252170", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-15T17-31-54.252170.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-15T17-31-54.252170.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2024_01_15T17_31_54.252170", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-15T17-31-54.252170.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-15T17-31-54.252170.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2024_01_15T17_31_54.252170", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-15T17-31-54.252170.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-15T17-31-54.252170.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2024_01_15T17_31_54.252170", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-15T17-31-54.252170.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-15T17-31-54.252170.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2024_01_15T17_31_54.252170", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-15T17-31-54.252170.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-15T17-31-54.252170.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2024_01_15T17_31_54.252170", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-15T17-31-54.252170.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-15T17-31-54.252170.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2024_01_15T17_31_54.252170", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-15T17-31-54.252170.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-15T17-31-54.252170.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2024_01_15T17_31_54.252170", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-15T17-31-54.252170.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-15T17-31-54.252170.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2024_01_15T17_31_54.252170", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-15T17-31-54.252170.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-15T17-31-54.252170.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2024_01_15T17_31_54.252170", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-15T17-31-54.252170.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-15T17-31-54.252170.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2024_01_15T17_31_54.252170", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-15T17-31-54.252170.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-15T17-31-54.252170.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2024_01_15T17_31_54.252170", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-15T17-31-54.252170.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-15T17-31-54.252170.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2024_01_15T17_31_54.252170", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-15T17-31-54.252170.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-15T17-31-54.252170.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2024_01_15T17_31_54.252170", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-15T17-31-54.252170.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-15T17-31-54.252170.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2024_01_15T17_31_54.252170", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-15T17-31-54.252170.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-15T17-31-54.252170.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2024_01_15T17_31_54.252170", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-15T17-31-54.252170.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-15T17-31-54.252170.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2024_01_15T17_31_54.252170", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-01-15T17-31-54.252170.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-professional_law|5_2024-01-15T17-31-54.252170.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2024_01_15T17_31_54.252170", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-15T17-31-54.252170.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-15T17-31-54.252170.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2024_01_15T17_31_54.252170", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-15T17-31-54.252170.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-15T17-31-54.252170.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2024_01_15T17_31_54.252170", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-15T17-31-54.252170.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-15T17-31-54.252170.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2024_01_15T17_31_54.252170", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-15T17-31-54.252170.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-15T17-31-54.252170.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2024_01_15T17_31_54.252170", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-15T17-31-54.252170.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-15T17-31-54.252170.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2024_01_15T17_31_54.252170", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-15T17-31-54.252170.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-15T17-31-54.252170.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2024_01_15T17_31_54.252170", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-15T17-31-54.252170.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-15T17-31-54.252170.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2024_01_15T17_31_54.252170", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-15T17-31-54.252170.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-15T17-31-54.252170.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2024_01_15T17_31_54.252170", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-15T17-31-54.252170.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-15T17-31-54.252170.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2024_01_15T17_31_54.252170", "path": ["**/details_harness|winogrande|5_2024-01-15T17-31-54.252170.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2024-01-15T17-31-54.252170.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2024_01_15T17_31_54.252170", "path": ["results_2024-01-15T17-31-54.252170.parquet"]}, {"split": "latest", "path": 
["results_2024-01-15T17-31-54.252170.parquet"]}]}]} | 2024-01-15T17:34:35+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of sethuiyer/MedleyMD
Dataset automatically created during the evaluation run of model sethuiyer/MedleyMD on the Open LLM Leaderboard.
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
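The code snippet that belonged here was stripped from this rendering of the card; below is a minimal sketch of what it looks like, assuming the leaderboard's usual `details_<org>__<model>` repository naming (the repo id is inferred from that convention, not quoted from the original):

```python
from datasets import load_dataset

# Repo id inferred from the leaderboard's naming convention for this model.
data = load_dataset("open-llm-leaderboard/details_sethuiyer__MedleyMD",
	"harness_winogrande_5",
	split="train")
```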
## Latest results
These are the latest results from run 2024-01-15T17:31:54.252170 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each of them in the results and the "latest" split for each eval):
## Dataset Details
### Dataset Description
- Curated by:
- Funded by [optional]:
- Shared by [optional]:
- Language(s) (NLP):
- License:
### Dataset Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Out-of-Scope Use
## Dataset Structure
## Dataset Creation
### Curation Rationale
### Source Data
#### Data Collection and Processing
#### Who are the source data producers?
### Annotations [optional]
#### Annotation process
#### Who are the annotators?
#### Personal and Sensitive Information
## Bias, Risks, and Limitations
### Recommendations
Users should be made aware of the risks, biases, and limitations of the dataset. More information is needed for further recommendations.
## Citation [optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Dataset Card Authors [optional]
## Dataset Card Contact
e2ae5382069e8e3684db4958e303cf88ffcf2d2e |
# Dataset Card for Evaluation run of mlabonne/Beagle14-7B
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [mlabonne/Beagle14-7B](https://huggingface.co/mlabonne/Beagle14-7B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_mlabonne__Beagle14-7B",
"harness_winogrande_5",
split="train")
```
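The aggregated metrics mentioned above live in the "results" config; here is a minimal sketch of loading them, using the "results" config name and the "latest" split that are both listed in this card's metadata:

```python
from datasets import load_dataset

# "latest" always points at the most recent evaluation run;
# timestamped splits (e.g. "2024_01_15T17_34_40.781222") select a specific run.
results = load_dataset("open-llm-leaderboard/details_mlabonne__Beagle14-7B",
	"results",
	split="latest")
```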
## Latest results
These are the [latest results from run 2024-01-15T17:34:40.781222](https://huggingface.co/datasets/open-llm-leaderboard/details_mlabonne__Beagle14-7B/blob/main/results_2024-01-15T17-34-40.781222.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each of them in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.6528805890883588,
"acc_stderr": 0.032078173732397165,
"acc_norm": 0.6523758548450951,
"acc_norm_stderr": 0.032744687519676414,
"mc1": 0.5556915544675642,
"mc1_stderr": 0.01739458625074318,
"mc2": 0.6887873400281794,
"mc2_stderr": 0.015076928943972381
},
"harness|arc:challenge|25": {
"acc": 0.697098976109215,
"acc_stderr": 0.013428241573185349,
"acc_norm": 0.7295221843003413,
"acc_norm_stderr": 0.012980954547659556
},
"harness|hellaswag|10": {
"acc": 0.7069308902609042,
"acc_stderr": 0.004542396269999213,
"acc_norm": 0.8795060744871539,
"acc_norm_stderr": 0.003248729221152882
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.35,
"acc_stderr": 0.0479372485441102,
"acc_norm": 0.35,
"acc_norm_stderr": 0.0479372485441102
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6370370370370371,
"acc_stderr": 0.041539484047423976,
"acc_norm": 0.6370370370370371,
"acc_norm_stderr": 0.041539484047423976
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.6907894736842105,
"acc_stderr": 0.037610708698674805,
"acc_norm": 0.6907894736842105,
"acc_norm_stderr": 0.037610708698674805
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.61,
"acc_stderr": 0.04902071300001975,
"acc_norm": 0.61,
"acc_norm_stderr": 0.04902071300001975
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.7056603773584905,
"acc_stderr": 0.02804918631569525,
"acc_norm": 0.7056603773584905,
"acc_norm_stderr": 0.02804918631569525
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7569444444444444,
"acc_stderr": 0.0358687928008034,
"acc_norm": 0.7569444444444444,
"acc_norm_stderr": 0.0358687928008034
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.47,
"acc_stderr": 0.050161355804659205,
"acc_norm": 0.47,
"acc_norm_stderr": 0.050161355804659205
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.55,
"acc_stderr": 0.05,
"acc_norm": 0.55,
"acc_norm_stderr": 0.05
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.31,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.31,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6994219653179191,
"acc_stderr": 0.0349610148119118,
"acc_norm": 0.6994219653179191,
"acc_norm_stderr": 0.0349610148119118
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.4215686274509804,
"acc_stderr": 0.04913595201274498,
"acc_norm": 0.4215686274509804,
"acc_norm_stderr": 0.04913595201274498
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.76,
"acc_stderr": 0.04292346959909283,
"acc_norm": 0.76,
"acc_norm_stderr": 0.04292346959909283
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5617021276595745,
"acc_stderr": 0.03243618636108102,
"acc_norm": 0.5617021276595745,
"acc_norm_stderr": 0.03243618636108102
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.5087719298245614,
"acc_stderr": 0.04702880432049615,
"acc_norm": 0.5087719298245614,
"acc_norm_stderr": 0.04702880432049615
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5517241379310345,
"acc_stderr": 0.04144311810878152,
"acc_norm": 0.5517241379310345,
"acc_norm_stderr": 0.04144311810878152
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.41005291005291006,
"acc_stderr": 0.02533120243894443,
"acc_norm": 0.41005291005291006,
"acc_norm_stderr": 0.02533120243894443
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.48412698412698413,
"acc_stderr": 0.04469881854072606,
"acc_norm": 0.48412698412698413,
"acc_norm_stderr": 0.04469881854072606
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.33,
"acc_stderr": 0.04725815626252604,
"acc_norm": 0.33,
"acc_norm_stderr": 0.04725815626252604
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7838709677419354,
"acc_stderr": 0.02341529343356853,
"acc_norm": 0.7838709677419354,
"acc_norm_stderr": 0.02341529343356853
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.5024630541871922,
"acc_stderr": 0.035179450386910616,
"acc_norm": 0.5024630541871922,
"acc_norm_stderr": 0.035179450386910616
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.7,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.7,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7757575757575758,
"acc_stderr": 0.03256866661681102,
"acc_norm": 0.7757575757575758,
"acc_norm_stderr": 0.03256866661681102
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7929292929292929,
"acc_stderr": 0.028869778460267042,
"acc_norm": 0.7929292929292929,
"acc_norm_stderr": 0.028869778460267042
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8860103626943006,
"acc_stderr": 0.022935144053919436,
"acc_norm": 0.8860103626943006,
"acc_norm_stderr": 0.022935144053919436
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6692307692307692,
"acc_stderr": 0.023854795680971125,
"acc_norm": 0.6692307692307692,
"acc_norm_stderr": 0.023854795680971125
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.3296296296296296,
"acc_stderr": 0.02866120111652456,
"acc_norm": 0.3296296296296296,
"acc_norm_stderr": 0.02866120111652456
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6554621848739496,
"acc_stderr": 0.03086868260412162,
"acc_norm": 0.6554621848739496,
"acc_norm_stderr": 0.03086868260412162
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.3509933774834437,
"acc_stderr": 0.03896981964257375,
"acc_norm": 0.3509933774834437,
"acc_norm_stderr": 0.03896981964257375
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8532110091743119,
"acc_stderr": 0.015173141845126243,
"acc_norm": 0.8532110091743119,
"acc_norm_stderr": 0.015173141845126243
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5324074074074074,
"acc_stderr": 0.03402801581358966,
"acc_norm": 0.5324074074074074,
"acc_norm_stderr": 0.03402801581358966
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.8480392156862745,
"acc_stderr": 0.025195658428931792,
"acc_norm": 0.8480392156862745,
"acc_norm_stderr": 0.025195658428931792
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.7932489451476793,
"acc_stderr": 0.0263616516683891,
"acc_norm": 0.7932489451476793,
"acc_norm_stderr": 0.0263616516683891
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.695067264573991,
"acc_stderr": 0.030898610882477515,
"acc_norm": 0.695067264573991,
"acc_norm_stderr": 0.030898610882477515
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.8015267175572519,
"acc_stderr": 0.0349814938546247,
"acc_norm": 0.8015267175572519,
"acc_norm_stderr": 0.0349814938546247
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.7851239669421488,
"acc_stderr": 0.037494924487096966,
"acc_norm": 0.7851239669421488,
"acc_norm_stderr": 0.037494924487096966
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7870370370370371,
"acc_stderr": 0.0395783547198098,
"acc_norm": 0.7870370370370371,
"acc_norm_stderr": 0.0395783547198098
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7730061349693251,
"acc_stderr": 0.03291099578615769,
"acc_norm": 0.7730061349693251,
"acc_norm_stderr": 0.03291099578615769
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.4375,
"acc_stderr": 0.04708567521880525,
"acc_norm": 0.4375,
"acc_norm_stderr": 0.04708567521880525
},
"harness|hendrycksTest-management|5": {
"acc": 0.7766990291262136,
"acc_stderr": 0.04123553189891431,
"acc_norm": 0.7766990291262136,
"acc_norm_stderr": 0.04123553189891431
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8675213675213675,
"acc_stderr": 0.022209309073165612,
"acc_norm": 0.8675213675213675,
"acc_norm_stderr": 0.022209309073165612
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.68,
"acc_stderr": 0.046882617226215034,
"acc_norm": 0.68,
"acc_norm_stderr": 0.046882617226215034
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8250319284802043,
"acc_stderr": 0.013586619219903336,
"acc_norm": 0.8250319284802043,
"acc_norm_stderr": 0.013586619219903336
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7398843930635838,
"acc_stderr": 0.023618678310069356,
"acc_norm": 0.7398843930635838,
"acc_norm_stderr": 0.023618678310069356
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.44581005586592176,
"acc_stderr": 0.01662399851333311,
"acc_norm": 0.44581005586592176,
"acc_norm_stderr": 0.01662399851333311
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7156862745098039,
"acc_stderr": 0.025829163272757482,
"acc_norm": 0.7156862745098039,
"acc_norm_stderr": 0.025829163272757482
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.7106109324758842,
"acc_stderr": 0.025755865922632945,
"acc_norm": 0.7106109324758842,
"acc_norm_stderr": 0.025755865922632945
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.75,
"acc_stderr": 0.02409347123262133,
"acc_norm": 0.75,
"acc_norm_stderr": 0.02409347123262133
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.4787234042553192,
"acc_stderr": 0.029800481645628693,
"acc_norm": 0.4787234042553192,
"acc_norm_stderr": 0.029800481645628693
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.47327249022164275,
"acc_stderr": 0.012751977967676008,
"acc_norm": 0.47327249022164275,
"acc_norm_stderr": 0.012751977967676008
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6727941176470589,
"acc_stderr": 0.028501452860396553,
"acc_norm": 0.6727941176470589,
"acc_norm_stderr": 0.028501452860396553
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6666666666666666,
"acc_stderr": 0.019070985589687495,
"acc_norm": 0.6666666666666666,
"acc_norm_stderr": 0.019070985589687495
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6545454545454545,
"acc_stderr": 0.04554619617541054,
"acc_norm": 0.6545454545454545,
"acc_norm_stderr": 0.04554619617541054
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.746938775510204,
"acc_stderr": 0.027833023871399673,
"acc_norm": 0.746938775510204,
"acc_norm_stderr": 0.027833023871399673
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.835820895522388,
"acc_stderr": 0.026193923544454125,
"acc_norm": 0.835820895522388,
"acc_norm_stderr": 0.026193923544454125
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.89,
"acc_stderr": 0.03144660377352203,
"acc_norm": 0.89,
"acc_norm_stderr": 0.03144660377352203
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5662650602409639,
"acc_stderr": 0.03858158940685516,
"acc_norm": 0.5662650602409639,
"acc_norm_stderr": 0.03858158940685516
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8245614035087719,
"acc_stderr": 0.029170885500727665,
"acc_norm": 0.8245614035087719,
"acc_norm_stderr": 0.029170885500727665
},
"harness|truthfulqa:mc|0": {
"mc1": 0.5556915544675642,
"mc1_stderr": 0.01739458625074318,
"mc2": 0.6887873400281794,
"mc2_stderr": 0.015076928943972381
},
"harness|winogrande|5": {
"acc": 0.8263614838200474,
"acc_stderr": 0.010646116480330996
},
"harness|gsm8k|5": {
"acc": 0.714177407126611,
"acc_stderr": 0.012444963460615634
}
}
```
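To compare tasks at a glance, the nested dict above can be flattened into per-task accuracies; a minimal sketch, assuming the printed dictionary is bound to a variable named `results` (a name introduced here purely for illustration):

```python
# Flatten the nested results dict into (task, accuracy) pairs,
# skipping the "all" aggregate and tasks without an "acc" metric
# (e.g. truthfulqa:mc, which reports mc1/mc2 instead).
per_task = {
    task: metrics["acc"]
    for task, metrics in results.items()
    if task != "all" and "acc" in metrics
}

# Print tasks from weakest to strongest accuracy.
for task, acc in sorted(per_task.items(), key=lambda kv: kv[1]):
    print(f"{task}: {acc:.3f}")
```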
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases, and limitations of the dataset. More information is needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | open-llm-leaderboard/details_mlabonne__Beagle14-7B | [
"region:us"
] | 2024-01-15T17:37:08+00:00 | {"pretty_name": "Evaluation run of mlabonne/Beagle14-7B", "dataset_summary": "Dataset automatically created during the evaluation run of model [mlabonne/Beagle14-7B](https://huggingface.co/mlabonne/Beagle14-7B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_mlabonne__Beagle14-7B\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2024-01-15T17:34:40.781222](https://huggingface.co/datasets/open-llm-leaderboard/details_mlabonne__Beagle14-7B/blob/main/results_2024-01-15T17-34-40.781222.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6528805890883588,\n \"acc_stderr\": 0.032078173732397165,\n \"acc_norm\": 0.6523758548450951,\n \"acc_norm_stderr\": 0.032744687519676414,\n \"mc1\": 0.5556915544675642,\n \"mc1_stderr\": 0.01739458625074318,\n \"mc2\": 0.6887873400281794,\n \"mc2_stderr\": 0.015076928943972381\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.697098976109215,\n \"acc_stderr\": 0.013428241573185349,\n \"acc_norm\": 0.7295221843003413,\n \"acc_norm_stderr\": 0.012980954547659556\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.7069308902609042,\n \"acc_stderr\": 0.004542396269999213,\n \"acc_norm\": 0.8795060744871539,\n \"acc_norm_stderr\": 0.003248729221152882\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.35,\n \"acc_stderr\": 0.0479372485441102,\n \"acc_norm\": 0.35,\n \"acc_norm_stderr\": 0.0479372485441102\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6370370370370371,\n \"acc_stderr\": 0.041539484047423976,\n \"acc_norm\": 0.6370370370370371,\n \"acc_norm_stderr\": 0.041539484047423976\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.6907894736842105,\n \"acc_stderr\": 0.037610708698674805,\n \"acc_norm\": 0.6907894736842105,\n \"acc_norm_stderr\": 0.037610708698674805\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.61,\n \"acc_stderr\": 0.04902071300001975,\n \"acc_norm\": 0.61,\n \"acc_norm_stderr\": 0.04902071300001975\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.7056603773584905,\n \"acc_stderr\": 0.02804918631569525,\n \"acc_norm\": 0.7056603773584905,\n \"acc_norm_stderr\": 0.02804918631569525\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7569444444444444,\n \"acc_stderr\": 0.0358687928008034,\n \"acc_norm\": 0.7569444444444444,\n \"acc_norm_stderr\": 0.0358687928008034\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.47,\n \"acc_stderr\": 0.050161355804659205,\n \"acc_norm\": 0.47,\n 
\"acc_norm_stderr\": 0.050161355804659205\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.55,\n \"acc_stderr\": 0.05,\n \"acc_norm\": 0.55,\n \"acc_norm_stderr\": 0.05\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.31,\n \"acc_stderr\": 0.04648231987117316,\n \"acc_norm\": 0.31,\n \"acc_norm_stderr\": 0.04648231987117316\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6994219653179191,\n \"acc_stderr\": 0.0349610148119118,\n \"acc_norm\": 0.6994219653179191,\n \"acc_norm_stderr\": 0.0349610148119118\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.4215686274509804,\n \"acc_stderr\": 0.04913595201274498,\n \"acc_norm\": 0.4215686274509804,\n \"acc_norm_stderr\": 0.04913595201274498\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.76,\n \"acc_stderr\": 0.04292346959909283,\n \"acc_norm\": 0.76,\n \"acc_norm_stderr\": 0.04292346959909283\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.5617021276595745,\n \"acc_stderr\": 0.03243618636108102,\n \"acc_norm\": 0.5617021276595745,\n \"acc_norm_stderr\": 0.03243618636108102\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.5087719298245614,\n \"acc_stderr\": 0.04702880432049615,\n \"acc_norm\": 0.5087719298245614,\n \"acc_norm_stderr\": 0.04702880432049615\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.5517241379310345,\n \"acc_stderr\": 0.04144311810878152,\n \"acc_norm\": 0.5517241379310345,\n \"acc_norm_stderr\": 0.04144311810878152\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.41005291005291006,\n \"acc_stderr\": 0.02533120243894443,\n \"acc_norm\": 0.41005291005291006,\n \"acc_norm_stderr\": 0.02533120243894443\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.48412698412698413,\n \"acc_stderr\": 0.04469881854072606,\n \"acc_norm\": 0.48412698412698413,\n \"acc_norm_stderr\": 0.04469881854072606\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.33,\n \"acc_stderr\": 0.04725815626252604,\n \"acc_norm\": 0.33,\n \"acc_norm_stderr\": 0.04725815626252604\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7838709677419354,\n \"acc_stderr\": 0.02341529343356853,\n \"acc_norm\": 0.7838709677419354,\n \"acc_norm_stderr\": 0.02341529343356853\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.5024630541871922,\n \"acc_stderr\": 0.035179450386910616,\n \"acc_norm\": 0.5024630541871922,\n \"acc_norm_stderr\": 0.035179450386910616\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.7,\n \"acc_stderr\": 0.046056618647183814,\n \"acc_norm\": 0.7,\n \"acc_norm_stderr\": 0.046056618647183814\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.7757575757575758,\n \"acc_stderr\": 0.03256866661681102,\n \"acc_norm\": 0.7757575757575758,\n \"acc_norm_stderr\": 0.03256866661681102\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.7929292929292929,\n \"acc_stderr\": 0.028869778460267042,\n \"acc_norm\": 0.7929292929292929,\n \"acc_norm_stderr\": 0.028869778460267042\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.8860103626943006,\n \"acc_stderr\": 0.022935144053919436,\n \"acc_norm\": 0.8860103626943006,\n \"acc_norm_stderr\": 0.022935144053919436\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.6692307692307692,\n \"acc_stderr\": 0.023854795680971125,\n 
\"acc_norm\": 0.6692307692307692,\n \"acc_norm_stderr\": 0.023854795680971125\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.3296296296296296,\n \"acc_stderr\": 0.02866120111652456,\n \"acc_norm\": 0.3296296296296296,\n \"acc_norm_stderr\": 0.02866120111652456\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.6554621848739496,\n \"acc_stderr\": 0.03086868260412162,\n \"acc_norm\": 0.6554621848739496,\n \"acc_norm_stderr\": 0.03086868260412162\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.3509933774834437,\n \"acc_stderr\": 0.03896981964257375,\n \"acc_norm\": 0.3509933774834437,\n \"acc_norm_stderr\": 0.03896981964257375\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.8532110091743119,\n \"acc_stderr\": 0.015173141845126243,\n \"acc_norm\": 0.8532110091743119,\n \"acc_norm_stderr\": 0.015173141845126243\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.5324074074074074,\n \"acc_stderr\": 0.03402801581358966,\n \"acc_norm\": 0.5324074074074074,\n \"acc_norm_stderr\": 0.03402801581358966\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.8480392156862745,\n \"acc_stderr\": 0.025195658428931792,\n \"acc_norm\": 0.8480392156862745,\n \"acc_norm_stderr\": 0.025195658428931792\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.7932489451476793,\n \"acc_stderr\": 0.0263616516683891,\n \"acc_norm\": 0.7932489451476793,\n \"acc_norm_stderr\": 0.0263616516683891\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.695067264573991,\n \"acc_stderr\": 0.030898610882477515,\n \"acc_norm\": 0.695067264573991,\n \"acc_norm_stderr\": 0.030898610882477515\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.8015267175572519,\n \"acc_stderr\": 0.0349814938546247,\n \"acc_norm\": 0.8015267175572519,\n \"acc_norm_stderr\": 0.0349814938546247\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.7851239669421488,\n \"acc_stderr\": 0.037494924487096966,\n \"acc_norm\": 0.7851239669421488,\n \"acc_norm_stderr\": 0.037494924487096966\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7870370370370371,\n \"acc_stderr\": 0.0395783547198098,\n \"acc_norm\": 0.7870370370370371,\n \"acc_norm_stderr\": 0.0395783547198098\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.7730061349693251,\n \"acc_stderr\": 0.03291099578615769,\n \"acc_norm\": 0.7730061349693251,\n \"acc_norm_stderr\": 0.03291099578615769\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.4375,\n \"acc_stderr\": 0.04708567521880525,\n \"acc_norm\": 0.4375,\n \"acc_norm_stderr\": 0.04708567521880525\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.7766990291262136,\n \"acc_stderr\": 0.04123553189891431,\n \"acc_norm\": 0.7766990291262136,\n \"acc_norm_stderr\": 0.04123553189891431\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8675213675213675,\n \"acc_stderr\": 0.022209309073165612,\n \"acc_norm\": 0.8675213675213675,\n \"acc_norm_stderr\": 0.022209309073165612\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.68,\n \"acc_stderr\": 0.046882617226215034,\n \"acc_norm\": 0.68,\n \"acc_norm_stderr\": 0.046882617226215034\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8250319284802043,\n \"acc_stderr\": 0.013586619219903336,\n \"acc_norm\": 0.8250319284802043,\n \"acc_norm_stderr\": 0.013586619219903336\n },\n 
\"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.7398843930635838,\n \"acc_stderr\": 0.023618678310069356,\n \"acc_norm\": 0.7398843930635838,\n \"acc_norm_stderr\": 0.023618678310069356\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.44581005586592176,\n \"acc_stderr\": 0.01662399851333311,\n \"acc_norm\": 0.44581005586592176,\n \"acc_norm_stderr\": 0.01662399851333311\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.7156862745098039,\n \"acc_stderr\": 0.025829163272757482,\n \"acc_norm\": 0.7156862745098039,\n \"acc_norm_stderr\": 0.025829163272757482\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7106109324758842,\n \"acc_stderr\": 0.025755865922632945,\n \"acc_norm\": 0.7106109324758842,\n \"acc_norm_stderr\": 0.025755865922632945\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.75,\n \"acc_stderr\": 0.02409347123262133,\n \"acc_norm\": 0.75,\n \"acc_norm_stderr\": 0.02409347123262133\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.4787234042553192,\n \"acc_stderr\": 0.029800481645628693,\n \"acc_norm\": 0.4787234042553192,\n \"acc_norm_stderr\": 0.029800481645628693\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.47327249022164275,\n \"acc_stderr\": 0.012751977967676008,\n \"acc_norm\": 0.47327249022164275,\n \"acc_norm_stderr\": 0.012751977967676008\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.6727941176470589,\n \"acc_stderr\": 0.028501452860396553,\n \"acc_norm\": 0.6727941176470589,\n \"acc_norm_stderr\": 0.028501452860396553\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.6666666666666666,\n \"acc_stderr\": 0.019070985589687495,\n \"acc_norm\": 0.6666666666666666,\n \"acc_norm_stderr\": 0.019070985589687495\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6545454545454545,\n \"acc_stderr\": 0.04554619617541054,\n \"acc_norm\": 0.6545454545454545,\n \"acc_norm_stderr\": 0.04554619617541054\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.746938775510204,\n \"acc_stderr\": 0.027833023871399673,\n \"acc_norm\": 0.746938775510204,\n \"acc_norm_stderr\": 0.027833023871399673\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.835820895522388,\n \"acc_stderr\": 0.026193923544454125,\n \"acc_norm\": 0.835820895522388,\n \"acc_norm_stderr\": 0.026193923544454125\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.89,\n \"acc_stderr\": 0.03144660377352203,\n \"acc_norm\": 0.89,\n \"acc_norm_stderr\": 0.03144660377352203\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5662650602409639,\n \"acc_stderr\": 0.03858158940685516,\n \"acc_norm\": 0.5662650602409639,\n \"acc_norm_stderr\": 0.03858158940685516\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.8245614035087719,\n \"acc_stderr\": 0.029170885500727665,\n \"acc_norm\": 0.8245614035087719,\n \"acc_norm_stderr\": 0.029170885500727665\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.5556915544675642,\n \"mc1_stderr\": 0.01739458625074318,\n \"mc2\": 0.6887873400281794,\n \"mc2_stderr\": 0.015076928943972381\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.8263614838200474,\n \"acc_stderr\": 0.010646116480330996\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.714177407126611,\n \"acc_stderr\": 0.012444963460615634\n }\n}\n```", "repo_url": "https://huggingface.co/mlabonne/Beagle14-7B", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", 
"point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2024_01_15T17_34_40.781222", "path": ["**/details_harness|arc:challenge|25_2024-01-15T17-34-40.781222.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2024-01-15T17-34-40.781222.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2024_01_15T17_34_40.781222", "path": ["**/details_harness|gsm8k|5_2024-01-15T17-34-40.781222.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2024-01-15T17-34-40.781222.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2024_01_15T17_34_40.781222", "path": ["**/details_harness|hellaswag|10_2024-01-15T17-34-40.781222.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2024-01-15T17-34-40.781222.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2024_01_15T17_34_40.781222", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-15T17-34-40.781222.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-15T17-34-40.781222.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-15T17-34-40.781222.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-15T17-34-40.781222.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-15T17-34-40.781222.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-15T17-34-40.781222.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-15T17-34-40.781222.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-15T17-34-40.781222.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-01-15T17-34-40.781222.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-15T17-34-40.781222.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-15T17-34-40.781222.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-15T17-34-40.781222.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-15T17-34-40.781222.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-15T17-34-40.781222.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-15T17-34-40.781222.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-15T17-34-40.781222.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-15T17-34-40.781222.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-15T17-34-40.781222.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-15T17-34-40.781222.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-15T17-34-40.781222.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-15T17-34-40.781222.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-15T17-34-40.781222.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-15T17-34-40.781222.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-15T17-34-40.781222.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-15T17-34-40.781222.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-15T17-34-40.781222.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-15T17-34-40.781222.parquet", 
"**/details_harness|hendrycksTest-high_school_physics|5_2024-01-15T17-34-40.781222.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-15T17-34-40.781222.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-15T17-34-40.781222.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-15T17-34-40.781222.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-15T17-34-40.781222.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-15T17-34-40.781222.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-15T17-34-40.781222.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-15T17-34-40.781222.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-15T17-34-40.781222.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-15T17-34-40.781222.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-15T17-34-40.781222.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-15T17-34-40.781222.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-15T17-34-40.781222.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-15T17-34-40.781222.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-15T17-34-40.781222.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-15T17-34-40.781222.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-15T17-34-40.781222.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-15T17-34-40.781222.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-15T17-34-40.781222.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-15T17-34-40.781222.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-15T17-34-40.781222.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-15T17-34-40.781222.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-15T17-34-40.781222.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-15T17-34-40.781222.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-15T17-34-40.781222.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-15T17-34-40.781222.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-15T17-34-40.781222.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-15T17-34-40.781222.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-15T17-34-40.781222.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-15T17-34-40.781222.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-15T17-34-40.781222.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-15T17-34-40.781222.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-15T17-34-40.781222.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-15T17-34-40.781222.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-15T17-34-40.781222.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-15T17-34-40.781222.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-15T17-34-40.781222.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-15T17-34-40.781222.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-01-15T17-34-40.781222.parquet", 
"**/details_harness|hendrycksTest-college_medicine|5_2024-01-15T17-34-40.781222.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-15T17-34-40.781222.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-15T17-34-40.781222.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-15T17-34-40.781222.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-15T17-34-40.781222.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-15T17-34-40.781222.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-15T17-34-40.781222.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-15T17-34-40.781222.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-15T17-34-40.781222.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-15T17-34-40.781222.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-15T17-34-40.781222.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-15T17-34-40.781222.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-15T17-34-40.781222.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-15T17-34-40.781222.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-15T17-34-40.781222.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-15T17-34-40.781222.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-15T17-34-40.781222.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-15T17-34-40.781222.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-15T17-34-40.781222.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-15T17-34-40.781222.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-15T17-34-40.781222.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-15T17-34-40.781222.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-15T17-34-40.781222.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-15T17-34-40.781222.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-15T17-34-40.781222.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-15T17-34-40.781222.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-15T17-34-40.781222.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-15T17-34-40.781222.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-15T17-34-40.781222.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-15T17-34-40.781222.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-15T17-34-40.781222.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-15T17-34-40.781222.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-15T17-34-40.781222.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-15T17-34-40.781222.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-15T17-34-40.781222.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-15T17-34-40.781222.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-15T17-34-40.781222.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-15T17-34-40.781222.parquet", 
"**/details_harness|hendrycksTest-professional_accounting|5_2024-01-15T17-34-40.781222.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-15T17-34-40.781222.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-15T17-34-40.781222.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-15T17-34-40.781222.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-15T17-34-40.781222.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-15T17-34-40.781222.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-15T17-34-40.781222.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-15T17-34-40.781222.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-15T17-34-40.781222.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-15T17-34-40.781222.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2024_01_15T17_34_40.781222", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-15T17-34-40.781222.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-15T17-34-40.781222.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2024_01_15T17_34_40.781222", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-15T17-34-40.781222.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-15T17-34-40.781222.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2024_01_15T17_34_40.781222", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-15T17-34-40.781222.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-15T17-34-40.781222.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2024_01_15T17_34_40.781222", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-15T17-34-40.781222.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-15T17-34-40.781222.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2024_01_15T17_34_40.781222", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-15T17-34-40.781222.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-15T17-34-40.781222.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2024_01_15T17_34_40.781222", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-15T17-34-40.781222.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-15T17-34-40.781222.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2024_01_15T17_34_40.781222", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-15T17-34-40.781222.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-15T17-34-40.781222.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2024_01_15T17_34_40.781222", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-15T17-34-40.781222.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-15T17-34-40.781222.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2024_01_15T17_34_40.781222", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-15T17-34-40.781222.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-15T17-34-40.781222.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2024_01_15T17_34_40.781222", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-15T17-34-40.781222.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-15T17-34-40.781222.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2024_01_15T17_34_40.781222", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-15T17-34-40.781222.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-15T17-34-40.781222.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2024_01_15T17_34_40.781222", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-15T17-34-40.781222.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-15T17-34-40.781222.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2024_01_15T17_34_40.781222", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-15T17-34-40.781222.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-15T17-34-40.781222.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2024_01_15T17_34_40.781222", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-15T17-34-40.781222.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-15T17-34-40.781222.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2024_01_15T17_34_40.781222", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-15T17-34-40.781222.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-15T17-34-40.781222.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2024_01_15T17_34_40.781222", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-15T17-34-40.781222.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-15T17-34-40.781222.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2024_01_15T17_34_40.781222", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-15T17-34-40.781222.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-15T17-34-40.781222.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2024_01_15T17_34_40.781222", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-15T17-34-40.781222.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-15T17-34-40.781222.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2024_01_15T17_34_40.781222", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-15T17-34-40.781222.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-15T17-34-40.781222.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2024_01_15T17_34_40.781222", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-15T17-34-40.781222.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-15T17-34-40.781222.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2024_01_15T17_34_40.781222", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-15T17-34-40.781222.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-15T17-34-40.781222.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2024_01_15T17_34_40.781222", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-15T17-34-40.781222.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-15T17-34-40.781222.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2024_01_15T17_34_40.781222", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-15T17-34-40.781222.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-15T17-34-40.781222.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2024_01_15T17_34_40.781222", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-15T17-34-40.781222.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-15T17-34-40.781222.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2024_01_15T17_34_40.781222", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-15T17-34-40.781222.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-15T17-34-40.781222.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2024_01_15T17_34_40.781222", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-15T17-34-40.781222.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-15T17-34-40.781222.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2024_01_15T17_34_40.781222", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-15T17-34-40.781222.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-15T17-34-40.781222.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2024_01_15T17_34_40.781222", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-15T17-34-40.781222.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-15T17-34-40.781222.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2024_01_15T17_34_40.781222", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-15T17-34-40.781222.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-15T17-34-40.781222.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2024_01_15T17_34_40.781222", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-15T17-34-40.781222.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-15T17-34-40.781222.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2024_01_15T17_34_40.781222", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-15T17-34-40.781222.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-15T17-34-40.781222.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2024_01_15T17_34_40.781222", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-15T17-34-40.781222.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-15T17-34-40.781222.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2024_01_15T17_34_40.781222", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-15T17-34-40.781222.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-15T17-34-40.781222.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2024_01_15T17_34_40.781222", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-15T17-34-40.781222.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-15T17-34-40.781222.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2024_01_15T17_34_40.781222", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-15T17-34-40.781222.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-15T17-34-40.781222.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2024_01_15T17_34_40.781222", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-15T17-34-40.781222.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-15T17-34-40.781222.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2024_01_15T17_34_40.781222", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-15T17-34-40.781222.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-15T17-34-40.781222.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2024_01_15T17_34_40.781222", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-15T17-34-40.781222.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-15T17-34-40.781222.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2024_01_15T17_34_40.781222", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-15T17-34-40.781222.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-15T17-34-40.781222.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2024_01_15T17_34_40.781222", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-15T17-34-40.781222.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-15T17-34-40.781222.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2024_01_15T17_34_40.781222", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-15T17-34-40.781222.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-15T17-34-40.781222.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2024_01_15T17_34_40.781222", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-15T17-34-40.781222.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-15T17-34-40.781222.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2024_01_15T17_34_40.781222", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-15T17-34-40.781222.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-15T17-34-40.781222.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2024_01_15T17_34_40.781222", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-15T17-34-40.781222.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-15T17-34-40.781222.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2024_01_15T17_34_40.781222", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-15T17-34-40.781222.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-15T17-34-40.781222.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2024_01_15T17_34_40.781222", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-15T17-34-40.781222.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-15T17-34-40.781222.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2024_01_15T17_34_40.781222", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-15T17-34-40.781222.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-15T17-34-40.781222.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2024_01_15T17_34_40.781222", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-15T17-34-40.781222.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-15T17-34-40.781222.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2024_01_15T17_34_40.781222", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-01-15T17-34-40.781222.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-professional_law|5_2024-01-15T17-34-40.781222.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2024_01_15T17_34_40.781222", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-15T17-34-40.781222.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-15T17-34-40.781222.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2024_01_15T17_34_40.781222", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-15T17-34-40.781222.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-15T17-34-40.781222.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2024_01_15T17_34_40.781222", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-15T17-34-40.781222.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-15T17-34-40.781222.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2024_01_15T17_34_40.781222", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-15T17-34-40.781222.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-15T17-34-40.781222.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2024_01_15T17_34_40.781222", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-15T17-34-40.781222.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-15T17-34-40.781222.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2024_01_15T17_34_40.781222", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-15T17-34-40.781222.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-15T17-34-40.781222.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2024_01_15T17_34_40.781222", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-15T17-34-40.781222.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-15T17-34-40.781222.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2024_01_15T17_34_40.781222", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-15T17-34-40.781222.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-15T17-34-40.781222.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2024_01_15T17_34_40.781222", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-15T17-34-40.781222.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-15T17-34-40.781222.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2024_01_15T17_34_40.781222", "path": ["**/details_harness|winogrande|5_2024-01-15T17-34-40.781222.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2024-01-15T17-34-40.781222.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2024_01_15T17_34_40.781222", "path": ["results_2024-01-15T17-34-40.781222.parquet"]}, {"split": "latest", "path": 
["results_2024-01-15T17-34-40.781222.parquet"]}]}]} | 2024-01-15T17:37:33+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of mlabonne/Beagle14-7B
Dataset automatically created during the evaluation run of model mlabonne/Beagle14-7B on the Open LLM Leaderboard.
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
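The original snippet was stripped from this plain-text rendering of the card, so a minimal reconstruction is sketched below. The repository id `open-llm-leaderboard/details_mlabonne__Beagle14-7B` and the `harness_winogrande_5` config are assumptions, inferred from the `details_<org>__<model>` naming convention the full cards in this document use.

```python
from datasets import load_dataset

# Hypothetical reconstruction of the stripped snippet; the repo id and
# config name follow the open-llm-leaderboard naming convention.
data = load_dataset("open-llm-leaderboard/details_mlabonne__Beagle14-7B",
	"harness_winogrande_5",
	split="train")
```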
## Latest results
These are the latest results from run 2024-01-15T17:34:40.781222 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
## Dataset Details
### Dataset Description
- Curated by:
- Funded by [optional]:
- Shared by [optional]:
- Language(s) (NLP):
- License:
### Dataset Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Out-of-Scope Use
## Dataset Structure
## Dataset Creation
### Curation Rationale
### Source Data
#### Data Collection and Processing
#### Who are the source data producers?
### Annotations [optional]
#### Annotation process
#### Who are the annotators?
#### Personal and Sensitive Information
## Bias, Risks, and Limitations
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Dataset Card Authors [optional]
## Dataset Card Contact
7846f80f6f412154c76325aec0de76134029b52a |
# Dataset Card for Evaluation run of BAAI/Aquila2-34B
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [BAAI/Aquila2-34B](https://huggingface.co/BAAI/Aquila2-34B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_BAAI__Aquila2-34B",
"harness_winogrande_5",
split="train")
```
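Because each run is stored as a timestamped split inside every configuration, it can help to enumerate the available configs and splits programmatically before loading. A minimal sketch using the `datasets` inspection helpers (assuming network access to the Hugging Face Hub):

```python
from datasets import get_dataset_config_names, get_dataset_split_names

repo = "open-llm-leaderboard/details_BAAI__Aquila2-34B"
configs = get_dataset_config_names(repo)  # per-task configs plus "results"
splits = get_dataset_split_names(repo, "harness_winogrande_5")
print(splits)  # timestamped run splits plus "latest"
```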
## Latest results
These are the [latest results from run 2024-01-15T18:37:14.451844](https://huggingface.co/datasets/open-llm-leaderboard/details_BAAI__Aquila2-34B/blob/main/results_2024-01-15T18-37-14.451844.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.7421090841218929,
"acc_stderr": 0.028617632191882958,
"acc_norm": 0.7572926151712773,
"acc_norm_stderr": 0.02933086673337662,
"mc1": 0.2802937576499388,
"mc1_stderr": 0.015723139524608753,
"mc2": 0.40853761852658155,
"mc2_stderr": 0.014823421659209666
},
"harness|arc:challenge|25": {
"acc": 0.5273037542662116,
"acc_stderr": 0.014589589101985994,
"acc_norm": 0.5247440273037542,
"acc_norm_stderr": 0.01459348769493774
},
"harness|hellaswag|10": {
"acc": 0.643397729535949,
"acc_stderr": 0.004780169873332854,
"acc_norm": 0.8189603664608643,
"acc_norm_stderr": 0.0038426408003615128
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.47,
"acc_stderr": 0.05016135580465919,
"acc_norm": 0.47,
"acc_norm_stderr": 0.05016135580465919
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.725925925925926,
"acc_stderr": 0.03853254836552003,
"acc_norm": 0.725925925925926,
"acc_norm_stderr": 0.03853254836552003
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.7828947368421053,
"acc_stderr": 0.033550453048829226,
"acc_norm": 0.7828947368421053,
"acc_norm_stderr": 0.033550453048829226
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.81,
"acc_stderr": 0.039427724440366234,
"acc_norm": 0.81,
"acc_norm_stderr": 0.039427724440366234
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.8,
"acc_stderr": 0.024618298195866518,
"acc_norm": 0.8,
"acc_norm_stderr": 0.024618298195866518
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.8402777777777778,
"acc_stderr": 0.030635578972093288,
"acc_norm": 0.8402777777777778,
"acc_norm_stderr": 0.030635578972093288
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.6,
"acc_stderr": 0.049236596391733084,
"acc_norm": 0.6,
"acc_norm_stderr": 0.049236596391733084
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.58,
"acc_stderr": 0.049604496374885836,
"acc_norm": 0.58,
"acc_norm_stderr": 0.049604496374885836
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.56,
"acc_stderr": 0.049888765156985884,
"acc_norm": 0.56,
"acc_norm_stderr": 0.049888765156985884
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.7687861271676301,
"acc_stderr": 0.03214737302029469,
"acc_norm": 0.7687861271676301,
"acc_norm_stderr": 0.03214737302029469
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.6176470588235294,
"acc_stderr": 0.04835503696107223,
"acc_norm": 0.6176470588235294,
"acc_norm_stderr": 0.04835503696107223
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.88,
"acc_stderr": 0.032659863237109066,
"acc_norm": 0.88,
"acc_norm_stderr": 0.032659863237109066
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.6893617021276596,
"acc_stderr": 0.03025123757921317,
"acc_norm": 0.6893617021276596,
"acc_norm_stderr": 0.03025123757921317
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.5614035087719298,
"acc_stderr": 0.04668000738510455,
"acc_norm": 0.5614035087719298,
"acc_norm_stderr": 0.04668000738510455
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.6896551724137931,
"acc_stderr": 0.03855289616378948,
"acc_norm": 0.6896551724137931,
"acc_norm_stderr": 0.03855289616378948
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.5740740740740741,
"acc_stderr": 0.02546714904546955,
"acc_norm": 0.5740740740740741,
"acc_norm_stderr": 0.02546714904546955
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.5079365079365079,
"acc_stderr": 0.044715725362943486,
"acc_norm": 0.5079365079365079,
"acc_norm_stderr": 0.044715725362943486
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.35,
"acc_stderr": 0.0479372485441102,
"acc_norm": 0.35,
"acc_norm_stderr": 0.0479372485441102
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.8258064516129032,
"acc_stderr": 0.02157624818451457,
"acc_norm": 0.8258064516129032,
"acc_norm_stderr": 0.02157624818451457
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.6600985221674877,
"acc_stderr": 0.033327690684107895,
"acc_norm": 0.6600985221674877,
"acc_norm_stderr": 0.033327690684107895
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.82,
"acc_stderr": 0.03861229196653694,
"acc_norm": 0.82,
"acc_norm_stderr": 0.03861229196653694
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.8424242424242424,
"acc_stderr": 0.028450388805284332,
"acc_norm": 0.8424242424242424,
"acc_norm_stderr": 0.028450388805284332
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.898989898989899,
"acc_stderr": 0.021469735576055353,
"acc_norm": 0.898989898989899,
"acc_norm_stderr": 0.021469735576055353
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8911917098445595,
"acc_stderr": 0.022473253332768738,
"acc_norm": 0.8911917098445595,
"acc_norm_stderr": 0.022473253332768738
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.7846153846153846,
"acc_stderr": 0.020843034557462878,
"acc_norm": 0.7846153846153846,
"acc_norm_stderr": 0.020843034557462878
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.5333333333333333,
"acc_stderr": 0.030417716961717474,
"acc_norm": 0.5333333333333333,
"acc_norm_stderr": 0.030417716961717474
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.7689075630252101,
"acc_stderr": 0.027381406927868886,
"acc_norm": 0.7689075630252101,
"acc_norm_stderr": 0.027381406927868886
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.5695364238410596,
"acc_stderr": 0.04042809961395634,
"acc_norm": 0.5695364238410596,
"acc_norm_stderr": 0.04042809961395634
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.9064220183486239,
"acc_stderr": 0.012486841824601967,
"acc_norm": 0.9064220183486239,
"acc_norm_stderr": 0.012486841824601967
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.7037037037037037,
"acc_stderr": 0.031141447823536023,
"acc_norm": 0.7037037037037037,
"acc_norm_stderr": 0.031141447823536023
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.8823529411764706,
"acc_stderr": 0.022613286601132012,
"acc_norm": 0.8823529411764706,
"acc_norm_stderr": 0.022613286601132012
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.8987341772151899,
"acc_stderr": 0.019637720526065508,
"acc_norm": 0.8987341772151899,
"acc_norm_stderr": 0.019637720526065508
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.8385650224215246,
"acc_stderr": 0.02469395789912846,
"acc_norm": 0.8385650224215246,
"acc_norm_stderr": 0.02469395789912846
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.8396946564885496,
"acc_stderr": 0.03217829420744631,
"acc_norm": 0.8396946564885496,
"acc_norm_stderr": 0.03217829420744631
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.9090909090909091,
"acc_stderr": 0.026243194054073885,
"acc_norm": 0.9090909090909091,
"acc_norm_stderr": 0.026243194054073885
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.8518518518518519,
"acc_stderr": 0.03434300243631,
"acc_norm": 0.8518518518518519,
"acc_norm_stderr": 0.03434300243631
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.9079754601226994,
"acc_stderr": 0.02271074471568872,
"acc_norm": 0.9079754601226994,
"acc_norm_stderr": 0.02271074471568872
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.5178571428571429,
"acc_stderr": 0.04742762361243011,
"acc_norm": 0.5178571428571429,
"acc_norm_stderr": 0.04742762361243011
},
"harness|hendrycksTest-management|5": {
"acc": 0.8640776699029126,
"acc_stderr": 0.033932957297610096,
"acc_norm": 0.8640776699029126,
"acc_norm_stderr": 0.033932957297610096
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.9316239316239316,
"acc_stderr": 0.016534627684311357,
"acc_norm": 0.9316239316239316,
"acc_norm_stderr": 0.016534627684311357
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.76,
"acc_stderr": 0.04292346959909281,
"acc_norm": 0.76,
"acc_norm_stderr": 0.04292346959909281
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8773946360153256,
"acc_stderr": 0.011728672144131565,
"acc_norm": 0.8773946360153256,
"acc_norm_stderr": 0.011728672144131565
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.8208092485549133,
"acc_stderr": 0.020647590029679332,
"acc_norm": 0.8208092485549133,
"acc_norm_stderr": 0.020647590029679332
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.6547486033519553,
"acc_stderr": 0.015901432608930358,
"acc_norm": 0.6547486033519553,
"acc_norm_stderr": 0.015901432608930358
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.803921568627451,
"acc_stderr": 0.0227337894054476,
"acc_norm": 0.803921568627451,
"acc_norm_stderr": 0.0227337894054476
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.842443729903537,
"acc_stderr": 0.020692237273583994,
"acc_norm": 0.842443729903537,
"acc_norm_stderr": 0.020692237273583994
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.8364197530864198,
"acc_stderr": 0.020581466138257107,
"acc_norm": 0.8364197530864198,
"acc_norm_stderr": 0.020581466138257107
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.7092198581560284,
"acc_stderr": 0.027090664368353178,
"acc_norm": 0.7092198581560284,
"acc_norm_stderr": 0.027090664368353178
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.7073011734028684,
"acc_stderr": 0.011620949195849536,
"acc_norm": 0.7073011734028684,
"acc_norm_stderr": 0.011620949195849536
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.8272058823529411,
"acc_stderr": 0.02296606758558178,
"acc_norm": 0.8272058823529411,
"acc_norm_stderr": 0.02296606758558178
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.8218954248366013,
"acc_stderr": 0.015478369653108568,
"acc_norm": 0.8218954248366013,
"acc_norm_stderr": 0.015478369653108568
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.8090909090909091,
"acc_stderr": 0.03764425585984927,
"acc_norm": 0.8090909090909091,
"acc_norm_stderr": 0.03764425585984927
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.8653061224489796,
"acc_stderr": 0.021855658840811615,
"acc_norm": 0.8653061224489796,
"acc_norm_stderr": 0.021855658840811615
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.9203980099502488,
"acc_stderr": 0.01913968563350382,
"acc_norm": 0.9203980099502488,
"acc_norm_stderr": 0.01913968563350382
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.93,
"acc_stderr": 0.025643239997624294,
"acc_norm": 0.93,
"acc_norm_stderr": 0.025643239997624294
},
"harness|hendrycksTest-virology|5": {
"acc": 0.7891566265060241,
"acc_stderr": 0.03175554786629919,
"acc_norm": 0.7891566265060241,
"acc_norm_stderr": 0.03175554786629919
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.9064327485380117,
"acc_stderr": 0.02233599323116327,
"acc_norm": 0.9064327485380117,
"acc_norm_stderr": 0.02233599323116327
},
"harness|truthfulqa:mc|0": {
"mc1": 0.2802937576499388,
"mc1_stderr": 0.015723139524608753,
"mc2": 0.40853761852658155,
"mc2_stderr": 0.014823421659209666
},
"harness|winogrande|5": {
"acc": 0.755327545382794,
"acc_stderr": 0.012082125654159738
},
"harness|gsm8k|5": {
"acc": 0.006065200909780136,
"acc_stderr": 0.00213867030146047
}
}
```
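To work with these aggregated numbers programmatically instead of copying them from the card, the "results" configuration can be loaded directly. A sketch under the assumption that each record mirrors the JSON above (one record per run, keyed by task); the exact field layout is not confirmed by this card:

```python
from datasets import load_dataset

# "results" config and "latest" split as described in the summary above.
results = load_dataset("open-llm-leaderboard/details_BAAI__Aquila2-34B",
	"results",
	split="latest")
print(results[0])  # record layout is an assumption; inspect before relying on it
```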
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | open-llm-leaderboard/details_BAAI__Aquila2-34B | [
"region:us"
] | 2024-01-15T18:29:44+00:00 | {"pretty_name": "Evaluation run of BAAI/Aquila2-34B", "dataset_summary": "Dataset automatically created during the evaluation run of model [BAAI/Aquila2-34B](https://huggingface.co/BAAI/Aquila2-34B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_BAAI__Aquila2-34B\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2024-01-15T18:37:14.451844](https://huggingface.co/datasets/open-llm-leaderboard/details_BAAI__Aquila2-34B/blob/main/results_2024-01-15T18-37-14.451844.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.7421090841218929,\n \"acc_stderr\": 0.028617632191882958,\n \"acc_norm\": 0.7572926151712773,\n \"acc_norm_stderr\": 0.02933086673337662,\n \"mc1\": 0.2802937576499388,\n \"mc1_stderr\": 0.015723139524608753,\n \"mc2\": 0.40853761852658155,\n \"mc2_stderr\": 0.014823421659209666\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.5273037542662116,\n \"acc_stderr\": 0.014589589101985994,\n \"acc_norm\": 0.5247440273037542,\n \"acc_norm_stderr\": 0.01459348769493774\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.643397729535949,\n \"acc_stderr\": 0.004780169873332854,\n \"acc_norm\": 0.8189603664608643,\n \"acc_norm_stderr\": 0.0038426408003615128\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.47,\n \"acc_stderr\": 0.05016135580465919,\n \"acc_norm\": 0.47,\n \"acc_norm_stderr\": 0.05016135580465919\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.725925925925926,\n \"acc_stderr\": 0.03853254836552003,\n \"acc_norm\": 0.725925925925926,\n \"acc_norm_stderr\": 0.03853254836552003\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.7828947368421053,\n \"acc_stderr\": 0.033550453048829226,\n \"acc_norm\": 0.7828947368421053,\n \"acc_norm_stderr\": 0.033550453048829226\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.81,\n \"acc_stderr\": 0.039427724440366234,\n \"acc_norm\": 0.81,\n \"acc_norm_stderr\": 0.039427724440366234\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.8,\n \"acc_stderr\": 0.024618298195866518,\n \"acc_norm\": 0.8,\n \"acc_norm_stderr\": 0.024618298195866518\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.8402777777777778,\n \"acc_stderr\": 0.030635578972093288,\n \"acc_norm\": 0.8402777777777778,\n \"acc_norm_stderr\": 0.030635578972093288\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.6,\n \"acc_stderr\": 0.049236596391733084,\n \"acc_norm\": 0.6,\n \"acc_norm_stderr\": 0.049236596391733084\n 
},\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.58,\n \"acc_stderr\": 0.049604496374885836,\n \"acc_norm\": 0.58,\n \"acc_norm_stderr\": 0.049604496374885836\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.56,\n \"acc_stderr\": 0.049888765156985884,\n \"acc_norm\": 0.56,\n \"acc_norm_stderr\": 0.049888765156985884\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.7687861271676301,\n \"acc_stderr\": 0.03214737302029469,\n \"acc_norm\": 0.7687861271676301,\n \"acc_norm_stderr\": 0.03214737302029469\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.6176470588235294,\n \"acc_stderr\": 0.04835503696107223,\n \"acc_norm\": 0.6176470588235294,\n \"acc_norm_stderr\": 0.04835503696107223\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.88,\n \"acc_stderr\": 0.032659863237109066,\n \"acc_norm\": 0.88,\n \"acc_norm_stderr\": 0.032659863237109066\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.6893617021276596,\n \"acc_stderr\": 0.03025123757921317,\n \"acc_norm\": 0.6893617021276596,\n \"acc_norm_stderr\": 0.03025123757921317\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.5614035087719298,\n \"acc_stderr\": 0.04668000738510455,\n \"acc_norm\": 0.5614035087719298,\n \"acc_norm_stderr\": 0.04668000738510455\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.6896551724137931,\n \"acc_stderr\": 0.03855289616378948,\n \"acc_norm\": 0.6896551724137931,\n \"acc_norm_stderr\": 0.03855289616378948\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.5740740740740741,\n \"acc_stderr\": 0.02546714904546955,\n \"acc_norm\": 0.5740740740740741,\n \"acc_norm_stderr\": 0.02546714904546955\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.5079365079365079,\n \"acc_stderr\": 0.044715725362943486,\n \"acc_norm\": 0.5079365079365079,\n \"acc_norm_stderr\": 0.044715725362943486\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.35,\n \"acc_stderr\": 0.0479372485441102,\n \"acc_norm\": 0.35,\n \"acc_norm_stderr\": 0.0479372485441102\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.8258064516129032,\n \"acc_stderr\": 0.02157624818451457,\n \"acc_norm\": 0.8258064516129032,\n \"acc_norm_stderr\": 0.02157624818451457\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.6600985221674877,\n \"acc_stderr\": 0.033327690684107895,\n \"acc_norm\": 0.6600985221674877,\n \"acc_norm_stderr\": 0.033327690684107895\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.82,\n \"acc_stderr\": 0.03861229196653694,\n \"acc_norm\": 0.82,\n \"acc_norm_stderr\": 0.03861229196653694\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.8424242424242424,\n \"acc_stderr\": 0.028450388805284332,\n \"acc_norm\": 0.8424242424242424,\n \"acc_norm_stderr\": 0.028450388805284332\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.898989898989899,\n \"acc_stderr\": 0.021469735576055353,\n \"acc_norm\": 0.898989898989899,\n \"acc_norm_stderr\": 0.021469735576055353\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.8911917098445595,\n \"acc_stderr\": 0.022473253332768738,\n \"acc_norm\": 0.8911917098445595,\n \"acc_norm_stderr\": 0.022473253332768738\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.7846153846153846,\n \"acc_stderr\": 0.020843034557462878,\n 
\"acc_norm\": 0.7846153846153846,\n \"acc_norm_stderr\": 0.020843034557462878\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.5333333333333333,\n \"acc_stderr\": 0.030417716961717474,\n \"acc_norm\": 0.5333333333333333,\n \"acc_norm_stderr\": 0.030417716961717474\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.7689075630252101,\n \"acc_stderr\": 0.027381406927868886,\n \"acc_norm\": 0.7689075630252101,\n \"acc_norm_stderr\": 0.027381406927868886\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.5695364238410596,\n \"acc_stderr\": 0.04042809961395634,\n \"acc_norm\": 0.5695364238410596,\n \"acc_norm_stderr\": 0.04042809961395634\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.9064220183486239,\n \"acc_stderr\": 0.012486841824601967,\n \"acc_norm\": 0.9064220183486239,\n \"acc_norm_stderr\": 0.012486841824601967\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.7037037037037037,\n \"acc_stderr\": 0.031141447823536023,\n \"acc_norm\": 0.7037037037037037,\n \"acc_norm_stderr\": 0.031141447823536023\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.8823529411764706,\n \"acc_stderr\": 0.022613286601132012,\n \"acc_norm\": 0.8823529411764706,\n \"acc_norm_stderr\": 0.022613286601132012\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.8987341772151899,\n \"acc_stderr\": 0.019637720526065508,\n \"acc_norm\": 0.8987341772151899,\n \"acc_norm_stderr\": 0.019637720526065508\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.8385650224215246,\n \"acc_stderr\": 0.02469395789912846,\n \"acc_norm\": 0.8385650224215246,\n \"acc_norm_stderr\": 0.02469395789912846\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.8396946564885496,\n \"acc_stderr\": 0.03217829420744631,\n \"acc_norm\": 0.8396946564885496,\n \"acc_norm_stderr\": 0.03217829420744631\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.9090909090909091,\n \"acc_stderr\": 0.026243194054073885,\n \"acc_norm\": 0.9090909090909091,\n \"acc_norm_stderr\": 0.026243194054073885\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.8518518518518519,\n \"acc_stderr\": 0.03434300243631,\n \"acc_norm\": 0.8518518518518519,\n \"acc_norm_stderr\": 0.03434300243631\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.9079754601226994,\n \"acc_stderr\": 0.02271074471568872,\n \"acc_norm\": 0.9079754601226994,\n \"acc_norm_stderr\": 0.02271074471568872\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.5178571428571429,\n \"acc_stderr\": 0.04742762361243011,\n \"acc_norm\": 0.5178571428571429,\n \"acc_norm_stderr\": 0.04742762361243011\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.8640776699029126,\n \"acc_stderr\": 0.033932957297610096,\n \"acc_norm\": 0.8640776699029126,\n \"acc_norm_stderr\": 0.033932957297610096\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.9316239316239316,\n \"acc_stderr\": 0.016534627684311357,\n \"acc_norm\": 0.9316239316239316,\n \"acc_norm_stderr\": 0.016534627684311357\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.76,\n \"acc_stderr\": 0.04292346959909281,\n \"acc_norm\": 0.76,\n \"acc_norm_stderr\": 0.04292346959909281\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8773946360153256,\n \"acc_stderr\": 0.011728672144131565,\n \"acc_norm\": 0.8773946360153256,\n \"acc_norm_stderr\": 
0.011728672144131565\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.8208092485549133,\n \"acc_stderr\": 0.020647590029679332,\n \"acc_norm\": 0.8208092485549133,\n \"acc_norm_stderr\": 0.020647590029679332\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.6547486033519553,\n \"acc_stderr\": 0.015901432608930358,\n \"acc_norm\": 0.6547486033519553,\n \"acc_norm_stderr\": 0.015901432608930358\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.803921568627451,\n \"acc_stderr\": 0.0227337894054476,\n \"acc_norm\": 0.803921568627451,\n \"acc_norm_stderr\": 0.0227337894054476\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.842443729903537,\n \"acc_stderr\": 0.020692237273583994,\n \"acc_norm\": 0.842443729903537,\n \"acc_norm_stderr\": 0.020692237273583994\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.8364197530864198,\n \"acc_stderr\": 0.020581466138257107,\n \"acc_norm\": 0.8364197530864198,\n \"acc_norm_stderr\": 0.020581466138257107\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.7092198581560284,\n \"acc_stderr\": 0.027090664368353178,\n \"acc_norm\": 0.7092198581560284,\n \"acc_norm_stderr\": 0.027090664368353178\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.7073011734028684,\n \"acc_stderr\": 0.011620949195849536,\n \"acc_norm\": 0.7073011734028684,\n \"acc_norm_stderr\": 0.011620949195849536\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.8272058823529411,\n \"acc_stderr\": 0.02296606758558178,\n \"acc_norm\": 0.8272058823529411,\n \"acc_norm_stderr\": 0.02296606758558178\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.8218954248366013,\n \"acc_stderr\": 0.015478369653108568,\n \"acc_norm\": 0.8218954248366013,\n \"acc_norm_stderr\": 0.015478369653108568\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.8090909090909091,\n \"acc_stderr\": 0.03764425585984927,\n \"acc_norm\": 0.8090909090909091,\n \"acc_norm_stderr\": 0.03764425585984927\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.8653061224489796,\n \"acc_stderr\": 0.021855658840811615,\n \"acc_norm\": 0.8653061224489796,\n \"acc_norm_stderr\": 0.021855658840811615\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.9203980099502488,\n \"acc_stderr\": 0.01913968563350382,\n \"acc_norm\": 0.9203980099502488,\n \"acc_norm_stderr\": 0.01913968563350382\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.93,\n \"acc_stderr\": 0.025643239997624294,\n \"acc_norm\": 0.93,\n \"acc_norm_stderr\": 0.025643239997624294\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.7891566265060241,\n \"acc_stderr\": 0.03175554786629919,\n \"acc_norm\": 0.7891566265060241,\n \"acc_norm_stderr\": 0.03175554786629919\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.9064327485380117,\n \"acc_stderr\": 0.02233599323116327,\n \"acc_norm\": 0.9064327485380117,\n \"acc_norm_stderr\": 0.02233599323116327\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.2802937576499388,\n \"mc1_stderr\": 0.015723139524608753,\n \"mc2\": 0.40853761852658155,\n \"mc2_stderr\": 0.014823421659209666\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.755327545382794,\n \"acc_stderr\": 0.012082125654159738\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.006065200909780136,\n \"acc_stderr\": 0.00213867030146047\n }\n}\n```", "repo_url": "https://huggingface.co/BAAI/Aquila2-34B", "leaderboard_url": 
"https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2024_01_15T18_27_33.218553", "path": ["**/details_harness|arc:challenge|25_2024-01-15T18-27-33.218553.parquet"]}, {"split": "2024_01_15T18_37_14.451844", "path": ["**/details_harness|arc:challenge|25_2024-01-15T18-37-14.451844.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2024-01-15T18-37-14.451844.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2024_01_15T18_27_33.218553", "path": ["**/details_harness|gsm8k|5_2024-01-15T18-27-33.218553.parquet"]}, {"split": "2024_01_15T18_37_14.451844", "path": ["**/details_harness|gsm8k|5_2024-01-15T18-37-14.451844.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2024-01-15T18-37-14.451844.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2024_01_15T18_27_33.218553", "path": ["**/details_harness|hellaswag|10_2024-01-15T18-27-33.218553.parquet"]}, {"split": "2024_01_15T18_37_14.451844", "path": ["**/details_harness|hellaswag|10_2024-01-15T18-37-14.451844.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2024-01-15T18-37-14.451844.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2024_01_15T18_27_33.218553", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-15T18-27-33.218553.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-15T18-27-33.218553.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-15T18-27-33.218553.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-15T18-27-33.218553.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-15T18-27-33.218553.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-15T18-27-33.218553.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-15T18-27-33.218553.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-15T18-27-33.218553.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-01-15T18-27-33.218553.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-15T18-27-33.218553.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-15T18-27-33.218553.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-15T18-27-33.218553.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-15T18-27-33.218553.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-15T18-27-33.218553.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-15T18-27-33.218553.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-15T18-27-33.218553.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-15T18-27-33.218553.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-15T18-27-33.218553.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-15T18-27-33.218553.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-15T18-27-33.218553.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-15T18-27-33.218553.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-15T18-27-33.218553.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-15T18-27-33.218553.parquet", 
"**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-15T18-27-33.218553.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-15T18-27-33.218553.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-15T18-27-33.218553.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-15T18-27-33.218553.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-15T18-27-33.218553.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-15T18-27-33.218553.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-15T18-27-33.218553.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-15T18-27-33.218553.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-15T18-27-33.218553.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-15T18-27-33.218553.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-15T18-27-33.218553.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-15T18-27-33.218553.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-15T18-27-33.218553.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-15T18-27-33.218553.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-15T18-27-33.218553.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-15T18-27-33.218553.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-15T18-27-33.218553.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-15T18-27-33.218553.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-15T18-27-33.218553.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-15T18-27-33.218553.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-15T18-27-33.218553.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-15T18-27-33.218553.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-15T18-27-33.218553.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-15T18-27-33.218553.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-15T18-27-33.218553.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-15T18-27-33.218553.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-15T18-27-33.218553.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-15T18-27-33.218553.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-15T18-27-33.218553.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-15T18-27-33.218553.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-15T18-27-33.218553.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-15T18-27-33.218553.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-15T18-27-33.218553.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-15T18-27-33.218553.parquet"]}, {"split": "2024_01_15T18_37_14.451844", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-15T18-37-14.451844.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-15T18-37-14.451844.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-15T18-37-14.451844.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-15T18-37-14.451844.parquet", 
"**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-15T18-37-14.451844.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-15T18-37-14.451844.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-15T18-37-14.451844.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-15T18-37-14.451844.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-01-15T18-37-14.451844.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-15T18-37-14.451844.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-15T18-37-14.451844.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-15T18-37-14.451844.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-15T18-37-14.451844.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-15T18-37-14.451844.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-15T18-37-14.451844.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-15T18-37-14.451844.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-15T18-37-14.451844.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-15T18-37-14.451844.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-15T18-37-14.451844.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-15T18-37-14.451844.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-15T18-37-14.451844.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-15T18-37-14.451844.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-15T18-37-14.451844.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-15T18-37-14.451844.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-15T18-37-14.451844.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-15T18-37-14.451844.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-15T18-37-14.451844.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-15T18-37-14.451844.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-15T18-37-14.451844.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-15T18-37-14.451844.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-15T18-37-14.451844.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-15T18-37-14.451844.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-15T18-37-14.451844.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-15T18-37-14.451844.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-15T18-37-14.451844.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-15T18-37-14.451844.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-15T18-37-14.451844.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-15T18-37-14.451844.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-15T18-37-14.451844.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-15T18-37-14.451844.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-15T18-37-14.451844.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-15T18-37-14.451844.parquet", 
"**/details_harness|hendrycksTest-moral_disputes|5_2024-01-15T18-37-14.451844.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-15T18-37-14.451844.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-15T18-37-14.451844.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-15T18-37-14.451844.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-15T18-37-14.451844.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-15T18-37-14.451844.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-15T18-37-14.451844.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-15T18-37-14.451844.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-15T18-37-14.451844.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-15T18-37-14.451844.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-15T18-37-14.451844.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-15T18-37-14.451844.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-15T18-37-14.451844.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-15T18-37-14.451844.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-15T18-37-14.451844.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-15T18-37-14.451844.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-15T18-37-14.451844.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-15T18-37-14.451844.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-15T18-37-14.451844.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-15T18-37-14.451844.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-15T18-37-14.451844.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-15T18-37-14.451844.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-15T18-37-14.451844.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-01-15T18-37-14.451844.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-15T18-37-14.451844.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-15T18-37-14.451844.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-15T18-37-14.451844.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-15T18-37-14.451844.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-15T18-37-14.451844.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-15T18-37-14.451844.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-15T18-37-14.451844.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-15T18-37-14.451844.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-15T18-37-14.451844.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-15T18-37-14.451844.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-15T18-37-14.451844.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-15T18-37-14.451844.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-15T18-37-14.451844.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-15T18-37-14.451844.parquet", 
"**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-15T18-37-14.451844.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-15T18-37-14.451844.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-15T18-37-14.451844.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-15T18-37-14.451844.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-15T18-37-14.451844.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-15T18-37-14.451844.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-15T18-37-14.451844.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-15T18-37-14.451844.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-15T18-37-14.451844.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-15T18-37-14.451844.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-15T18-37-14.451844.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-15T18-37-14.451844.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-15T18-37-14.451844.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-15T18-37-14.451844.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-15T18-37-14.451844.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-15T18-37-14.451844.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-15T18-37-14.451844.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-15T18-37-14.451844.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-15T18-37-14.451844.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-15T18-37-14.451844.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-15T18-37-14.451844.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-15T18-37-14.451844.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-15T18-37-14.451844.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-15T18-37-14.451844.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-15T18-37-14.451844.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-15T18-37-14.451844.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-15T18-37-14.451844.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-15T18-37-14.451844.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-15T18-37-14.451844.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-15T18-37-14.451844.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-15T18-37-14.451844.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-15T18-37-14.451844.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-15T18-37-14.451844.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-15T18-37-14.451844.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2024_01_15T18_27_33.218553", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-15T18-27-33.218553.parquet"]}, {"split": "2024_01_15T18_37_14.451844", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-15T18-37-14.451844.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-15T18-37-14.451844.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2024_01_15T18_27_33.218553", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-15T18-27-33.218553.parquet"]}, {"split": "2024_01_15T18_37_14.451844", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-15T18-37-14.451844.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-15T18-37-14.451844.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2024_01_15T18_27_33.218553", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-15T18-27-33.218553.parquet"]}, {"split": "2024_01_15T18_37_14.451844", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-15T18-37-14.451844.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-15T18-37-14.451844.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2024_01_15T18_27_33.218553", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-15T18-27-33.218553.parquet"]}, {"split": "2024_01_15T18_37_14.451844", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-15T18-37-14.451844.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-15T18-37-14.451844.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2024_01_15T18_27_33.218553", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-15T18-27-33.218553.parquet"]}, {"split": "2024_01_15T18_37_14.451844", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-15T18-37-14.451844.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-15T18-37-14.451844.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2024_01_15T18_27_33.218553", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-15T18-27-33.218553.parquet"]}, {"split": "2024_01_15T18_37_14.451844", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-15T18-37-14.451844.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-15T18-37-14.451844.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2024_01_15T18_27_33.218553", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-15T18-27-33.218553.parquet"]}, {"split": "2024_01_15T18_37_14.451844", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-15T18-37-14.451844.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-15T18-37-14.451844.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2024_01_15T18_27_33.218553", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-15T18-27-33.218553.parquet"]}, {"split": "2024_01_15T18_37_14.451844", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-15T18-37-14.451844.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-15T18-37-14.451844.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2024_01_15T18_27_33.218553", "path": 
["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-15T18-27-33.218553.parquet"]}, {"split": "2024_01_15T18_37_14.451844", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-15T18-37-14.451844.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-15T18-37-14.451844.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2024_01_15T18_27_33.218553", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-15T18-27-33.218553.parquet"]}, {"split": "2024_01_15T18_37_14.451844", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-15T18-37-14.451844.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-15T18-37-14.451844.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2024_01_15T18_27_33.218553", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-15T18-27-33.218553.parquet"]}, {"split": "2024_01_15T18_37_14.451844", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-15T18-37-14.451844.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-15T18-37-14.451844.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2024_01_15T18_27_33.218553", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-15T18-27-33.218553.parquet"]}, {"split": "2024_01_15T18_37_14.451844", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-15T18-37-14.451844.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-15T18-37-14.451844.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2024_01_15T18_27_33.218553", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-15T18-27-33.218553.parquet"]}, {"split": "2024_01_15T18_37_14.451844", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-15T18-37-14.451844.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-15T18-37-14.451844.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2024_01_15T18_27_33.218553", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-15T18-27-33.218553.parquet"]}, {"split": "2024_01_15T18_37_14.451844", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-15T18-37-14.451844.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-15T18-37-14.451844.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2024_01_15T18_27_33.218553", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-15T18-27-33.218553.parquet"]}, {"split": "2024_01_15T18_37_14.451844", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-15T18-37-14.451844.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-15T18-37-14.451844.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2024_01_15T18_27_33.218553", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-15T18-27-33.218553.parquet"]}, {"split": "2024_01_15T18_37_14.451844", "path": 
["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-15T18-37-14.451844.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-15T18-37-14.451844.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2024_01_15T18_27_33.218553", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-15T18-27-33.218553.parquet"]}, {"split": "2024_01_15T18_37_14.451844", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-15T18-37-14.451844.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-15T18-37-14.451844.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2024_01_15T18_27_33.218553", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-15T18-27-33.218553.parquet"]}, {"split": "2024_01_15T18_37_14.451844", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-15T18-37-14.451844.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-15T18-37-14.451844.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2024_01_15T18_27_33.218553", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-15T18-27-33.218553.parquet"]}, {"split": "2024_01_15T18_37_14.451844", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-15T18-37-14.451844.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-15T18-37-14.451844.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2024_01_15T18_27_33.218553", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-15T18-27-33.218553.parquet"]}, {"split": "2024_01_15T18_37_14.451844", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-15T18-37-14.451844.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-15T18-37-14.451844.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2024_01_15T18_27_33.218553", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-15T18-27-33.218553.parquet"]}, {"split": "2024_01_15T18_37_14.451844", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-15T18-37-14.451844.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-15T18-37-14.451844.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2024_01_15T18_27_33.218553", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-15T18-27-33.218553.parquet"]}, {"split": "2024_01_15T18_37_14.451844", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-15T18-37-14.451844.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-15T18-37-14.451844.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2024_01_15T18_27_33.218553", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-15T18-27-33.218553.parquet"]}, {"split": "2024_01_15T18_37_14.451844", "path": 
["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-15T18-37-14.451844.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-15T18-37-14.451844.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2024_01_15T18_27_33.218553", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-15T18-27-33.218553.parquet"]}, {"split": "2024_01_15T18_37_14.451844", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-15T18-37-14.451844.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-15T18-37-14.451844.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2024_01_15T18_27_33.218553", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-15T18-27-33.218553.parquet"]}, {"split": "2024_01_15T18_37_14.451844", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-15T18-37-14.451844.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-15T18-37-14.451844.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2024_01_15T18_27_33.218553", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-15T18-27-33.218553.parquet"]}, {"split": "2024_01_15T18_37_14.451844", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-15T18-37-14.451844.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-15T18-37-14.451844.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2024_01_15T18_27_33.218553", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-15T18-27-33.218553.parquet"]}, {"split": "2024_01_15T18_37_14.451844", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-15T18-37-14.451844.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-15T18-37-14.451844.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2024_01_15T18_27_33.218553", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-15T18-27-33.218553.parquet"]}, {"split": "2024_01_15T18_37_14.451844", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-15T18-37-14.451844.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-15T18-37-14.451844.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2024_01_15T18_27_33.218553", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-15T18-27-33.218553.parquet"]}, {"split": "2024_01_15T18_37_14.451844", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-15T18-37-14.451844.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-15T18-37-14.451844.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2024_01_15T18_27_33.218553", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-15T18-27-33.218553.parquet"]}, 
{"split": "2024_01_15T18_37_14.451844", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-15T18-37-14.451844.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-15T18-37-14.451844.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2024_01_15T18_27_33.218553", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-15T18-27-33.218553.parquet"]}, {"split": "2024_01_15T18_37_14.451844", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-15T18-37-14.451844.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-15T18-37-14.451844.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2024_01_15T18_27_33.218553", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-15T18-27-33.218553.parquet"]}, {"split": "2024_01_15T18_37_14.451844", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-15T18-37-14.451844.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-15T18-37-14.451844.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2024_01_15T18_27_33.218553", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-15T18-27-33.218553.parquet"]}, {"split": "2024_01_15T18_37_14.451844", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-15T18-37-14.451844.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-15T18-37-14.451844.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2024_01_15T18_27_33.218553", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-15T18-27-33.218553.parquet"]}, {"split": "2024_01_15T18_37_14.451844", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-15T18-37-14.451844.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-15T18-37-14.451844.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2024_01_15T18_27_33.218553", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-15T18-27-33.218553.parquet"]}, {"split": "2024_01_15T18_37_14.451844", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-15T18-37-14.451844.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-15T18-37-14.451844.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2024_01_15T18_27_33.218553", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-15T18-27-33.218553.parquet"]}, {"split": "2024_01_15T18_37_14.451844", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-15T18-37-14.451844.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-15T18-37-14.451844.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2024_01_15T18_27_33.218553", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-15T18-27-33.218553.parquet"]}, {"split": "2024_01_15T18_37_14.451844", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-15T18-37-14.451844.parquet"]}, {"split": 
"latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-15T18-37-14.451844.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2024_01_15T18_27_33.218553", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-15T18-27-33.218553.parquet"]}, {"split": "2024_01_15T18_37_14.451844", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-15T18-37-14.451844.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-15T18-37-14.451844.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2024_01_15T18_27_33.218553", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-15T18-27-33.218553.parquet"]}, {"split": "2024_01_15T18_37_14.451844", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-15T18-37-14.451844.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-15T18-37-14.451844.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2024_01_15T18_27_33.218553", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-15T18-27-33.218553.parquet"]}, {"split": "2024_01_15T18_37_14.451844", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-15T18-37-14.451844.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-15T18-37-14.451844.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2024_01_15T18_27_33.218553", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-15T18-27-33.218553.parquet"]}, {"split": "2024_01_15T18_37_14.451844", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-15T18-37-14.451844.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-15T18-37-14.451844.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2024_01_15T18_27_33.218553", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-15T18-27-33.218553.parquet"]}, {"split": "2024_01_15T18_37_14.451844", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-15T18-37-14.451844.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-15T18-37-14.451844.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2024_01_15T18_27_33.218553", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-15T18-27-33.218553.parquet"]}, {"split": "2024_01_15T18_37_14.451844", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-15T18-37-14.451844.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-15T18-37-14.451844.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2024_01_15T18_27_33.218553", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-15T18-27-33.218553.parquet"]}, {"split": "2024_01_15T18_37_14.451844", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-15T18-37-14.451844.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-15T18-37-14.451844.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2024_01_15T18_27_33.218553", "path": 
["**/details_harness|hendrycksTest-nutrition|5_2024-01-15T18-27-33.218553.parquet"]}, {"split": "2024_01_15T18_37_14.451844", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-15T18-37-14.451844.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-15T18-37-14.451844.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2024_01_15T18_27_33.218553", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-15T18-27-33.218553.parquet"]}, {"split": "2024_01_15T18_37_14.451844", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-15T18-37-14.451844.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-15T18-37-14.451844.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2024_01_15T18_27_33.218553", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-15T18-27-33.218553.parquet"]}, {"split": "2024_01_15T18_37_14.451844", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-15T18-37-14.451844.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-15T18-37-14.451844.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2024_01_15T18_27_33.218553", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-15T18-27-33.218553.parquet"]}, {"split": "2024_01_15T18_37_14.451844", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-15T18-37-14.451844.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-15T18-37-14.451844.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2024_01_15T18_27_33.218553", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-01-15T18-27-33.218553.parquet"]}, {"split": "2024_01_15T18_37_14.451844", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-01-15T18-37-14.451844.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-01-15T18-37-14.451844.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2024_01_15T18_27_33.218553", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-15T18-27-33.218553.parquet"]}, {"split": "2024_01_15T18_37_14.451844", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-15T18-37-14.451844.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-15T18-37-14.451844.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2024_01_15T18_27_33.218553", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-15T18-27-33.218553.parquet"]}, {"split": "2024_01_15T18_37_14.451844", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-15T18-37-14.451844.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-15T18-37-14.451844.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2024_01_15T18_27_33.218553", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-15T18-27-33.218553.parquet"]}, {"split": "2024_01_15T18_37_14.451844", "path": 
["**/details_harness|hendrycksTest-public_relations|5_2024-01-15T18-37-14.451844.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-15T18-37-14.451844.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2024_01_15T18_27_33.218553", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-15T18-27-33.218553.parquet"]}, {"split": "2024_01_15T18_37_14.451844", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-15T18-37-14.451844.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-15T18-37-14.451844.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2024_01_15T18_27_33.218553", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-15T18-27-33.218553.parquet"]}, {"split": "2024_01_15T18_37_14.451844", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-15T18-37-14.451844.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-15T18-37-14.451844.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2024_01_15T18_27_33.218553", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-15T18-27-33.218553.parquet"]}, {"split": "2024_01_15T18_37_14.451844", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-15T18-37-14.451844.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-15T18-37-14.451844.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2024_01_15T18_27_33.218553", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-15T18-27-33.218553.parquet"]}, {"split": "2024_01_15T18_37_14.451844", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-15T18-37-14.451844.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-15T18-37-14.451844.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2024_01_15T18_27_33.218553", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-15T18-27-33.218553.parquet"]}, {"split": "2024_01_15T18_37_14.451844", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-15T18-37-14.451844.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-15T18-37-14.451844.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2024_01_15T18_27_33.218553", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-15T18-27-33.218553.parquet"]}, {"split": "2024_01_15T18_37_14.451844", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-15T18-37-14.451844.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-15T18-37-14.451844.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2024_01_15T18_27_33.218553", "path": ["**/details_harness|winogrande|5_2024-01-15T18-27-33.218553.parquet"]}, {"split": "2024_01_15T18_37_14.451844", "path": ["**/details_harness|winogrande|5_2024-01-15T18-37-14.451844.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2024-01-15T18-37-14.451844.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2024_01_15T18_27_33.218553", "path": ["results_2024-01-15T18-27-33.218553.parquet"]}, {"split": "2024_01_15T18_37_14.451844", "path": 
["results_2024-01-15T18-37-14.451844.parquet"]}, {"split": "latest", "path": ["results_2024-01-15T18-37-14.451844.parquet"]}]}]} | 2024-01-15T18:39:48+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of BAAI/Aquila2-34B
Dataset automatically created during the evaluation run of model BAAI/Aquila2-34B on the Open LLM Leaderboard.
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
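The code block that normally accompanies this sentence is missing from this record. Below is a minimal sketch, assuming the repository follows the `details_<org>__<model>` naming convention used elsewhere in this dump and that `harness_winogrande_5` is one of the 63 configurations:

```python
from datasets import load_dataset

# Load one configuration of the evaluation details; the repository id and
# config name are assumptions based on the leaderboard's naming scheme.
data = load_dataset("open-llm-leaderboard/details_BAAI__Aquila2-34B",
	"harness_winogrande_5",
	split="train")
```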
## Latest results
These are the latest results from run 2024-01-15T18:37:14.451844 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
## Dataset Details
### Dataset Description
- Curated by:
- Funded by [optional]:
- Shared by [optional]:
- Language(s) (NLP):
- License:
### Dataset Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Out-of-Scope Use
## Dataset Structure
## Dataset Creation
### Curation Rationale
### Source Data
#### Data Collection and Processing
#### Who are the source data producers?
### Annotations [optional]
#### Annotation process
#### Who are the annotators?
#### Personal and Sensitive Information
## Bias, Risks, and Limitations
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Dataset Card Authors [optional]
## Dataset Card Contact
363c82de258bbc0003da1004cfd83357c82dc3e6 |
# Dataset Card for Evaluation run of amu/zen_moe
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [amu/zen_moe](https://huggingface.co/amu/zen_moe) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_amu__zen_moe",
"harness_winogrande_5",
split="train")
```
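If you are unsure which of the 63 configurations to pass, you can enumerate them first. A short sketch using `get_dataset_config_names` from the same `datasets` library:

```python
from datasets import get_dataset_config_names

# Lists every available configuration, e.g. "harness_arc_challenge_25",
# "harness_gsm8k_5", ..., "results".
configs = get_dataset_config_names("open-llm-leaderboard/details_amu__zen_moe")
print(configs)
```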
## Latest results
These are the [latest results from run 2024-01-15T18:30:04.924661](https://huggingface.co/datasets/open-llm-leaderboard/details_amu__zen_moe/blob/main/results_2024-01-15T18-30-04.924661.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.6498949124842508,
"acc_stderr": 0.03195908927814425,
"acc_norm": 0.6508188568135655,
"acc_norm_stderr": 0.03260982117495184,
"mc1": 0.33659730722154224,
"mc1_stderr": 0.01654241280949489,
"mc2": 0.5002751167484623,
"mc2_stderr": 0.015275798061349592
},
"harness|arc:challenge|25": {
"acc": 0.6015358361774744,
"acc_stderr": 0.014306946052735562,
"acc_norm": 0.6382252559726962,
"acc_norm_stderr": 0.014041957945038076
},
"harness|hellaswag|10": {
"acc": 0.6683927504481179,
"acc_stderr": 0.004698285350019217,
"acc_norm": 0.8505277833100976,
"acc_norm_stderr": 0.003558246300379053
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.37,
"acc_stderr": 0.04852365870939099,
"acc_norm": 0.37,
"acc_norm_stderr": 0.04852365870939099
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6296296296296297,
"acc_stderr": 0.041716541613545426,
"acc_norm": 0.6296296296296297,
"acc_norm_stderr": 0.041716541613545426
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.6578947368421053,
"acc_stderr": 0.03860731599316092,
"acc_norm": 0.6578947368421053,
"acc_norm_stderr": 0.03860731599316092
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.66,
"acc_stderr": 0.04760952285695237,
"acc_norm": 0.66,
"acc_norm_stderr": 0.04760952285695237
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.690566037735849,
"acc_stderr": 0.028450154794118637,
"acc_norm": 0.690566037735849,
"acc_norm_stderr": 0.028450154794118637
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7569444444444444,
"acc_stderr": 0.03586879280080341,
"acc_norm": 0.7569444444444444,
"acc_norm_stderr": 0.03586879280080341
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.52,
"acc_stderr": 0.050211673156867795,
"acc_norm": 0.52,
"acc_norm_stderr": 0.050211673156867795
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.58,
"acc_stderr": 0.049604496374885836,
"acc_norm": 0.58,
"acc_norm_stderr": 0.049604496374885836
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.33,
"acc_stderr": 0.04725815626252604,
"acc_norm": 0.33,
"acc_norm_stderr": 0.04725815626252604
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6820809248554913,
"acc_stderr": 0.0355068398916558,
"acc_norm": 0.6820809248554913,
"acc_norm_stderr": 0.0355068398916558
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.43137254901960786,
"acc_stderr": 0.04928099597287534,
"acc_norm": 0.43137254901960786,
"acc_norm_stderr": 0.04928099597287534
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.74,
"acc_stderr": 0.04408440022768078,
"acc_norm": 0.74,
"acc_norm_stderr": 0.04408440022768078
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5574468085106383,
"acc_stderr": 0.03246956919789958,
"acc_norm": 0.5574468085106383,
"acc_norm_stderr": 0.03246956919789958
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.49122807017543857,
"acc_stderr": 0.047028804320496165,
"acc_norm": 0.49122807017543857,
"acc_norm_stderr": 0.047028804320496165
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5655172413793104,
"acc_stderr": 0.04130740879555497,
"acc_norm": 0.5655172413793104,
"acc_norm_stderr": 0.04130740879555497
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.3783068783068783,
"acc_stderr": 0.024976954053155254,
"acc_norm": 0.3783068783068783,
"acc_norm_stderr": 0.024976954053155254
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.49206349206349204,
"acc_stderr": 0.044715725362943486,
"acc_norm": 0.49206349206349204,
"acc_norm_stderr": 0.044715725362943486
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.26,
"acc_stderr": 0.04408440022768078,
"acc_norm": 0.26,
"acc_norm_stderr": 0.04408440022768078
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.8096774193548387,
"acc_stderr": 0.02233170761182307,
"acc_norm": 0.8096774193548387,
"acc_norm_stderr": 0.02233170761182307
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.4975369458128079,
"acc_stderr": 0.03517945038691063,
"acc_norm": 0.4975369458128079,
"acc_norm_stderr": 0.03517945038691063
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.71,
"acc_stderr": 0.045604802157206845,
"acc_norm": 0.71,
"acc_norm_stderr": 0.045604802157206845
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7757575757575758,
"acc_stderr": 0.03256866661681102,
"acc_norm": 0.7757575757575758,
"acc_norm_stderr": 0.03256866661681102
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7878787878787878,
"acc_stderr": 0.029126522834586815,
"acc_norm": 0.7878787878787878,
"acc_norm_stderr": 0.029126522834586815
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.9067357512953368,
"acc_stderr": 0.02098685459328973,
"acc_norm": 0.9067357512953368,
"acc_norm_stderr": 0.02098685459328973
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6641025641025641,
"acc_stderr": 0.023946724741563973,
"acc_norm": 0.6641025641025641,
"acc_norm_stderr": 0.023946724741563973
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.35185185185185186,
"acc_stderr": 0.029116617606083015,
"acc_norm": 0.35185185185185186,
"acc_norm_stderr": 0.029116617606083015
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6848739495798319,
"acc_stderr": 0.030176808288974337,
"acc_norm": 0.6848739495798319,
"acc_norm_stderr": 0.030176808288974337
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.33774834437086093,
"acc_stderr": 0.03861557546255169,
"acc_norm": 0.33774834437086093,
"acc_norm_stderr": 0.03861557546255169
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8495412844036697,
"acc_stderr": 0.015328563932669237,
"acc_norm": 0.8495412844036697,
"acc_norm_stderr": 0.015328563932669237
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5370370370370371,
"acc_stderr": 0.03400603625538272,
"acc_norm": 0.5370370370370371,
"acc_norm_stderr": 0.03400603625538272
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.8137254901960784,
"acc_stderr": 0.027325470966716312,
"acc_norm": 0.8137254901960784,
"acc_norm_stderr": 0.027325470966716312
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.8059071729957806,
"acc_stderr": 0.02574490253229092,
"acc_norm": 0.8059071729957806,
"acc_norm_stderr": 0.02574490253229092
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6905829596412556,
"acc_stderr": 0.03102441174057221,
"acc_norm": 0.6905829596412556,
"acc_norm_stderr": 0.03102441174057221
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7404580152671756,
"acc_stderr": 0.03844876139785271,
"acc_norm": 0.7404580152671756,
"acc_norm_stderr": 0.03844876139785271
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.7933884297520661,
"acc_stderr": 0.03695980128098823,
"acc_norm": 0.7933884297520661,
"acc_norm_stderr": 0.03695980128098823
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7777777777777778,
"acc_stderr": 0.040191074725573483,
"acc_norm": 0.7777777777777778,
"acc_norm_stderr": 0.040191074725573483
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7730061349693251,
"acc_stderr": 0.03291099578615769,
"acc_norm": 0.7730061349693251,
"acc_norm_stderr": 0.03291099578615769
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.45535714285714285,
"acc_stderr": 0.047268355537191,
"acc_norm": 0.45535714285714285,
"acc_norm_stderr": 0.047268355537191
},
"harness|hendrycksTest-management|5": {
"acc": 0.8349514563106796,
"acc_stderr": 0.036756688322331886,
"acc_norm": 0.8349514563106796,
"acc_norm_stderr": 0.036756688322331886
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8846153846153846,
"acc_stderr": 0.020930193185179333,
"acc_norm": 0.8846153846153846,
"acc_norm_stderr": 0.020930193185179333
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.75,
"acc_stderr": 0.04351941398892446,
"acc_norm": 0.75,
"acc_norm_stderr": 0.04351941398892446
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8173690932311622,
"acc_stderr": 0.013816335389973141,
"acc_norm": 0.8173690932311622,
"acc_norm_stderr": 0.013816335389973141
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7167630057803468,
"acc_stderr": 0.02425790170532338,
"acc_norm": 0.7167630057803468,
"acc_norm_stderr": 0.02425790170532338
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.38324022346368714,
"acc_stderr": 0.016260159604429128,
"acc_norm": 0.38324022346368714,
"acc_norm_stderr": 0.016260159604429128
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7222222222222222,
"acc_stderr": 0.0256468630971379,
"acc_norm": 0.7222222222222222,
"acc_norm_stderr": 0.0256468630971379
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.707395498392283,
"acc_stderr": 0.02583989833487798,
"acc_norm": 0.707395498392283,
"acc_norm_stderr": 0.02583989833487798
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7345679012345679,
"acc_stderr": 0.024569223600460845,
"acc_norm": 0.7345679012345679,
"acc_norm_stderr": 0.024569223600460845
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.475177304964539,
"acc_stderr": 0.02979071924382972,
"acc_norm": 0.475177304964539,
"acc_norm_stderr": 0.02979071924382972
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.46740547588005216,
"acc_stderr": 0.012743072942653358,
"acc_norm": 0.46740547588005216,
"acc_norm_stderr": 0.012743072942653358
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.7058823529411765,
"acc_stderr": 0.027678468642144714,
"acc_norm": 0.7058823529411765,
"acc_norm_stderr": 0.027678468642144714
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6715686274509803,
"acc_stderr": 0.01899970738316267,
"acc_norm": 0.6715686274509803,
"acc_norm_stderr": 0.01899970738316267
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6818181818181818,
"acc_stderr": 0.04461272175910509,
"acc_norm": 0.6818181818181818,
"acc_norm_stderr": 0.04461272175910509
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7061224489795919,
"acc_stderr": 0.029162738410249772,
"acc_norm": 0.7061224489795919,
"acc_norm_stderr": 0.029162738410249772
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8208955223880597,
"acc_stderr": 0.027113286753111837,
"acc_norm": 0.8208955223880597,
"acc_norm_stderr": 0.027113286753111837
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.89,
"acc_stderr": 0.03144660377352203,
"acc_norm": 0.89,
"acc_norm_stderr": 0.03144660377352203
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5180722891566265,
"acc_stderr": 0.03889951252827216,
"acc_norm": 0.5180722891566265,
"acc_norm_stderr": 0.03889951252827216
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8654970760233918,
"acc_stderr": 0.0261682213446623,
"acc_norm": 0.8654970760233918,
"acc_norm_stderr": 0.0261682213446623
},
"harness|truthfulqa:mc|0": {
"mc1": 0.33659730722154224,
"mc1_stderr": 0.01654241280949489,
"mc2": 0.5002751167484623,
"mc2_stderr": 0.015275798061349592
},
"harness|winogrande|5": {
"acc": 0.8105761641673244,
"acc_stderr": 0.011012790432989247
},
"harness|gsm8k|5": {
"acc": 0.6535253980288097,
"acc_stderr": 0.013107179054313403
}
}
```
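If you prefer to inspect these numbers programmatically rather than reading the JSON above, you can load the aggregated `results` configuration of this repository, or any of the per-task configurations (named `harness_<task>_<n_shots>`, matching the task keys above). The snippet below is a minimal sketch; it assumes the usual layout of these leaderboard detail repositories, where each configuration exposes a timestamped split plus a `latest` split pointing at the most recent run.

```python
from datasets import load_dataset

# Aggregated metrics (the JSON above) for the most recent evaluation run.
results = load_dataset(
    "open-llm-leaderboard/details_amu__zen_moe",
    "results",
    split="latest",
)

# Per-task details, e.g. the 5-shot GSM8K samples.
gsm8k_details = load_dataset(
    "open-llm-leaderboard/details_amu__zen_moe",
    "harness_gsm8k_5",
    split="latest",
)

# Inspect which fields the aggregated results expose.
print(results[0].keys())
```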
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[email protected]
"region:us"
] | 2024-01-15T18:32:22+00:00 | {"pretty_name": "Evaluation run of amu/zen_moe", "dataset_summary": "Dataset automatically created during the evaluation run of model [amu/zen_moe](https://huggingface.co/amu/zen_moe) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_amu__zen_moe\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2024-01-15T18:30:04.924661](https://huggingface.co/datasets/open-llm-leaderboard/details_amu__zen_moe/blob/main/results_2024-01-15T18-30-04.924661.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6498949124842508,\n \"acc_stderr\": 0.03195908927814425,\n \"acc_norm\": 0.6508188568135655,\n \"acc_norm_stderr\": 0.03260982117495184,\n \"mc1\": 0.33659730722154224,\n \"mc1_stderr\": 0.01654241280949489,\n \"mc2\": 0.5002751167484623,\n \"mc2_stderr\": 0.015275798061349592\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.6015358361774744,\n \"acc_stderr\": 0.014306946052735562,\n \"acc_norm\": 0.6382252559726962,\n \"acc_norm_stderr\": 0.014041957945038076\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6683927504481179,\n \"acc_stderr\": 0.004698285350019217,\n \"acc_norm\": 0.8505277833100976,\n \"acc_norm_stderr\": 0.003558246300379053\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.37,\n \"acc_stderr\": 0.04852365870939099,\n \"acc_norm\": 0.37,\n \"acc_norm_stderr\": 0.04852365870939099\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6296296296296297,\n \"acc_stderr\": 0.041716541613545426,\n \"acc_norm\": 0.6296296296296297,\n \"acc_norm_stderr\": 0.041716541613545426\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.6578947368421053,\n \"acc_stderr\": 0.03860731599316092,\n \"acc_norm\": 0.6578947368421053,\n \"acc_norm_stderr\": 0.03860731599316092\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.66,\n \"acc_stderr\": 0.04760952285695237,\n \"acc_norm\": 0.66,\n \"acc_norm_stderr\": 0.04760952285695237\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.690566037735849,\n \"acc_stderr\": 0.028450154794118637,\n \"acc_norm\": 0.690566037735849,\n \"acc_norm_stderr\": 0.028450154794118637\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7569444444444444,\n \"acc_stderr\": 0.03586879280080341,\n \"acc_norm\": 0.7569444444444444,\n \"acc_norm_stderr\": 0.03586879280080341\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.52,\n \"acc_stderr\": 0.050211673156867795,\n \"acc_norm\": 0.52,\n \"acc_norm_stderr\": 0.050211673156867795\n 
},\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.58,\n \"acc_stderr\": 0.049604496374885836,\n \"acc_norm\": 0.58,\n \"acc_norm_stderr\": 0.049604496374885836\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.33,\n \"acc_stderr\": 0.04725815626252604,\n \"acc_norm\": 0.33,\n \"acc_norm_stderr\": 0.04725815626252604\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6820809248554913,\n \"acc_stderr\": 0.0355068398916558,\n \"acc_norm\": 0.6820809248554913,\n \"acc_norm_stderr\": 0.0355068398916558\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.43137254901960786,\n \"acc_stderr\": 0.04928099597287534,\n \"acc_norm\": 0.43137254901960786,\n \"acc_norm_stderr\": 0.04928099597287534\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.74,\n \"acc_stderr\": 0.04408440022768078,\n \"acc_norm\": 0.74,\n \"acc_norm_stderr\": 0.04408440022768078\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.5574468085106383,\n \"acc_stderr\": 0.03246956919789958,\n \"acc_norm\": 0.5574468085106383,\n \"acc_norm_stderr\": 0.03246956919789958\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.49122807017543857,\n \"acc_stderr\": 0.047028804320496165,\n \"acc_norm\": 0.49122807017543857,\n \"acc_norm_stderr\": 0.047028804320496165\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.5655172413793104,\n \"acc_stderr\": 0.04130740879555497,\n \"acc_norm\": 0.5655172413793104,\n \"acc_norm_stderr\": 0.04130740879555497\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.3783068783068783,\n \"acc_stderr\": 0.024976954053155254,\n \"acc_norm\": 0.3783068783068783,\n \"acc_norm_stderr\": 0.024976954053155254\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.49206349206349204,\n \"acc_stderr\": 0.044715725362943486,\n \"acc_norm\": 0.49206349206349204,\n \"acc_norm_stderr\": 0.044715725362943486\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.26,\n \"acc_stderr\": 0.04408440022768078,\n \"acc_norm\": 0.26,\n \"acc_norm_stderr\": 0.04408440022768078\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.8096774193548387,\n \"acc_stderr\": 0.02233170761182307,\n \"acc_norm\": 0.8096774193548387,\n \"acc_norm_stderr\": 0.02233170761182307\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.4975369458128079,\n \"acc_stderr\": 0.03517945038691063,\n \"acc_norm\": 0.4975369458128079,\n \"acc_norm_stderr\": 0.03517945038691063\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.71,\n \"acc_stderr\": 0.045604802157206845,\n \"acc_norm\": 0.71,\n \"acc_norm_stderr\": 0.045604802157206845\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.7757575757575758,\n \"acc_stderr\": 0.03256866661681102,\n \"acc_norm\": 0.7757575757575758,\n \"acc_norm_stderr\": 0.03256866661681102\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.7878787878787878,\n \"acc_stderr\": 0.029126522834586815,\n \"acc_norm\": 0.7878787878787878,\n \"acc_norm_stderr\": 0.029126522834586815\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.9067357512953368,\n \"acc_stderr\": 0.02098685459328973,\n \"acc_norm\": 0.9067357512953368,\n \"acc_norm_stderr\": 0.02098685459328973\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.6641025641025641,\n \"acc_stderr\": 0.023946724741563973,\n 
\"acc_norm\": 0.6641025641025641,\n \"acc_norm_stderr\": 0.023946724741563973\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.35185185185185186,\n \"acc_stderr\": 0.029116617606083015,\n \"acc_norm\": 0.35185185185185186,\n \"acc_norm_stderr\": 0.029116617606083015\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.6848739495798319,\n \"acc_stderr\": 0.030176808288974337,\n \"acc_norm\": 0.6848739495798319,\n \"acc_norm_stderr\": 0.030176808288974337\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.33774834437086093,\n \"acc_stderr\": 0.03861557546255169,\n \"acc_norm\": 0.33774834437086093,\n \"acc_norm_stderr\": 0.03861557546255169\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.8495412844036697,\n \"acc_stderr\": 0.015328563932669237,\n \"acc_norm\": 0.8495412844036697,\n \"acc_norm_stderr\": 0.015328563932669237\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.5370370370370371,\n \"acc_stderr\": 0.03400603625538272,\n \"acc_norm\": 0.5370370370370371,\n \"acc_norm_stderr\": 0.03400603625538272\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.8137254901960784,\n \"acc_stderr\": 0.027325470966716312,\n \"acc_norm\": 0.8137254901960784,\n \"acc_norm_stderr\": 0.027325470966716312\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.8059071729957806,\n \"acc_stderr\": 0.02574490253229092,\n \"acc_norm\": 0.8059071729957806,\n \"acc_norm_stderr\": 0.02574490253229092\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6905829596412556,\n \"acc_stderr\": 0.03102441174057221,\n \"acc_norm\": 0.6905829596412556,\n \"acc_norm_stderr\": 0.03102441174057221\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.7404580152671756,\n \"acc_stderr\": 0.03844876139785271,\n \"acc_norm\": 0.7404580152671756,\n \"acc_norm_stderr\": 0.03844876139785271\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.7933884297520661,\n \"acc_stderr\": 0.03695980128098823,\n \"acc_norm\": 0.7933884297520661,\n \"acc_norm_stderr\": 0.03695980128098823\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7777777777777778,\n \"acc_stderr\": 0.040191074725573483,\n \"acc_norm\": 0.7777777777777778,\n \"acc_norm_stderr\": 0.040191074725573483\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.7730061349693251,\n \"acc_stderr\": 0.03291099578615769,\n \"acc_norm\": 0.7730061349693251,\n \"acc_norm_stderr\": 0.03291099578615769\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.45535714285714285,\n \"acc_stderr\": 0.047268355537191,\n \"acc_norm\": 0.45535714285714285,\n \"acc_norm_stderr\": 0.047268355537191\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.8349514563106796,\n \"acc_stderr\": 0.036756688322331886,\n \"acc_norm\": 0.8349514563106796,\n \"acc_norm_stderr\": 0.036756688322331886\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8846153846153846,\n \"acc_stderr\": 0.020930193185179333,\n \"acc_norm\": 0.8846153846153846,\n \"acc_norm_stderr\": 0.020930193185179333\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.75,\n \"acc_stderr\": 0.04351941398892446,\n \"acc_norm\": 0.75,\n \"acc_norm_stderr\": 0.04351941398892446\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8173690932311622,\n \"acc_stderr\": 0.013816335389973141,\n \"acc_norm\": 0.8173690932311622,\n \"acc_norm_stderr\": 
0.013816335389973141\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.7167630057803468,\n \"acc_stderr\": 0.02425790170532338,\n \"acc_norm\": 0.7167630057803468,\n \"acc_norm_stderr\": 0.02425790170532338\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.38324022346368714,\n \"acc_stderr\": 0.016260159604429128,\n \"acc_norm\": 0.38324022346368714,\n \"acc_norm_stderr\": 0.016260159604429128\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.7222222222222222,\n \"acc_stderr\": 0.0256468630971379,\n \"acc_norm\": 0.7222222222222222,\n \"acc_norm_stderr\": 0.0256468630971379\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.707395498392283,\n \"acc_stderr\": 0.02583989833487798,\n \"acc_norm\": 0.707395498392283,\n \"acc_norm_stderr\": 0.02583989833487798\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.7345679012345679,\n \"acc_stderr\": 0.024569223600460845,\n \"acc_norm\": 0.7345679012345679,\n \"acc_norm_stderr\": 0.024569223600460845\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.475177304964539,\n \"acc_stderr\": 0.02979071924382972,\n \"acc_norm\": 0.475177304964539,\n \"acc_norm_stderr\": 0.02979071924382972\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.46740547588005216,\n \"acc_stderr\": 0.012743072942653358,\n \"acc_norm\": 0.46740547588005216,\n \"acc_norm_stderr\": 0.012743072942653358\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.7058823529411765,\n \"acc_stderr\": 0.027678468642144714,\n \"acc_norm\": 0.7058823529411765,\n \"acc_norm_stderr\": 0.027678468642144714\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.6715686274509803,\n \"acc_stderr\": 0.01899970738316267,\n \"acc_norm\": 0.6715686274509803,\n \"acc_norm_stderr\": 0.01899970738316267\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6818181818181818,\n \"acc_stderr\": 0.04461272175910509,\n \"acc_norm\": 0.6818181818181818,\n \"acc_norm_stderr\": 0.04461272175910509\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.7061224489795919,\n \"acc_stderr\": 0.029162738410249772,\n \"acc_norm\": 0.7061224489795919,\n \"acc_norm_stderr\": 0.029162738410249772\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8208955223880597,\n \"acc_stderr\": 0.027113286753111837,\n \"acc_norm\": 0.8208955223880597,\n \"acc_norm_stderr\": 0.027113286753111837\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.89,\n \"acc_stderr\": 0.03144660377352203,\n \"acc_norm\": 0.89,\n \"acc_norm_stderr\": 0.03144660377352203\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5180722891566265,\n \"acc_stderr\": 0.03889951252827216,\n \"acc_norm\": 0.5180722891566265,\n \"acc_norm_stderr\": 0.03889951252827216\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.8654970760233918,\n \"acc_stderr\": 0.0261682213446623,\n \"acc_norm\": 0.8654970760233918,\n \"acc_norm_stderr\": 0.0261682213446623\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.33659730722154224,\n \"mc1_stderr\": 0.01654241280949489,\n \"mc2\": 0.5002751167484623,\n \"mc2_stderr\": 0.015275798061349592\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.8105761641673244,\n \"acc_stderr\": 0.011012790432989247\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.6535253980288097,\n \"acc_stderr\": 0.013107179054313403\n }\n}\n```", "repo_url": "https://huggingface.co/amu/zen_moe", "leaderboard_url": 
"https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2024_01_15T18_30_04.924661", "path": ["**/details_harness|arc:challenge|25_2024-01-15T18-30-04.924661.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2024-01-15T18-30-04.924661.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2024_01_15T18_30_04.924661", "path": ["**/details_harness|gsm8k|5_2024-01-15T18-30-04.924661.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2024-01-15T18-30-04.924661.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2024_01_15T18_30_04.924661", "path": ["**/details_harness|hellaswag|10_2024-01-15T18-30-04.924661.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2024-01-15T18-30-04.924661.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2024_01_15T18_30_04.924661", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-15T18-30-04.924661.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-15T18-30-04.924661.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-15T18-30-04.924661.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-15T18-30-04.924661.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-15T18-30-04.924661.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-15T18-30-04.924661.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-15T18-30-04.924661.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-15T18-30-04.924661.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-01-15T18-30-04.924661.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-15T18-30-04.924661.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-15T18-30-04.924661.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-15T18-30-04.924661.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-15T18-30-04.924661.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-15T18-30-04.924661.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-15T18-30-04.924661.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-15T18-30-04.924661.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-15T18-30-04.924661.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-15T18-30-04.924661.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-15T18-30-04.924661.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-15T18-30-04.924661.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-15T18-30-04.924661.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-15T18-30-04.924661.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-15T18-30-04.924661.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-15T18-30-04.924661.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-15T18-30-04.924661.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-15T18-30-04.924661.parquet", 
"**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-15T18-30-04.924661.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-15T18-30-04.924661.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-15T18-30-04.924661.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-15T18-30-04.924661.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-15T18-30-04.924661.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-15T18-30-04.924661.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-15T18-30-04.924661.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-15T18-30-04.924661.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-15T18-30-04.924661.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-15T18-30-04.924661.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-15T18-30-04.924661.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-15T18-30-04.924661.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-15T18-30-04.924661.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-15T18-30-04.924661.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-15T18-30-04.924661.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-15T18-30-04.924661.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-15T18-30-04.924661.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-15T18-30-04.924661.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-15T18-30-04.924661.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-15T18-30-04.924661.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-15T18-30-04.924661.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-15T18-30-04.924661.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-15T18-30-04.924661.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-15T18-30-04.924661.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-15T18-30-04.924661.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-15T18-30-04.924661.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-15T18-30-04.924661.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-15T18-30-04.924661.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-15T18-30-04.924661.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-15T18-30-04.924661.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-15T18-30-04.924661.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-15T18-30-04.924661.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-15T18-30-04.924661.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-15T18-30-04.924661.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-15T18-30-04.924661.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-15T18-30-04.924661.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-15T18-30-04.924661.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-15T18-30-04.924661.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-15T18-30-04.924661.parquet", 
"**/details_harness|hendrycksTest-college_mathematics|5_2024-01-15T18-30-04.924661.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-15T18-30-04.924661.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-15T18-30-04.924661.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-15T18-30-04.924661.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-15T18-30-04.924661.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-15T18-30-04.924661.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-15T18-30-04.924661.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-15T18-30-04.924661.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-15T18-30-04.924661.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-15T18-30-04.924661.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-15T18-30-04.924661.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-15T18-30-04.924661.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-15T18-30-04.924661.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-15T18-30-04.924661.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-15T18-30-04.924661.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-15T18-30-04.924661.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-15T18-30-04.924661.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-15T18-30-04.924661.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-15T18-30-04.924661.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-15T18-30-04.924661.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-15T18-30-04.924661.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-15T18-30-04.924661.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-15T18-30-04.924661.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-15T18-30-04.924661.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-15T18-30-04.924661.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-15T18-30-04.924661.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-15T18-30-04.924661.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-15T18-30-04.924661.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-15T18-30-04.924661.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-15T18-30-04.924661.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-15T18-30-04.924661.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-15T18-30-04.924661.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-15T18-30-04.924661.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-15T18-30-04.924661.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-15T18-30-04.924661.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-15T18-30-04.924661.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-15T18-30-04.924661.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-15T18-30-04.924661.parquet", 
"**/details_harness|hendrycksTest-prehistory|5_2024-01-15T18-30-04.924661.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-15T18-30-04.924661.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-15T18-30-04.924661.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-15T18-30-04.924661.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-15T18-30-04.924661.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-15T18-30-04.924661.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-15T18-30-04.924661.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-15T18-30-04.924661.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-15T18-30-04.924661.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-15T18-30-04.924661.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-15T18-30-04.924661.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2024_01_15T18_30_04.924661", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-15T18-30-04.924661.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-15T18-30-04.924661.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2024_01_15T18_30_04.924661", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-15T18-30-04.924661.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-15T18-30-04.924661.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2024_01_15T18_30_04.924661", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-15T18-30-04.924661.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-15T18-30-04.924661.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2024_01_15T18_30_04.924661", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-15T18-30-04.924661.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-15T18-30-04.924661.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2024_01_15T18_30_04.924661", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-15T18-30-04.924661.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-15T18-30-04.924661.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2024_01_15T18_30_04.924661", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-15T18-30-04.924661.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-15T18-30-04.924661.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2024_01_15T18_30_04.924661", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-15T18-30-04.924661.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-15T18-30-04.924661.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2024_01_15T18_30_04.924661", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-15T18-30-04.924661.parquet"]}, 
{"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-15T18-30-04.924661.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2024_01_15T18_30_04.924661", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-15T18-30-04.924661.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-15T18-30-04.924661.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2024_01_15T18_30_04.924661", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-15T18-30-04.924661.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-15T18-30-04.924661.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2024_01_15T18_30_04.924661", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-15T18-30-04.924661.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-15T18-30-04.924661.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2024_01_15T18_30_04.924661", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-15T18-30-04.924661.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-15T18-30-04.924661.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2024_01_15T18_30_04.924661", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-15T18-30-04.924661.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-15T18-30-04.924661.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2024_01_15T18_30_04.924661", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-15T18-30-04.924661.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-15T18-30-04.924661.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2024_01_15T18_30_04.924661", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-15T18-30-04.924661.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-15T18-30-04.924661.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2024_01_15T18_30_04.924661", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-15T18-30-04.924661.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-15T18-30-04.924661.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2024_01_15T18_30_04.924661", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-15T18-30-04.924661.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-15T18-30-04.924661.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2024_01_15T18_30_04.924661", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-15T18-30-04.924661.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-15T18-30-04.924661.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2024_01_15T18_30_04.924661", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-15T18-30-04.924661.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-15T18-30-04.924661.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2024_01_15T18_30_04.924661", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-15T18-30-04.924661.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-15T18-30-04.924661.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2024_01_15T18_30_04.924661", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-15T18-30-04.924661.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-15T18-30-04.924661.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2024_01_15T18_30_04.924661", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-15T18-30-04.924661.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-15T18-30-04.924661.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2024_01_15T18_30_04.924661", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-15T18-30-04.924661.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-15T18-30-04.924661.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2024_01_15T18_30_04.924661", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-15T18-30-04.924661.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-15T18-30-04.924661.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2024_01_15T18_30_04.924661", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-15T18-30-04.924661.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-15T18-30-04.924661.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2024_01_15T18_30_04.924661", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-15T18-30-04.924661.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-15T18-30-04.924661.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2024_01_15T18_30_04.924661", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-15T18-30-04.924661.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-15T18-30-04.924661.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2024_01_15T18_30_04.924661", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-15T18-30-04.924661.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-15T18-30-04.924661.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2024_01_15T18_30_04.924661", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-15T18-30-04.924661.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-15T18-30-04.924661.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2024_01_15T18_30_04.924661", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-15T18-30-04.924661.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-15T18-30-04.924661.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2024_01_15T18_30_04.924661", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-15T18-30-04.924661.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-15T18-30-04.924661.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2024_01_15T18_30_04.924661", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-15T18-30-04.924661.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-15T18-30-04.924661.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2024_01_15T18_30_04.924661", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-15T18-30-04.924661.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-15T18-30-04.924661.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2024_01_15T18_30_04.924661", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-15T18-30-04.924661.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-15T18-30-04.924661.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2024_01_15T18_30_04.924661", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-15T18-30-04.924661.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-15T18-30-04.924661.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2024_01_15T18_30_04.924661", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-15T18-30-04.924661.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-15T18-30-04.924661.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2024_01_15T18_30_04.924661", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-15T18-30-04.924661.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-15T18-30-04.924661.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2024_01_15T18_30_04.924661", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-15T18-30-04.924661.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-15T18-30-04.924661.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2024_01_15T18_30_04.924661", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-15T18-30-04.924661.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-15T18-30-04.924661.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2024_01_15T18_30_04.924661", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-15T18-30-04.924661.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-15T18-30-04.924661.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2024_01_15T18_30_04.924661", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-15T18-30-04.924661.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-15T18-30-04.924661.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2024_01_15T18_30_04.924661", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-15T18-30-04.924661.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-15T18-30-04.924661.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2024_01_15T18_30_04.924661", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-15T18-30-04.924661.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-15T18-30-04.924661.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2024_01_15T18_30_04.924661", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-15T18-30-04.924661.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-15T18-30-04.924661.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2024_01_15T18_30_04.924661", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-15T18-30-04.924661.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-15T18-30-04.924661.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2024_01_15T18_30_04.924661", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-15T18-30-04.924661.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-15T18-30-04.924661.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2024_01_15T18_30_04.924661", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-15T18-30-04.924661.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-15T18-30-04.924661.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2024_01_15T18_30_04.924661", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-15T18-30-04.924661.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-15T18-30-04.924661.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2024_01_15T18_30_04.924661", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-01-15T18-30-04.924661.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-professional_law|5_2024-01-15T18-30-04.924661.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2024_01_15T18_30_04.924661", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-15T18-30-04.924661.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-15T18-30-04.924661.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2024_01_15T18_30_04.924661", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-15T18-30-04.924661.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-15T18-30-04.924661.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2024_01_15T18_30_04.924661", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-15T18-30-04.924661.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-15T18-30-04.924661.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2024_01_15T18_30_04.924661", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-15T18-30-04.924661.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-15T18-30-04.924661.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2024_01_15T18_30_04.924661", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-15T18-30-04.924661.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-15T18-30-04.924661.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2024_01_15T18_30_04.924661", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-15T18-30-04.924661.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-15T18-30-04.924661.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2024_01_15T18_30_04.924661", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-15T18-30-04.924661.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-15T18-30-04.924661.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2024_01_15T18_30_04.924661", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-15T18-30-04.924661.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-15T18-30-04.924661.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2024_01_15T18_30_04.924661", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-15T18-30-04.924661.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-15T18-30-04.924661.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2024_01_15T18_30_04.924661", "path": ["**/details_harness|winogrande|5_2024-01-15T18-30-04.924661.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2024-01-15T18-30-04.924661.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2024_01_15T18_30_04.924661", "path": ["results_2024-01-15T18-30-04.924661.parquet"]}, {"split": "latest", "path": 
["results_2024-01-15T18-30-04.924661.parquet"]}]}]} | 2024-01-15T18:32:44+00:00 | [] | [] | TAGS
# Dataset of tamade_chiyu/珠手ちゆ (BanG Dream! Dai 2-ki)
This is the dataset of tamade_chiyu/珠手ちゆ (BanG Dream! Dai 2-ki), containing 119 images and their tags.
The core tags of this character are `long_hair, blue_eyes, bangs, red_hair, ahoge, animal_ears, headphones, fake_animal_ears, animal_ear_headphones, cat_ear_headphones, hair_between_eyes, very_long_hair`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:------------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 119 | 154.01 MiB | [Download](https://huggingface.co/datasets/CyberHarem/tamade_chiyu_bangdreamdai2ki/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 119 | 90.11 MiB | [Download](https://huggingface.co/datasets/CyberHarem/tamade_chiyu_bangdreamdai2ki/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 275 | 189.19 MiB | [Download](https://huggingface.co/datasets/CyberHarem/tamade_chiyu_bangdreamdai2ki/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 119 | 137.29 MiB | [Download](https://huggingface.co/datasets/CyberHarem/tamade_chiyu_bangdreamdai2ki/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 275 | 269.82 MiB | [Download](https://huggingface.co/datasets/CyberHarem/tamade_chiyu_bangdreamdai2ki/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for loading with [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html). If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/tamade_chiyu_bangdreamdai2ki',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
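If you prefer not to depend on waifuc, the pre-processed packages can also be read with the standard library alone. The sketch below is a minimal example, assuming the usual IMG+TXT layout in which every image in `dataset-800.zip` (from the package table above) is paired with a same-named `.txt` file holding its comma-separated tags:

```python
import os
import zipfile
from huggingface_hub import hf_hub_download

# download one of the pre-processed IMG+TXT archives instead of the raw data
zip_file = hf_hub_download(
    repo_id='CyberHarem/tamade_chiyu_bangdreamdai2ki',
    repo_type='dataset',
    filename='dataset-800.zip',
)

# extract files to your directory
dataset_dir = 'dataset_800'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# pair every image with its same-named .txt tag file (assumed layout)
for name in sorted(os.listdir(dataset_dir)):
    stem, ext = os.path.splitext(name)
    if ext.lower() not in {'.png', '.jpg', '.jpeg', '.webp'}:
        continue
    tag_file = os.path.join(dataset_dir, stem + '.txt')
    if os.path.exists(tag_file):
        with open(tag_file, 'r', encoding='utf-8') as f:
            tags = f.read().strip()
        print(name, '->', tags)
```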
## List of Clusters
List of tag clustering results; some outfits may be mined from these clusters.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 52 |  |  |  |  |  | white_shirt, 1girl, long_sleeves, solo, looking_at_viewer, red_necktie, striped_necktie, collared_shirt, blazer, school_uniform, blue_skirt, blush, plaid_skirt, black_jacket, pleated_skirt, v-shaped_eyebrows, smile, blue_jacket, white_background, open_jacket, open_mouth |
| 1 | 10 |  |  |  |  |  | 1girl, solo, long_sleeves, looking_at_viewer, black_gloves, fingerless_gloves, black_choker, black_shorts, collarbone, grin, open_clothes, short_shorts, teeth, white_shirt, black_jacket, blush, holding_microphone, pink_hair, sidelocks, standing, v-shaped_eyebrows |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | white_shirt | 1girl | long_sleeves | solo | looking_at_viewer | red_necktie | striped_necktie | collared_shirt | blazer | school_uniform | blue_skirt | blush | plaid_skirt | black_jacket | pleated_skirt | v-shaped_eyebrows | smile | blue_jacket | white_background | open_jacket | open_mouth | black_gloves | fingerless_gloves | black_choker | black_shorts | collarbone | grin | open_clothes | short_shorts | teeth | holding_microphone | pink_hair | sidelocks | standing |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------|:--------|:---------------|:-------|:--------------------|:--------------|:------------------|:-----------------|:---------|:-----------------|:-------------|:--------|:--------------|:---------------|:----------------|:--------------------|:--------|:--------------|:-------------------|:--------------|:-------------|:---------------|:--------------------|:---------------|:---------------|:-------------|:-------|:---------------|:---------------|:--------|:---------------------|:------------|:------------|:-----------|
| 0 | 52 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | |
| 1 | 10 |  |  |  |  |  | X | X | X | X | X | | | | | | | X | | X | | X | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X |
Dataset ID: `CyberHarem/tamade_chiyu_bangdreamdai2ki` (license: mit; task: text-to-image; size: n<1K; tags: art, not-for-all-audiences, region:us; created 2024-01-15T18:35:35+00:00; last modified 2024-01-15T19:00:53+00:00).
# Dataset of toyama_asuka (BanG Dream!)
This is the dataset of toyama_asuka (BanG Dream!), containing 12 images and their tags.
The core tags of this character are `brown_hair, purple_eyes, short_hair, bangs, hair_ornament, hairclip`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:----------|:------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 12 | 11.43 MiB | [Download](https://huggingface.co/datasets/CyberHarem/toyama_asuka_bangdream/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 12 | 6.24 MiB | [Download](https://huggingface.co/datasets/CyberHarem/toyama_asuka_bangdream/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 28 | 13.28 MiB | [Download](https://huggingface.co/datasets/CyberHarem/toyama_asuka_bangdream/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 12 | 10.12 MiB | [Download](https://huggingface.co/datasets/CyberHarem/toyama_asuka_bangdream/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 28 | 18.80 MiB | [Download](https://huggingface.co/datasets/CyberHarem/toyama_asuka_bangdream/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for loading with [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html). If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/toyama_asuka_bangdream',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined from these clusters.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-----------------------------------------------------------------------------------------|
| 0 | 12 |  |  |  |  |  | 1girl, solo, blush, looking_at_viewer, shirt, holding, closed_mouth, long_sleeves, smile |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | solo | blush | looking_at_viewer | shirt | holding | closed_mouth | long_sleeves | smile |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------|:--------|:--------------------|:--------|:----------|:---------------|:---------------|:--------|
| 0 | 12 |  |  |  |  |  | X | X | X | X | X | X | X | X | X |
Dataset ID: `CyberHarem/toyama_asuka_bangdream` (license: mit; task: text-to-image; size: n<1K; tags: art, not-for-all-audiences, region:us; created 2024-01-15T18:35:43+00:00; last modified 2024-01-15T18:38:12+00:00).
# Dataset of asahi_rokka/朝日六花 (BanG Dream! Dai 2-ki)
This is the dataset of asahi_rokka/朝日六花 (BanG Dream! Dai 2-ki), containing 87 images and their tags.
The core tags of this character are `green_eyes, long_hair, bangs, blue_hair, hair_between_eyes, hair_ornament`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:-----------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 87 | 114.99 MiB | [Download](https://huggingface.co/datasets/CyberHarem/asahi_rokka_bangdreamdai2ki/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 87 | 68.01 MiB | [Download](https://huggingface.co/datasets/CyberHarem/asahi_rokka_bangdreamdai2ki/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 197 | 137.41 MiB | [Download](https://huggingface.co/datasets/CyberHarem/asahi_rokka_bangdreamdai2ki/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 87 | 100.48 MiB | [Download](https://huggingface.co/datasets/CyberHarem/asahi_rokka_bangdreamdai2ki/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 197 | 190.07 MiB | [Download](https://huggingface.co/datasets/CyberHarem/asahi_rokka_bangdreamdai2ki/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for loading with [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html). If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/asahi_rokka_bangdreamdai2ki',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined from these clusters.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 9 |  |  |  |  |  | 1girl, electric_guitar, holding_instrument, solo, looking_at_viewer, playing_instrument, black_shirt, blush, jewelry, standing, open_mouth, plectrum |
| 1 | 7 |  |  |  |  |  | 1girl, black-framed_eyewear, glasses, hair_over_shoulder, looking_at_viewer, solo, white_background, blush, hair_scrunchie, simple_background, red_scrunchie, standing, breasts, closed_mouth, collarbone, floral_print, low_ponytail, open_mouth, print_dress, shirt, smile |
| 2 | 11 |  |  |  |  |  | 1girl, solo, blazer, glasses, long_sleeves, school_uniform, black-framed_eyewear, collared_shirt, grey_jacket, looking_at_viewer, hair_over_shoulder, hair_scrunchie, white_shirt, blush, red_scrunchie, upper_body, green_necktie, plaid_skirt, pleated_skirt, star_(symbol), closed_mouth, diagonal-striped_necktie, diagonal_stripes, electric_guitar, eyewear_removed, green_skirt, holding_eyewear, holding_instrument, low_ponytail, simple_background, smile, socks, white_background |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | electric_guitar | holding_instrument | solo | looking_at_viewer | playing_instrument | black_shirt | blush | jewelry | standing | open_mouth | plectrum | black-framed_eyewear | glasses | hair_over_shoulder | white_background | hair_scrunchie | simple_background | red_scrunchie | breasts | closed_mouth | collarbone | floral_print | low_ponytail | print_dress | shirt | smile | blazer | long_sleeves | school_uniform | collared_shirt | grey_jacket | white_shirt | upper_body | green_necktie | plaid_skirt | pleated_skirt | star_(symbol) | diagonal-striped_necktie | diagonal_stripes | eyewear_removed | green_skirt | holding_eyewear | socks |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:------------------|:---------------------|:-------|:--------------------|:---------------------|:--------------|:--------|:----------|:-----------|:-------------|:-----------|:-----------------------|:----------|:---------------------|:-------------------|:-----------------|:--------------------|:----------------|:----------|:---------------|:-------------|:---------------|:---------------|:--------------|:--------|:--------|:---------|:---------------|:-----------------|:-----------------|:--------------|:--------------|:-------------|:----------------|:--------------|:----------------|:----------------|:---------------------------|:-------------------|:------------------|:--------------|:------------------|:--------|
| 0 | 9 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 7 |  |  |  |  |  | X | | | X | X | | | X | | X | X | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | |
| 2 | 11 |  |  |  |  |  | X | X | X | X | X | | | X | | | | | X | X | X | X | X | X | X | | X | | | X | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
Dataset ID: `CyberHarem/asahi_rokka_bangdreamdai2ki` (license: mit; task: text-to-image; size: n<1K; tags: art, not-for-all-audiences, region:us; created 2024-01-15T18:35:47+00:00; last modified 2024-01-15T18:56:48+00:00).
# Dataset of wakana_rei/和奏レイ (BanG Dream! Dai 2-ki)
This is the dataset of wakana_rei/和奏レイ (BanG Dream! Dai 2-ki), containing 66 images and their tags.
The core tags of this character are `blue_eyes, long_hair, black_hair, brown_hair, breasts`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:----------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 66 | 77.33 MiB | [Download](https://huggingface.co/datasets/CyberHarem/wakana_rei_bangdreamdai2ki/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 66 | 51.07 MiB | [Download](https://huggingface.co/datasets/CyberHarem/wakana_rei_bangdreamdai2ki/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 141 | 97.63 MiB | [Download](https://huggingface.co/datasets/CyberHarem/wakana_rei_bangdreamdai2ki/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 66 | 70.80 MiB | [Download](https://huggingface.co/datasets/CyberHarem/wakana_rei_bangdreamdai2ki/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 141 | 132.21 MiB | [Download](https://huggingface.co/datasets/CyberHarem/wakana_rei_bangdreamdai2ki/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for loading with [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html). If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/wakana_rei_bangdreamdai2ki',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined from these clusters.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 5 |  |  |  |  |  | 1girl, black_gloves, looking_at_viewer, ponytail, solo, black_serafuku, black_skirt, earrings, pleated_skirt, smile, black_choker, black_sailor_collar, midriff, navel, open_mouth, red_neckerchief, short_sleeves, suspenders, bandaged_arm, black_shirt, blush, chain, cowboy_shot, crop_top, guitar, half_gloves, holding_microphone, parted_bangs, piercing, red_ribbon, simple_background, standing, v-shaped_eyebrows, white_background |
| 1 | 8 |  |  |  |  |  | 1girl, smile, solo, belt, jeans, looking_at_viewer, simple_background, white_background, white_shirt, standing, blue_pants, closed_mouth, collarbone, full_body, holding, jacket, long_sleeves, necklace, torn_pants, vest |
| 2 | 5 |  |  |  |  |  | 1girl, earrings, solo, bare_shoulders, looking_at_viewer, short_hair, smile, black_gloves, flower, hair_ornament, red_dress, closed_mouth, feather_boa, upper_body |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | black_gloves | looking_at_viewer | ponytail | solo | black_serafuku | black_skirt | earrings | pleated_skirt | smile | black_choker | black_sailor_collar | midriff | navel | open_mouth | red_neckerchief | short_sleeves | suspenders | bandaged_arm | black_shirt | blush | chain | cowboy_shot | crop_top | guitar | half_gloves | holding_microphone | parted_bangs | piercing | red_ribbon | simple_background | standing | v-shaped_eyebrows | white_background | belt | jeans | white_shirt | blue_pants | closed_mouth | collarbone | full_body | holding | jacket | long_sleeves | necklace | torn_pants | vest | bare_shoulders | short_hair | flower | hair_ornament | red_dress | feather_boa | upper_body |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:---------------|:--------------------|:-----------|:-------|:-----------------|:--------------|:-----------|:----------------|:--------|:---------------|:----------------------|:----------|:--------|:-------------|:------------------|:----------------|:-------------|:---------------|:--------------|:--------|:--------|:--------------|:-----------|:---------|:--------------|:---------------------|:---------------|:-----------|:-------------|:--------------------|:-----------|:--------------------|:-------------------|:-------|:--------|:--------------|:-------------|:---------------|:-------------|:------------|:----------|:---------|:---------------|:-----------|:-------------|:-------|:-----------------|:-------------|:---------|:----------------|:------------|:--------------|:-------------|
| 0 | 5 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | |
| 1 | 8 |  |  |  |  |  | X | | X | | X | | | | | X | | | | | | | | | | | | | | | | | | | | | X | X | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | |
| 2 | 5 |  |  |  |  |  | X | X | X | | X | | | X | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | | | | X | X | X | X | X | X | X |
Dataset ID: `CyberHarem/wakana_rei_bangdreamdai2ki` (license: mit; task: text-to-image; size: n<1K; tags: art, not-for-all-audiences, region:us; created 2024-01-15T18:41:37+00:00; last modified 2024-01-15T18:55:12+00:00).
# Dataset Card for Evaluation run of CultriX/MergeTrix-7B
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [CultriX/MergeTrix-7B](https://huggingface.co/CultriX/MergeTrix-7B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_CultriX__MergeTrix-7B",
"harness_winogrande_5",
split="train")
```
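The aggregated metrics can be loaded the same way. A minimal sketch, assuming the extra configuration is addressable under the name "results" exactly as described above:

```python
from datasets import load_dataset

# aggregated results of the run; "train" always points to the latest results
results = load_dataset(
    "open-llm-leaderboard/details_CultriX__MergeTrix-7B",
    "results",
    split="train",
)
print(results[0])
```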
## Latest results
These are the [latest results from run 2024-01-15T18:47:00.908192](https://huggingface.co/datasets/open-llm-leaderboard/details_CultriX__MergeTrix-7B/blob/main/results_2024-01-15T18-47-00.908192.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each one in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.6547663148626479,
"acc_stderr": 0.032063309005698364,
"acc_norm": 0.6539809118606704,
"acc_norm_stderr": 0.03273532890766587,
"mc1": 0.5091799265605875,
"mc1_stderr": 0.017500550724819756,
"mc2": 0.6627204029237665,
"mc2_stderr": 0.015270623317149531
},
"harness|arc:challenge|25": {
"acc": 0.7005119453924915,
"acc_stderr": 0.013385021637313572,
"acc_norm": 0.7226962457337884,
"acc_norm_stderr": 0.013082095839059376
},
"harness|hellaswag|10": {
"acc": 0.709520015933081,
"acc_stderr": 0.004530560646902538,
"acc_norm": 0.8784106751643099,
"acc_norm_stderr": 0.003261429855277799
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.37,
"acc_stderr": 0.04852365870939099,
"acc_norm": 0.37,
"acc_norm_stderr": 0.04852365870939099
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6444444444444445,
"acc_stderr": 0.04135176749720385,
"acc_norm": 0.6444444444444445,
"acc_norm_stderr": 0.04135176749720385
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.7171052631578947,
"acc_stderr": 0.03665349695640767,
"acc_norm": 0.7171052631578947,
"acc_norm_stderr": 0.03665349695640767
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.63,
"acc_stderr": 0.04852365870939099,
"acc_norm": 0.63,
"acc_norm_stderr": 0.04852365870939099
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.7169811320754716,
"acc_stderr": 0.027724236492700914,
"acc_norm": 0.7169811320754716,
"acc_norm_stderr": 0.027724236492700914
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7777777777777778,
"acc_stderr": 0.03476590104304134,
"acc_norm": 0.7777777777777778,
"acc_norm_stderr": 0.03476590104304134
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.48,
"acc_stderr": 0.050211673156867795,
"acc_norm": 0.48,
"acc_norm_stderr": 0.050211673156867795
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.54,
"acc_stderr": 0.05009082659620333,
"acc_norm": 0.54,
"acc_norm_stderr": 0.05009082659620333
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.29,
"acc_stderr": 0.04560480215720684,
"acc_norm": 0.29,
"acc_norm_stderr": 0.04560480215720684
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6589595375722543,
"acc_stderr": 0.03614665424180826,
"acc_norm": 0.6589595375722543,
"acc_norm_stderr": 0.03614665424180826
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.39215686274509803,
"acc_stderr": 0.04858083574266345,
"acc_norm": 0.39215686274509803,
"acc_norm_stderr": 0.04858083574266345
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.75,
"acc_stderr": 0.04351941398892446,
"acc_norm": 0.75,
"acc_norm_stderr": 0.04351941398892446
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5787234042553191,
"acc_stderr": 0.03227834510146268,
"acc_norm": 0.5787234042553191,
"acc_norm_stderr": 0.03227834510146268
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.5087719298245614,
"acc_stderr": 0.04702880432049615,
"acc_norm": 0.5087719298245614,
"acc_norm_stderr": 0.04702880432049615
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5655172413793104,
"acc_stderr": 0.04130740879555498,
"acc_norm": 0.5655172413793104,
"acc_norm_stderr": 0.04130740879555498
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.41798941798941797,
"acc_stderr": 0.025402555503260912,
"acc_norm": 0.41798941798941797,
"acc_norm_stderr": 0.025402555503260912
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.4603174603174603,
"acc_stderr": 0.04458029125470973,
"acc_norm": 0.4603174603174603,
"acc_norm_stderr": 0.04458029125470973
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.35,
"acc_stderr": 0.04793724854411018,
"acc_norm": 0.35,
"acc_norm_stderr": 0.04793724854411018
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7935483870967742,
"acc_stderr": 0.023025899617188723,
"acc_norm": 0.7935483870967742,
"acc_norm_stderr": 0.023025899617188723
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.4975369458128079,
"acc_stderr": 0.03517945038691063,
"acc_norm": 0.4975369458128079,
"acc_norm_stderr": 0.03517945038691063
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.71,
"acc_stderr": 0.045604802157206845,
"acc_norm": 0.71,
"acc_norm_stderr": 0.045604802157206845
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7818181818181819,
"acc_stderr": 0.03225078108306289,
"acc_norm": 0.7818181818181819,
"acc_norm_stderr": 0.03225078108306289
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7777777777777778,
"acc_stderr": 0.029620227874790482,
"acc_norm": 0.7777777777777778,
"acc_norm_stderr": 0.029620227874790482
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.9067357512953368,
"acc_stderr": 0.02098685459328973,
"acc_norm": 0.9067357512953368,
"acc_norm_stderr": 0.02098685459328973
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6615384615384615,
"acc_stderr": 0.023991500500313036,
"acc_norm": 0.6615384615384615,
"acc_norm_stderr": 0.023991500500313036
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.3333333333333333,
"acc_stderr": 0.02874204090394848,
"acc_norm": 0.3333333333333333,
"acc_norm_stderr": 0.02874204090394848
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6680672268907563,
"acc_stderr": 0.03058869701378364,
"acc_norm": 0.6680672268907563,
"acc_norm_stderr": 0.03058869701378364
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.3509933774834437,
"acc_stderr": 0.03896981964257375,
"acc_norm": 0.3509933774834437,
"acc_norm_stderr": 0.03896981964257375
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8477064220183487,
"acc_stderr": 0.015405084393157074,
"acc_norm": 0.8477064220183487,
"acc_norm_stderr": 0.015405084393157074
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5277777777777778,
"acc_stderr": 0.0340470532865388,
"acc_norm": 0.5277777777777778,
"acc_norm_stderr": 0.0340470532865388
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.8480392156862745,
"acc_stderr": 0.025195658428931792,
"acc_norm": 0.8480392156862745,
"acc_norm_stderr": 0.025195658428931792
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.8016877637130801,
"acc_stderr": 0.02595502084162113,
"acc_norm": 0.8016877637130801,
"acc_norm_stderr": 0.02595502084162113
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.695067264573991,
"acc_stderr": 0.030898610882477515,
"acc_norm": 0.695067264573991,
"acc_norm_stderr": 0.030898610882477515
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7938931297709924,
"acc_stderr": 0.03547771004159463,
"acc_norm": 0.7938931297709924,
"acc_norm_stderr": 0.03547771004159463
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.7851239669421488,
"acc_stderr": 0.037494924487096966,
"acc_norm": 0.7851239669421488,
"acc_norm_stderr": 0.037494924487096966
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7777777777777778,
"acc_stderr": 0.0401910747255735,
"acc_norm": 0.7777777777777778,
"acc_norm_stderr": 0.0401910747255735
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7668711656441718,
"acc_stderr": 0.0332201579577674,
"acc_norm": 0.7668711656441718,
"acc_norm_stderr": 0.0332201579577674
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.4375,
"acc_stderr": 0.04708567521880525,
"acc_norm": 0.4375,
"acc_norm_stderr": 0.04708567521880525
},
"harness|hendrycksTest-management|5": {
"acc": 0.7766990291262136,
"acc_stderr": 0.04123553189891431,
"acc_norm": 0.7766990291262136,
"acc_norm_stderr": 0.04123553189891431
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8803418803418803,
"acc_stderr": 0.021262719400406957,
"acc_norm": 0.8803418803418803,
"acc_norm_stderr": 0.021262719400406957
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.7,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.7,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8275862068965517,
"acc_stderr": 0.013507943909371802,
"acc_norm": 0.8275862068965517,
"acc_norm_stderr": 0.013507943909371802
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7456647398843931,
"acc_stderr": 0.023445826276545543,
"acc_norm": 0.7456647398843931,
"acc_norm_stderr": 0.023445826276545543
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.44692737430167595,
"acc_stderr": 0.016628030039647614,
"acc_norm": 0.44692737430167595,
"acc_norm_stderr": 0.016628030039647614
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.738562091503268,
"acc_stderr": 0.025160998214292452,
"acc_norm": 0.738562091503268,
"acc_norm_stderr": 0.025160998214292452
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.707395498392283,
"acc_stderr": 0.02583989833487798,
"acc_norm": 0.707395498392283,
"acc_norm_stderr": 0.02583989833487798
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7561728395061729,
"acc_stderr": 0.023891879541959607,
"acc_norm": 0.7561728395061729,
"acc_norm_stderr": 0.023891879541959607
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.48936170212765956,
"acc_stderr": 0.029820747191422473,
"acc_norm": 0.48936170212765956,
"acc_norm_stderr": 0.029820747191422473
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.4758800521512386,
"acc_stderr": 0.012755368722863935,
"acc_norm": 0.4758800521512386,
"acc_norm_stderr": 0.012755368722863935
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6838235294117647,
"acc_stderr": 0.028245687391462923,
"acc_norm": 0.6838235294117647,
"acc_norm_stderr": 0.028245687391462923
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6830065359477124,
"acc_stderr": 0.018824219512706207,
"acc_norm": 0.6830065359477124,
"acc_norm_stderr": 0.018824219512706207
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6454545454545455,
"acc_stderr": 0.045820048415054174,
"acc_norm": 0.6454545454545455,
"acc_norm_stderr": 0.045820048415054174
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7428571428571429,
"acc_stderr": 0.02797982353874455,
"acc_norm": 0.7428571428571429,
"acc_norm_stderr": 0.02797982353874455
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.835820895522388,
"acc_stderr": 0.02619392354445412,
"acc_norm": 0.835820895522388,
"acc_norm_stderr": 0.02619392354445412
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.84,
"acc_stderr": 0.03684529491774708,
"acc_norm": 0.84,
"acc_norm_stderr": 0.03684529491774708
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5662650602409639,
"acc_stderr": 0.03858158940685516,
"acc_norm": 0.5662650602409639,
"acc_norm_stderr": 0.03858158940685516
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8304093567251462,
"acc_stderr": 0.02878210810540171,
"acc_norm": 0.8304093567251462,
"acc_norm_stderr": 0.02878210810540171
},
"harness|truthfulqa:mc|0": {
"mc1": 0.5091799265605875,
"mc1_stderr": 0.017500550724819756,
"mc2": 0.6627204029237665,
"mc2_stderr": 0.015270623317149531
},
"harness|winogrande|5": {
"acc": 0.835043409629045,
"acc_stderr": 0.01043091746823743
},
"harness|gsm8k|5": {
"acc": 0.7119029567854435,
"acc_stderr": 0.01247446973719792
}
}
```
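For a quick sanity check of the headline numbers, the summary block above can be parsed directly. A minimal sketch, assuming the excerpt is saved verbatim as `results_2024-01-15T18-47-00.908192.json` (the file name used in the link above; the full file on the Hub may wrap this dictionary in additional keys):

```python
import json

# load the results excerpt shown above (assumed to be saved verbatim to disk)
with open("results_2024-01-15T18-47-00.908192.json", "r") as f:
    results = json.load(f)

# headline aggregate metrics
summary = results["all"]
print(f"acc      : {summary['acc']:.4f} +/- {summary['acc_stderr']:.4f}")
print(f"acc_norm : {summary['acc_norm']:.4f} +/- {summary['acc_norm_stderr']:.4f}")

# per-task accuracy, sorted from weakest to strongest
tasks = {k: v["acc"] for k, v in results.items() if k != "all" and "acc" in v}
for task, acc in sorted(tasks.items(), key=lambda kv: kv[1]):
    print(f"{acc:.3f}  {task}")
```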
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed]

Dataset ID: `open-llm-leaderboard/details_CultriX__MergeTrix-7B` (tags: region:us; created 2024-01-15T18:49:19+00:00).
0.013507943909371802\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.7456647398843931,\n \"acc_stderr\": 0.023445826276545543,\n \"acc_norm\": 0.7456647398843931,\n \"acc_norm_stderr\": 0.023445826276545543\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.44692737430167595,\n \"acc_stderr\": 0.016628030039647614,\n \"acc_norm\": 0.44692737430167595,\n \"acc_norm_stderr\": 0.016628030039647614\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.738562091503268,\n \"acc_stderr\": 0.025160998214292452,\n \"acc_norm\": 0.738562091503268,\n \"acc_norm_stderr\": 0.025160998214292452\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.707395498392283,\n \"acc_stderr\": 0.02583989833487798,\n \"acc_norm\": 0.707395498392283,\n \"acc_norm_stderr\": 0.02583989833487798\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.7561728395061729,\n \"acc_stderr\": 0.023891879541959607,\n \"acc_norm\": 0.7561728395061729,\n \"acc_norm_stderr\": 0.023891879541959607\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.48936170212765956,\n \"acc_stderr\": 0.029820747191422473,\n \"acc_norm\": 0.48936170212765956,\n \"acc_norm_stderr\": 0.029820747191422473\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4758800521512386,\n \"acc_stderr\": 0.012755368722863935,\n \"acc_norm\": 0.4758800521512386,\n \"acc_norm_stderr\": 0.012755368722863935\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.6838235294117647,\n \"acc_stderr\": 0.028245687391462923,\n \"acc_norm\": 0.6838235294117647,\n \"acc_norm_stderr\": 0.028245687391462923\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.6830065359477124,\n \"acc_stderr\": 0.018824219512706207,\n \"acc_norm\": 0.6830065359477124,\n \"acc_norm_stderr\": 0.018824219512706207\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6454545454545455,\n \"acc_stderr\": 0.045820048415054174,\n \"acc_norm\": 0.6454545454545455,\n \"acc_norm_stderr\": 0.045820048415054174\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.7428571428571429,\n \"acc_stderr\": 0.02797982353874455,\n \"acc_norm\": 0.7428571428571429,\n \"acc_norm_stderr\": 0.02797982353874455\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.835820895522388,\n \"acc_stderr\": 0.02619392354445412,\n \"acc_norm\": 0.835820895522388,\n \"acc_norm_stderr\": 0.02619392354445412\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.84,\n \"acc_stderr\": 0.03684529491774708,\n \"acc_norm\": 0.84,\n \"acc_norm_stderr\": 0.03684529491774708\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5662650602409639,\n \"acc_stderr\": 0.03858158940685516,\n \"acc_norm\": 0.5662650602409639,\n \"acc_norm_stderr\": 0.03858158940685516\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.8304093567251462,\n \"acc_stderr\": 0.02878210810540171,\n \"acc_norm\": 0.8304093567251462,\n \"acc_norm_stderr\": 0.02878210810540171\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.5091799265605875,\n \"mc1_stderr\": 0.017500550724819756,\n \"mc2\": 0.6627204029237665,\n \"mc2_stderr\": 0.015270623317149531\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.835043409629045,\n \"acc_stderr\": 0.01043091746823743\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.7119029567854435,\n \"acc_stderr\": 0.01247446973719792\n }\n}\n```", "repo_url": "https://huggingface.co/CultriX/MergeTrix-7B", "leaderboard_url": 
"https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2024_01_15T18_47_00.908192", "path": ["**/details_harness|arc:challenge|25_2024-01-15T18-47-00.908192.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2024-01-15T18-47-00.908192.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2024_01_15T18_47_00.908192", "path": ["**/details_harness|gsm8k|5_2024-01-15T18-47-00.908192.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2024-01-15T18-47-00.908192.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2024_01_15T18_47_00.908192", "path": ["**/details_harness|hellaswag|10_2024-01-15T18-47-00.908192.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2024-01-15T18-47-00.908192.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2024_01_15T18_47_00.908192", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-15T18-47-00.908192.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-15T18-47-00.908192.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-15T18-47-00.908192.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-15T18-47-00.908192.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-15T18-47-00.908192.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-15T18-47-00.908192.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-15T18-47-00.908192.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-15T18-47-00.908192.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-01-15T18-47-00.908192.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-15T18-47-00.908192.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-15T18-47-00.908192.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-15T18-47-00.908192.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-15T18-47-00.908192.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-15T18-47-00.908192.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-15T18-47-00.908192.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-15T18-47-00.908192.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-15T18-47-00.908192.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-15T18-47-00.908192.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-15T18-47-00.908192.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-15T18-47-00.908192.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-15T18-47-00.908192.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-15T18-47-00.908192.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-15T18-47-00.908192.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-15T18-47-00.908192.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-15T18-47-00.908192.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-15T18-47-00.908192.parquet", 
"**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-15T18-47-00.908192.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-15T18-47-00.908192.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-15T18-47-00.908192.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-15T18-47-00.908192.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-15T18-47-00.908192.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-15T18-47-00.908192.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-15T18-47-00.908192.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-15T18-47-00.908192.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-15T18-47-00.908192.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-15T18-47-00.908192.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-15T18-47-00.908192.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-15T18-47-00.908192.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-15T18-47-00.908192.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-15T18-47-00.908192.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-15T18-47-00.908192.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-15T18-47-00.908192.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-15T18-47-00.908192.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-15T18-47-00.908192.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-15T18-47-00.908192.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-15T18-47-00.908192.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-15T18-47-00.908192.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-15T18-47-00.908192.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-15T18-47-00.908192.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-15T18-47-00.908192.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-15T18-47-00.908192.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-15T18-47-00.908192.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-15T18-47-00.908192.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-15T18-47-00.908192.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-15T18-47-00.908192.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-15T18-47-00.908192.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-15T18-47-00.908192.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-15T18-47-00.908192.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-15T18-47-00.908192.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-15T18-47-00.908192.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-15T18-47-00.908192.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-15T18-47-00.908192.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-15T18-47-00.908192.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-15T18-47-00.908192.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-15T18-47-00.908192.parquet", 
"**/details_harness|hendrycksTest-college_mathematics|5_2024-01-15T18-47-00.908192.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-15T18-47-00.908192.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-15T18-47-00.908192.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-15T18-47-00.908192.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-15T18-47-00.908192.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-15T18-47-00.908192.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-15T18-47-00.908192.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-15T18-47-00.908192.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-15T18-47-00.908192.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-15T18-47-00.908192.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-15T18-47-00.908192.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-15T18-47-00.908192.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-15T18-47-00.908192.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-15T18-47-00.908192.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-15T18-47-00.908192.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-15T18-47-00.908192.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-15T18-47-00.908192.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-15T18-47-00.908192.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-15T18-47-00.908192.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-15T18-47-00.908192.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-15T18-47-00.908192.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-15T18-47-00.908192.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-15T18-47-00.908192.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-15T18-47-00.908192.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-15T18-47-00.908192.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-15T18-47-00.908192.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-15T18-47-00.908192.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-15T18-47-00.908192.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-15T18-47-00.908192.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-15T18-47-00.908192.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-15T18-47-00.908192.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-15T18-47-00.908192.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-15T18-47-00.908192.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-15T18-47-00.908192.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-15T18-47-00.908192.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-15T18-47-00.908192.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-15T18-47-00.908192.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-15T18-47-00.908192.parquet", 
"**/details_harness|hendrycksTest-prehistory|5_2024-01-15T18-47-00.908192.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-15T18-47-00.908192.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-15T18-47-00.908192.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-15T18-47-00.908192.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-15T18-47-00.908192.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-15T18-47-00.908192.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-15T18-47-00.908192.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-15T18-47-00.908192.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-15T18-47-00.908192.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-15T18-47-00.908192.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-15T18-47-00.908192.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2024_01_15T18_47_00.908192", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-15T18-47-00.908192.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-15T18-47-00.908192.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2024_01_15T18_47_00.908192", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-15T18-47-00.908192.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-15T18-47-00.908192.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2024_01_15T18_47_00.908192", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-15T18-47-00.908192.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-15T18-47-00.908192.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2024_01_15T18_47_00.908192", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-15T18-47-00.908192.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-15T18-47-00.908192.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2024_01_15T18_47_00.908192", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-15T18-47-00.908192.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-15T18-47-00.908192.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2024_01_15T18_47_00.908192", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-15T18-47-00.908192.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-15T18-47-00.908192.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2024_01_15T18_47_00.908192", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-15T18-47-00.908192.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-15T18-47-00.908192.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2024_01_15T18_47_00.908192", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-15T18-47-00.908192.parquet"]}, 
{"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-15T18-47-00.908192.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2024_01_15T18_47_00.908192", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-15T18-47-00.908192.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-15T18-47-00.908192.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2024_01_15T18_47_00.908192", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-15T18-47-00.908192.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-15T18-47-00.908192.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2024_01_15T18_47_00.908192", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-15T18-47-00.908192.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-15T18-47-00.908192.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2024_01_15T18_47_00.908192", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-15T18-47-00.908192.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-15T18-47-00.908192.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2024_01_15T18_47_00.908192", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-15T18-47-00.908192.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-15T18-47-00.908192.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2024_01_15T18_47_00.908192", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-15T18-47-00.908192.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-15T18-47-00.908192.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2024_01_15T18_47_00.908192", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-15T18-47-00.908192.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-15T18-47-00.908192.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2024_01_15T18_47_00.908192", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-15T18-47-00.908192.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-15T18-47-00.908192.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2024_01_15T18_47_00.908192", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-15T18-47-00.908192.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-15T18-47-00.908192.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2024_01_15T18_47_00.908192", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-15T18-47-00.908192.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-15T18-47-00.908192.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2024_01_15T18_47_00.908192", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-15T18-47-00.908192.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-15T18-47-00.908192.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2024_01_15T18_47_00.908192", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-15T18-47-00.908192.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-15T18-47-00.908192.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2024_01_15T18_47_00.908192", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-15T18-47-00.908192.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-15T18-47-00.908192.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2024_01_15T18_47_00.908192", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-15T18-47-00.908192.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-15T18-47-00.908192.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2024_01_15T18_47_00.908192", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-15T18-47-00.908192.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-15T18-47-00.908192.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2024_01_15T18_47_00.908192", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-15T18-47-00.908192.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-15T18-47-00.908192.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2024_01_15T18_47_00.908192", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-15T18-47-00.908192.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-15T18-47-00.908192.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2024_01_15T18_47_00.908192", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-15T18-47-00.908192.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-15T18-47-00.908192.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2024_01_15T18_47_00.908192", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-15T18-47-00.908192.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-15T18-47-00.908192.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2024_01_15T18_47_00.908192", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-15T18-47-00.908192.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-15T18-47-00.908192.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2024_01_15T18_47_00.908192", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-15T18-47-00.908192.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-15T18-47-00.908192.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2024_01_15T18_47_00.908192", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-15T18-47-00.908192.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-15T18-47-00.908192.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2024_01_15T18_47_00.908192", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-15T18-47-00.908192.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-15T18-47-00.908192.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2024_01_15T18_47_00.908192", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-15T18-47-00.908192.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-15T18-47-00.908192.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2024_01_15T18_47_00.908192", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-15T18-47-00.908192.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-15T18-47-00.908192.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2024_01_15T18_47_00.908192", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-15T18-47-00.908192.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-15T18-47-00.908192.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2024_01_15T18_47_00.908192", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-15T18-47-00.908192.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-15T18-47-00.908192.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2024_01_15T18_47_00.908192", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-15T18-47-00.908192.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-15T18-47-00.908192.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2024_01_15T18_47_00.908192", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-15T18-47-00.908192.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-15T18-47-00.908192.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2024_01_15T18_47_00.908192", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-15T18-47-00.908192.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-15T18-47-00.908192.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2024_01_15T18_47_00.908192", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-15T18-47-00.908192.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-15T18-47-00.908192.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2024_01_15T18_47_00.908192", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-15T18-47-00.908192.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-15T18-47-00.908192.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2024_01_15T18_47_00.908192", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-15T18-47-00.908192.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-15T18-47-00.908192.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2024_01_15T18_47_00.908192", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-15T18-47-00.908192.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-15T18-47-00.908192.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2024_01_15T18_47_00.908192", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-15T18-47-00.908192.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-15T18-47-00.908192.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2024_01_15T18_47_00.908192", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-15T18-47-00.908192.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-15T18-47-00.908192.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2024_01_15T18_47_00.908192", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-15T18-47-00.908192.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-15T18-47-00.908192.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2024_01_15T18_47_00.908192", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-15T18-47-00.908192.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-15T18-47-00.908192.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2024_01_15T18_47_00.908192", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-15T18-47-00.908192.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-15T18-47-00.908192.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2024_01_15T18_47_00.908192", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-15T18-47-00.908192.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-15T18-47-00.908192.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2024_01_15T18_47_00.908192", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-01-15T18-47-00.908192.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-professional_law|5_2024-01-15T18-47-00.908192.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2024_01_15T18_47_00.908192", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-15T18-47-00.908192.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-15T18-47-00.908192.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2024_01_15T18_47_00.908192", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-15T18-47-00.908192.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-15T18-47-00.908192.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2024_01_15T18_47_00.908192", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-15T18-47-00.908192.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-15T18-47-00.908192.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2024_01_15T18_47_00.908192", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-15T18-47-00.908192.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-15T18-47-00.908192.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2024_01_15T18_47_00.908192", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-15T18-47-00.908192.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-15T18-47-00.908192.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2024_01_15T18_47_00.908192", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-15T18-47-00.908192.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-15T18-47-00.908192.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2024_01_15T18_47_00.908192", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-15T18-47-00.908192.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-15T18-47-00.908192.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2024_01_15T18_47_00.908192", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-15T18-47-00.908192.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-15T18-47-00.908192.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2024_01_15T18_47_00.908192", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-15T18-47-00.908192.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-15T18-47-00.908192.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2024_01_15T18_47_00.908192", "path": ["**/details_harness|winogrande|5_2024-01-15T18-47-00.908192.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2024-01-15T18-47-00.908192.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2024_01_15T18_47_00.908192", "path": ["results_2024-01-15T18-47-00.908192.parquet"]}, {"split": "latest", "path": 
["results_2024-01-15T18-47-00.908192.parquet"]}]}]} | 2024-01-15T18:49:39+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of CultriX/MergeTrix-7B
Dataset automatically created during the evaluation run of model CultriX/MergeTrix-7B on the Open LLM Leaderboard.
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
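```python
from datasets import load_dataset

data = load_dataset("open-llm-leaderboard/details_CultriX__MergeTrix-7B",
	"harness_winogrande_5",
	split="train")
```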
## Latest results
These are the latest results from run 2024-01-15T18:47:00.908192 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
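For reference, the aggregated "all" block of that results file reads:

```python
{
    "acc": 0.6547663148626479,
    "acc_stderr": 0.032063309005698364,
    "acc_norm": 0.6539809118606704,
    "acc_norm_stderr": 0.03273532890766587,
    "mc1": 0.5091799265605875,
    "mc1_stderr": 0.017500550724819756,
    "mc2": 0.6627204029237665,
    "mc2_stderr": 0.015270623317149531
}
```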
## Dataset Details
### Dataset Description
- Curated by:
- Funded by [optional]:
- Shared by [optional]:
- Language(s) (NLP):
- License:
### Dataset Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Out-of-Scope Use
## Dataset Structure
## Dataset Creation
### Curation Rationale
### Source Data
#### Data Collection and Processing
#### Who are the source data producers?
### Annotations [optional]
#### Annotation process
#### Who are the annotators?
#### Personal and Sensitive Information
## Bias, Risks, and Limitations
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Dataset Card Authors [optional]
## Dataset Card Contact
| [
"# Dataset Card for Evaluation run of CultriX/MergeTrix-7B\n\n\n\nDataset automatically created during the evaluation run of model CultriX/MergeTrix-7B on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2024-01-15T18:47:00.908192(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of CultriX/MergeTrix-7B\n\n\n\nDataset automatically created during the evaluation run of model CultriX/MergeTrix-7B on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2024-01-15T18:47:00.908192(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] | [
6,
181,
68,
4,
40,
29,
3,
4,
9,
6,
5,
7,
4,
7,
10,
9,
5,
9,
8,
10,
46,
8,
7,
10,
5
] | [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of CultriX/MergeTrix-7B\n\n\n\nDataset automatically created during the evaluation run of model CultriX/MergeTrix-7B on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2024-01-15T18:47:00.908192(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):## Dataset Details### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Out-of-Scope Use## Dataset Structure## Dataset Creation### Curation Rationale### Source Data#### Data Collection and Processing#### Who are the source data producers?### Annotations [optional]#### Annotation process#### Who are the annotators?#### Personal and Sensitive Information## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Dataset Card Authors [optional]## Dataset Card Contact"
] |
d529474da76d6bff69cfa4acf5a36fc802102061 |
# Dataset of satou_masuki/佐藤ますき (BanG Dream! Dai 2-ki)
This is the dataset of satou_masuki/佐藤ますき (BanG Dream! Dai 2-ki), containing 63 images and their tags.
The core tags of this character are `blonde_hair, short_hair, yellow_eyes, bangs`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:------------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 63 | 59.87 MiB | [Download](https://huggingface.co/datasets/CyberHarem/satou_masuki_bangdreamdai2ki/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 63 | 45.40 MiB | [Download](https://huggingface.co/datasets/CyberHarem/satou_masuki_bangdreamdai2ki/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 126 | 83.59 MiB | [Download](https://huggingface.co/datasets/CyberHarem/satou_masuki_bangdreamdai2ki/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 63 | 56.69 MiB | [Download](https://huggingface.co/datasets/CyberHarem/satou_masuki_bangdreamdai2ki/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 126 | 103.51 MiB | [Download](https://huggingface.co/datasets/CyberHarem/satou_masuki_bangdreamdai2ki/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/satou_masuki_bangdreamdai2ki',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
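This snippet assumes the `waifuc` and `huggingface_hub` packages are installed (e.g. via `pip install waifuc huggingface_hub`); see the waifuc installation tutorial linked above for other installation options.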
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 13 |  |  |  |  |  | 1girl, crop_top, solo, midriff, looking_at_viewer, holding, navel, shirt, fingerless_gloves, earrings, open_jacket, simple_background, white_background, black_gloves, black_skirt, breasts, drumsticks, full_body, long_sleeves, red_jacket, smile |
| 1 | 7 |  |  |  |  |  | 1girl, looking_at_viewer, solo, jacket, shirt, simple_background, upper_body, white_background, blush |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | crop_top | solo | midriff | looking_at_viewer | holding | navel | shirt | fingerless_gloves | earrings | open_jacket | simple_background | white_background | black_gloves | black_skirt | breasts | drumsticks | full_body | long_sleeves | red_jacket | smile | jacket | upper_body | blush |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-----------|:-------|:----------|:--------------------|:----------|:--------|:--------|:--------------------|:-----------|:--------------|:--------------------|:-------------------|:---------------|:--------------|:----------|:-------------|:------------|:---------------|:-------------|:--------|:---------|:-------------|:--------|
| 0 | 13 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | |
| 1 | 7 |  |  |  |  |  | X | | X | | X | | | X | | | | X | X | | | | | | | | | X | X | X |
| CyberHarem/satou_masuki_bangdreamdai2ki | [
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] | 2024-01-15T18:56:16+00:00 | {"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]} | 2024-01-15T19:11:23+00:00 | [] | [] | TAGS
---

`3f26e35ccf2fbaa08cf6b76ecd302c512d908c15`

# Dataset Card for "inep"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

| Repository | Created | Last Modified | Tags |
|:-----------|:--------|:--------------|:-----|
| `loremipsum3658/inep` | 2024-01-15T19:11:14+00:00 | 2024-01-15T19:11:27+00:00 | region:us |

Dataset structure (from the repository metadata):

| Feature | Type |
|:--------|:-----|
| `nup` | string |
| `data` | string |
| `titulo` | string |
| `andamento` | string |
| `classificacao_andamento` | string |
| `__index_level_0__` | int64 |

| Split | Examples | Size (bytes) |
|:------|---------:|-------------:|
| train | 1044 | 12920964 |
| test | 224 | 2930216 |
| validation | 224 | 2824516 |

Download size: 1990058 bytes; total dataset size: 18675696 bytes.
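Since the card itself is still a stub, here is a minimal loading sketch based only on the split and column names recorded in the metadata above:

```python
from datasets import load_dataset

# splits per the metadata: train (1044), test (224), validation (224)
dataset = load_dataset("loremipsum3658/inep", split="train")

print(dataset)               # columns: nup, data, titulo, andamento, classificacao_andamento, __index_level_0__
print(dataset[0]["titulo"])  # inspect one record
```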
---

`1bd9cd3c63f1049166cd9faad78e42762afed082`
# Dataset of kisaragi_chihaya/如月千早/키사라기치하야 (THE iDOLM@STER)
This is the dataset of kisaragi_chihaya/如月千早/키사라기치하야 (THE iDOLM@STER), containing 500 images and their tags.
The core tags of this character are `blue_hair, long_hair, brown_eyes`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:-------------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 502.87 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kisaragi_chihaya_theidolmster/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 343.35 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kisaragi_chihaya_theidolmster/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1128 | 677.23 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kisaragi_chihaya_theidolmster/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 462.20 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kisaragi_chihaya_theidolmster/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1128 | 874.72 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kisaragi_chihaya_theidolmster/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/kisaragi_chihaya_theidolmster',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
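The `IMG+TXT` packages in the table above ship each image next to a same-named `.txt` file of tags. Below is a minimal sketch for downloading and reading the 800px variant; only the `hf_hub_download` usage is taken from the snippet above, while the exact file pairing and image extensions are assumptions:

```python
import zipfile
from pathlib import Path

from huggingface_hub import hf_hub_download
from PIL import Image

# download and extract the 800px IMG+TXT package
zip_file = hf_hub_download(
    repo_id='CyberHarem/kisaragi_chihaya_theidolmster',
    repo_type='dataset',
    filename='dataset-800.zip',
)
dataset_dir = Path('dataset_800')
dataset_dir.mkdir(exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# assumed layout: every image sits next to a same-named .txt file of comma-separated tags
for txt_path in sorted(dataset_dir.rglob('*.txt')):
    tags = [t.strip() for t in txt_path.read_text(encoding='utf-8').split(',')]
    for ext in ('.png', '.jpg', '.webp'):
        img_path = txt_path.with_suffix(ext)
        if img_path.exists():
            with Image.open(img_path) as im:
                print(img_path.name, im.size, tags[:5])
            break
```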
## List of Clusters
List of tag clustering results; some outfits may be mined from these clusters.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 10 |  |  |  |  |  | 1girl, solo, bare_shoulders, blush, twintails, looking_at_viewer, open_mouth, mini_top_hat, smile, white_gloves, hair_ribbon, striped_thighhighs, heart, white_dress |
| 1 | 16 |  |  |  |  |  | 1girl, solo, blush, smile, looking_at_viewer |
| 2 | 6 |  |  |  |  |  | 1girl, school_uniform, skirt, solo, smile, blazer, necktie, pantyhose |
| 3 | 11 |  |  |  |  |  | 1girl, belt, midriff, navel, solo, open_mouth, necklace, wrist_cuffs, skirt, smile, blush, hand_on_own_chest, cross |
| 4 | 19 |  |  |  |  |  | 1girl, shiny_hair, solo, blush, looking_at_viewer, bangs, upper_body, long_sleeves, very_long_hair, white_shirt, collared_shirt, dress_shirt, simple_background, white_background, closed_mouth, open_mouth, straight_hair, :d, floating_hair, wing_collar, collarbone |
| 5 | 13 |  |  |  |  |  | 1girl, dress, smile, solo, open_mouth, bare_shoulders, hair_ornament, elbow_gloves, flower, looking_at_viewer |
| 6 | 7 |  |  |  |  |  | 1girl, choker, smile, solo, hair_flower, skirt, blue_thighhighs, looking_at_viewer, mismatched_legwear, open_mouth, blush, microphone, white_background |
| 7 | 11 |  |  |  |  |  | 1girl, solo, blush, cat_ears, school_swimsuit, looking_at_viewer, tail, open_mouth, paw_gloves, cat_paws, simple_background, white_background, white_one-piece_swimsuit |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | solo | bare_shoulders | blush | twintails | looking_at_viewer | open_mouth | mini_top_hat | smile | white_gloves | hair_ribbon | striped_thighhighs | heart | white_dress | school_uniform | skirt | blazer | necktie | pantyhose | belt | midriff | navel | necklace | wrist_cuffs | hand_on_own_chest | cross | shiny_hair | bangs | upper_body | long_sleeves | very_long_hair | white_shirt | collared_shirt | dress_shirt | simple_background | white_background | closed_mouth | straight_hair | :d | floating_hair | wing_collar | collarbone | dress | hair_ornament | elbow_gloves | flower | choker | hair_flower | blue_thighhighs | mismatched_legwear | microphone | cat_ears | school_swimsuit | tail | paw_gloves | cat_paws | white_one-piece_swimsuit |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------|:-----------------|:--------|:------------|:--------------------|:-------------|:---------------|:--------|:---------------|:--------------|:---------------------|:--------|:--------------|:-----------------|:--------|:---------|:----------|:------------|:-------|:----------|:--------|:-----------|:--------------|:--------------------|:--------|:-------------|:--------|:-------------|:---------------|:-----------------|:--------------|:-----------------|:--------------|:--------------------|:-------------------|:---------------|:----------------|:-----|:----------------|:--------------|:-------------|:--------|:----------------|:---------------|:---------|:---------|:--------------|:------------------|:---------------------|:-------------|:-----------|:------------------|:-------|:-------------|:-----------|:---------------------------|
| 0 | 10 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 16 |  |  |  |  |  | X | X | | X | | X | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 6 |  |  |  |  |  | X | X | | | | | | | X | | | | | | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 11 |  |  |  |  |  | X | X | | X | | | X | | X | | | | | | | X | | | | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 19 |  |  |  |  |  | X | X | | X | | X | X | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | |
| 5 | 13 |  |  |  |  |  | X | X | X | | | X | X | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | | | | | | | | | | | |
| 6 | 7 |  |  |  |  |  | X | X | | X | | X | X | | X | | | | | | | X | | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | X | X | X | X | X | | | | | | |
| 7 | 11 |  |  |  |  |  | X | X | | X | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | | | | | | | | | | | | | | | | X | X | X | X | X | X |
| Repository | Created | Last Modified | License | Tags |
|:-----------|:--------|:--------------|:--------|:-----|
| `CyberHarem/kisaragi_chihaya_theidolmster` | 2024-01-15T19:12:12+00:00 | 2024-01-15T21:01:14+00:00 | mit | task_categories:text-to-image, size_categories:n<1K, art, not-for-all-audiences, region:us |
---

`037c427b835b11669b64b4f4653096052647555f`
# Dataset of amami_haruka/天海春香 (THE iDOLM@STER)
This is the dataset of amami_haruka/天海春香 (THE iDOLM@STER), containing 500 images and their tags.
The core tags of this character are `brown_hair, short_hair, green_eyes, ribbon, hair_ribbon`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:---------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 592.77 MiB | [Download](https://huggingface.co/datasets/CyberHarem/amami_haruka_theidolmster/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 360.26 MiB | [Download](https://huggingface.co/datasets/CyberHarem/amami_haruka_theidolmster/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1169 | 747.26 MiB | [Download](https://huggingface.co/datasets/CyberHarem/amami_haruka_theidolmster/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 529.29 MiB | [Download](https://huggingface.co/datasets/CyberHarem/amami_haruka_theidolmster/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1169 | 1.01 GiB | [Download](https://huggingface.co/datasets/CyberHarem/amami_haruka_theidolmster/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/amami_haruka_theidolmster',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
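Building on the iteration above, here is a small sketch that keeps only images tagged `solo`. The membership test assumes `item.meta['tags']` is either a list of tag names or a tag-to-score mapping, either of which supports `in`:

```python
from waifuc.source import LocalSource

# reuse the dataset_dir extracted in the previous snippet
source = LocalSource('dataset_dir')
solo_items = [item for item in source if 'solo' in item.meta['tags']]
print(f'{len(solo_items)} images tagged "solo"')
```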
## List of Clusters
List of tag clustering results; some outfits may be mined from these clusters.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 6 |  |  |  |  |  | 1girl, choker, open_mouth, smile, solo, blush, hair_flower, sweat, closed_eyes, microphone |
| 1 | 6 |  |  |  |  |  | 1girl, blush, choker, hair_flower, open_mouth, skirt, solo, thighhighs, :d, looking_at_viewer, microphone, mismatched_legwear |
| 2 | 6 |  |  |  |  |  | 1girl, open_mouth, smile, solo, hair_bow, dress |
| 3 | 9 |  |  |  |  |  | 1girl, one_eye_closed, smile, solo, open_mouth, ;d, skirt, star_(symbol), v |
| 4 | 6 |  |  |  |  |  | 1girl, blush, looking_at_viewer, solo, white_background, bow, open_mouth, red_ribbon, simple_background, plaid_skirt, short_sleeves, :d, bangs, blue_shirt, pleated_skirt, school_uniform |
| 5 | 7 |  |  |  |  |  | 1girl, neck_ribbon, red_ribbon, simple_background, white_background, bangs, long_sleeves, looking_at_viewer, open_mouth, pleated_skirt, school_uniform, :d, blush, hair_bow, solo, sweater_vest, white_shirt, blue_skirt, red_bow, collared_shirt |
| 6 | 10 |  |  |  |  |  | 1girl, solo, bangs, blush, cleavage, looking_at_viewer, medium_breasts, navel, open_mouth, white_bikini, collarbone, day, outdoors, blue_sky, cloud, ocean, water, :d, cowboy_shot, frilled_bikini, jewelry, wet |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | choker | open_mouth | smile | solo | blush | hair_flower | sweat | closed_eyes | microphone | skirt | thighhighs | :d | looking_at_viewer | mismatched_legwear | hair_bow | dress | one_eye_closed | ;d | star_(symbol) | v | white_background | bow | red_ribbon | simple_background | plaid_skirt | short_sleeves | bangs | blue_shirt | pleated_skirt | school_uniform | neck_ribbon | long_sleeves | sweater_vest | white_shirt | blue_skirt | red_bow | collared_shirt | cleavage | medium_breasts | navel | white_bikini | collarbone | day | outdoors | blue_sky | cloud | ocean | water | cowboy_shot | frilled_bikini | jewelry | wet |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:---------|:-------------|:--------|:-------|:--------|:--------------|:--------|:--------------|:-------------|:--------|:-------------|:-----|:--------------------|:---------------------|:-----------|:--------|:-----------------|:-----|:----------------|:----|:-------------------|:------|:-------------|:--------------------|:--------------|:----------------|:--------|:-------------|:----------------|:-----------------|:--------------|:---------------|:---------------|:--------------|:-------------|:----------|:-----------------|:-----------|:-----------------|:--------|:---------------|:-------------|:------|:-----------|:-----------|:--------|:--------|:--------|:--------------|:-----------------|:----------|:------|
| 0 | 6 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 6 |  |  |  |  |  | X | X | X | | X | X | X | | | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 6 |  |  |  |  |  | X | | X | X | X | | | | | | | | | | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 9 |  |  |  |  |  | X | | X | X | X | | | | | | X | | | | | | | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 6 |  |  |  |  |  | X | | X | | X | X | | | | | | | X | X | | | | | | | | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | |
| 5 | 7 |  |  |  |  |  | X | | X | | X | X | | | | | | | X | X | | X | | | | | | X | | X | X | | | X | | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | |
| 6 | 10 |  |  |  |  |  | X | | X | | X | X | | | | | | | X | X | | | | | | | | | | | | | | X | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
| Repository | Created | Last Modified | License | Tags |
|:-----------|:--------|:--------------|:--------|:-----|
| `CyberHarem/amami_haruka_theidolmster` | 2024-01-15T19:12:20+00:00 | 2024-01-15T21:04:03+00:00 | mit | task_categories:text-to-image, size_categories:n<1K, art, not-for-all-audiences, region:us |
---

`c455f26bc8ff05b7da1720b2c0b0b561c458972f`
# Dataset of nyubara_reona/鳰原令王那 (BanG Dream! Dai 2-ki)
This is the dataset of nyubara_reona/鳰原令王那 (BanG Dream! Dai 2-ki), containing 50 images and their tags.
The core tags of this character are `multicolored_hair, bangs, long_hair, twintails, two-tone_hair, blunt_bangs, pink_hair, hair_ornament, blue_hair, sidelocks, red_eyes`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:-------------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 50 | 72.51 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nyubara_reona_bangdreamdai2ki/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 50 | 38.88 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nyubara_reona_bangdreamdai2ki/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 119 | 85.24 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nyubara_reona_bangdreamdai2ki/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 50 | 63.35 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nyubara_reona_bangdreamdai2ki/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 119 | 125.23 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nyubara_reona_bangdreamdai2ki/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/nyubara_reona_bangdreamdai2ki',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
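To see which outfit tags dominate the clusters below, one can tally tag frequencies over the raw dataset. A sketch reusing the `dataset_dir` extracted above; `list(...)` yields the tag names whether `item.meta['tags']` is a list or a tag-to-score dict:

```python
from collections import Counter

from waifuc.source import LocalSource

# count how often each tag occurs across all items
counter = Counter()
for item in LocalSource('dataset_dir'):
    counter.update(list(item.meta['tags']))

print(counter.most_common(10))
```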
## List of Clusters
List of tag clustering results; some outfits may be mined from these clusters.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 9 |  |  |  |  |  | 1girl, hair_bobbles, solo, looking_at_viewer, upper_body, blush, long_sleeves, open_mouth, jewelry, purple_shirt, white_background, :d, heart, pink_eyes |
| 1 | 12 |  |  |  |  |  | long_sleeves, 1girl, solo, looking_at_viewer, hair_bobbles, open_mouth, pink_skirt, thighhighs, :d, blush, frilled_skirt, simple_background, very_long_hair, white_background, blue_shirt, collarbone, white_jacket, bracelet, full_body, open_jacket, purple_shirt, shoes, star_(symbol), upper_teeth_only |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | hair_bobbles | solo | looking_at_viewer | upper_body | blush | long_sleeves | open_mouth | jewelry | purple_shirt | white_background | :d | heart | pink_eyes | pink_skirt | thighhighs | frilled_skirt | simple_background | very_long_hair | blue_shirt | collarbone | white_jacket | bracelet | full_body | open_jacket | shoes | star_(symbol) | upper_teeth_only |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:---------------|:-------|:--------------------|:-------------|:--------|:---------------|:-------------|:----------|:---------------|:-------------------|:-----|:--------|:------------|:-------------|:-------------|:----------------|:--------------------|:-----------------|:-------------|:-------------|:---------------|:-----------|:------------|:--------------|:--------|:----------------|:-------------------|
| 0 | 9 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | |
| 1 | 12 |  |  |  |  |  | X | X | X | X | | X | X | X | | X | X | X | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
| Repository | Created | Last Modified | License | Tags |
|:-----------|:--------|:--------------|:--------|:-----|
| `CyberHarem/nyubara_reona_bangdreamdai2ki` | 2024-01-15T19:12:34+00:00 | 2024-01-15T19:22:51+00:00 | mit | task_categories:text-to-image, size_categories:n<1K, art, not-for-all-audiences, region:us |
---

`0b20fc6b6997e84f3cd4e139278b8a2dce298891`
# Dataset of kikuchi_makoto/菊地真/키쿠치마코토 (THE iDOLM@STER)
This is the dataset of kikuchi_makoto/菊地真/키쿠치마코토 (THE iDOLM@STER), containing 500 images and their tags.
The core tags of this character are `short_hair, black_hair, antenna_hair, black_eyes`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:-----------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 524.72 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kikuchi_makoto_theidolmster/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 348.23 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kikuchi_makoto_theidolmster/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1141 | 698.61 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kikuchi_makoto_theidolmster/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 481.64 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kikuchi_makoto_theidolmster/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1141 | 918.18 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kikuchi_makoto_theidolmster/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/kikuchi_makoto_theidolmster',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
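For downstream tooling it is often handy to flatten the metadata into a manifest. Here is a sketch that writes one CSV row per item, reusing the `dataset_dir` extracted above; it assumes iterating `item.meta['tags']` yields tag-name strings:

```python
import csv

from waifuc.source import LocalSource

# write a filename/tags manifest for the extracted raw dataset
with open('manifest.csv', 'w', newline='', encoding='utf-8') as f:
    writer = csv.writer(f)
    writer.writerow(['filename', 'tags'])
    for item in LocalSource('dataset_dir'):
        writer.writerow([item.meta['filename'], ', '.join(item.meta['tags'])])
```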
## List of Clusters
List of tag clustering results; some outfits may be mined from these clusters.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 5 |  |  |  |  |  | 1girl, blush, hoodie, smile, solo, closed_eyes, jacket, open_mouth, pants |
| 1 | 6 |  |  |  |  |  | 1girl, solo, teddy_bear, purple_eyes, smile, looking_at_viewer, blush, hat, hoodie |
| 2 | 11 |  |  |  |  |  | 1girl, solo, purple_eyes, smile, arms_behind_back, sundress, white_dress |
| 3 | 18 |  |  |  |  |  | 1girl, solo, school_uniform, smile, brown_eyes, necktie, blush, open_mouth, brown_hair, plaid_skirt |
| 4 | 8 |  |  |  |  |  | 1girl, solo, sundress, day, open_mouth, cloud, flower, white_dress, looking_at_viewer, straw_hat, :d, bare_shoulders, blush, collarbone, hand_on_headwear, purple_eyes, sleeveless_dress, blue_sky, jewelry, ocean, outdoors, sun_hat |
| 5 | 5 |  |  |  |  |  | 1girl, bangs, collared_shirt, hair_between_eyes, looking_at_viewer, solo, white_shirt, short_sleeves, upper_body, blush, closed_mouth, simple_background, smile, striped_necktie, white_background, blue_necktie, school_uniform |
| 6 | 7 |  |  |  |  |  | 1girl, bare_shoulders, looking_at_viewer, solo, sleeveless_dress, smile, bangs, blush, collarbone, hair_between_eyes, closed_mouth, small_breasts, upper_body, jewelry, purple_eyes, simple_background, white_background, white_dress |
| 7 | 9 |  |  |  |  |  | 1girl, solo, bare_shoulders, flower, smile, elbow_gloves, wedding_dress, bridal_veil, blue_eyes, hair_ornament, bouquet, bride, choker, necklace, open_mouth |
| 8 | 8 |  |  |  |  |  | 1girl, navel, smile, solo, midriff, belt, pants, armpits, blue_eyes, open_mouth |
| 9 | 7 |  |  |  |  |  | choker, hair_flower, open_mouth, 1girl, blue_thighhighs, mismatched_legwear, rose, solo, :d, red_flower, zettai_ryouiki, ;d, black_skirt, microphone, one_eye_closed |
| 10 | 14 |  |  |  |  |  | 1girl, navel, blush, nipples, open_mouth, 1boy, hetero, nude, smile, solo_focus, penis, cat_ears, cat_tail, censored, sex, small_breasts, sweat, vaginal, cat_paws, missionary, neck_bell, on_back, paw_gloves, pillow, purple_eyes, pussy |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | blush | hoodie | smile | solo | closed_eyes | jacket | open_mouth | pants | teddy_bear | purple_eyes | looking_at_viewer | hat | arms_behind_back | sundress | white_dress | school_uniform | brown_eyes | necktie | brown_hair | plaid_skirt | day | cloud | flower | straw_hat | :d | bare_shoulders | collarbone | hand_on_headwear | sleeveless_dress | blue_sky | jewelry | ocean | outdoors | sun_hat | bangs | collared_shirt | hair_between_eyes | white_shirt | short_sleeves | upper_body | closed_mouth | simple_background | striped_necktie | white_background | blue_necktie | small_breasts | elbow_gloves | wedding_dress | bridal_veil | blue_eyes | hair_ornament | bouquet | bride | choker | necklace | navel | midriff | belt | armpits | hair_flower | blue_thighhighs | mismatched_legwear | rose | red_flower | zettai_ryouiki | ;d | black_skirt | microphone | one_eye_closed | nipples | 1boy | hetero | nude | solo_focus | penis | cat_ears | cat_tail | censored | sex | sweat | vaginal | cat_paws | missionary | neck_bell | on_back | paw_gloves | pillow | pussy |
|----:|----------:|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:--------|:--------|:---------|:--------|:-------|:--------------|:---------|:-------------|:--------|:-------------|:--------------|:--------------------|:------|:-------------------|:-----------|:--------------|:-----------------|:-------------|:----------|:-------------|:--------------|:------|:--------|:---------|:------------|:-----|:-----------------|:-------------|:-------------------|:-------------------|:-----------|:----------|:--------|:-----------|:----------|:--------|:-----------------|:--------------------|:--------------|:----------------|:-------------|:---------------|:--------------------|:------------------|:-------------------|:---------------|:----------------|:---------------|:----------------|:--------------|:------------|:----------------|:----------|:--------|:---------|:-----------|:--------|:----------|:-------|:----------|:--------------|:------------------|:---------------------|:-------|:-------------|:-----------------|:-----|:--------------|:-------------|:-----------------|:----------|:-------|:---------|:-------|:-------------|:--------|:-----------|:-----------|:-----------|:------|:--------|:----------|:-----------|:-------------|:------------|:----------|:-------------|:---------|:--------|
| 0 | 5 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 6 |  |  |  |  |  | X | X | X | X | X | | | | | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 11 |  |  |  |  |  | X | | | X | X | | | | | | X | | | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 18 |  |  |  |  |  | X | X | | X | X | | | X | | | | | | | | | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 8 |  |  |  |  |  | X | X | | | X | | | X | | | X | X | | | X | X | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 5 | 5 |  |  |  |  |  | X | X | | X | X | | | | | | | X | | | | | X | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 6 | 7 |  |  |  |  |  | X | X | | X | X | | | | | | X | X | | | | X | | | | | | | | | | | X | X | | X | | X | | | | X | | X | | | X | X | X | | X | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 7 | 9 |  |  |  |  |  | X | | | X | X | | | X | | | | | | | | | | | | | | | | X | | | X | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 8 | 8 |  |  |  |  |  | X | | | X | X | | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 9 | 7 |  |  |  |  |  | X | | | | X | | | X | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | |
| 10 | 14 |  |  |  |  |  | X | X | | X | | | | X | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | X | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
| Repository | Created | Last Modified | License | Tags |
|:-----------|:--------|:--------------|:--------|:-----|
| `CyberHarem/kikuchi_makoto_theidolmster` | 2024-01-15T19:28:31+00:00 | 2024-01-15T21:08:58+00:00 | mit | task_categories:text-to-image, size_categories:n<1K, art, not-for-all-audiences, region:us |
---

`1c3c5a44ff87e02e81111c2022cb224d81e8a0ce`
# Dataset of hoshii_miki/星井美希/호시이미키 (THE iDOLM@STER)
This is the dataset of hoshii_miki/星井美希/호시이미키 (THE iDOLM@STER), containing 500 images and their tags.
The core tags of this character are `blonde_hair, long_hair, green_eyes, ahoge, breasts`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:--------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 523.12 MiB | [Download](https://huggingface.co/datasets/CyberHarem/hoshii_miki_theidolmster/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 352.37 MiB | [Download](https://huggingface.co/datasets/CyberHarem/hoshii_miki_theidolmster/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1116 | 702.99 MiB | [Download](https://huggingface.co/datasets/CyberHarem/hoshii_miki_theidolmster/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 484.66 MiB | [Download](https://huggingface.co/datasets/CyberHarem/hoshii_miki_theidolmster/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1116 | 912.39 MiB | [Download](https://huggingface.co/datasets/CyberHarem/hoshii_miki_theidolmster/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/hoshii_miki_theidolmster',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
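If a trainer expects a flat folder of normalized images, the items can be re-exported as PNG. A sketch reusing the `dataset_dir` extracted above and assuming `item.image` is a PIL image, as the `print` in the snippet suggests:

```python
from pathlib import Path

from waifuc.source import LocalSource

# re-export every item as PNG into a flat folder, keeping the original stem
out_dir = Path('export')
out_dir.mkdir(exist_ok=True)
for item in LocalSource('dataset_dir'):
    stem = Path(item.meta['filename']).stem
    item.image.save(out_dir / f'{stem}.png')
```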
## List of Clusters
List of tag clustering results; some outfits may be mined from these clusters.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 6 |  |  |  |  |  | 1girl, solo, midriff, navel, skirt, smile, open_mouth, thighhighs |
| 1 | 8 |  |  |  |  |  | 1girl, cleavage, medium_breasts, navel, single_leg_pantyhose, smile, solo, fishnet_pantyhose, midriff, belly_chain, jacket, open_mouth, pink_shorts, necklace, one_eye_closed, star_(symbol), yellow_bra |
| 2 | 13 |  |  |  |  |  | 1girl, open_mouth, smile, solo, blush, ;d, one_eye_closed, star_(symbol) |
| 3 | 10 |  |  |  |  |  | 1girl, smile, solo, blush, looking_at_viewer, simple_background, necklace, open_mouth, white_background |
| 4 | 5 |  |  |  |  |  | 1girl, flower, solo, elbow_gloves, smile, wedding_dress, boots, bridal_veil, hair_ornament, open_mouth, white_dress |
| 5 | 6 |  |  |  |  |  | 1girl, plaid_skirt, school_uniform, solo, smile, open_mouth, star_(symbol), blush, necktie |
| 6 | 7 |  |  |  |  |  | 1girl, cleavage, smile, solo, medium_breasts, open_mouth, day, navel, blush, looking_at_viewer, side-tie_bikini_bottom, sky, beach, cloud, green_bikini, outdoors, wet |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | solo | midriff | navel | skirt | smile | open_mouth | thighhighs | cleavage | medium_breasts | single_leg_pantyhose | fishnet_pantyhose | belly_chain | jacket | pink_shorts | necklace | one_eye_closed | star_(symbol) | yellow_bra | blush | ;d | looking_at_viewer | simple_background | white_background | flower | elbow_gloves | wedding_dress | boots | bridal_veil | hair_ornament | white_dress | plaid_skirt | school_uniform | necktie | day | side-tie_bikini_bottom | sky | beach | cloud | green_bikini | outdoors | wet |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------|:----------|:--------|:--------|:--------|:-------------|:-------------|:-----------|:-----------------|:-----------------------|:--------------------|:--------------|:---------|:--------------|:-----------|:-----------------|:----------------|:-------------|:--------|:-----|:--------------------|:--------------------|:-------------------|:---------|:---------------|:----------------|:--------|:--------------|:----------------|:--------------|:--------------|:-----------------|:----------|:------|:-------------------------|:------|:--------|:--------|:---------------|:-----------|:------|
| 0 | 6 |  |  |  |  |  | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 8 |  |  |  |  |  | X | X | X | X | | X | X | | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 13 |  |  |  |  |  | X | X | | | | X | X | | | | | | | | | | X | X | | X | X | | | | | | | | | | | | | | | | | | | | | |
| 3 | 10 |  |  |  |  |  | X | X | | | | X | X | | | | | | | | | X | | | | X | | X | X | X | | | | | | | | | | | | | | | | | | |
| 4 | 5 |  |  |  |  |  | X | X | | | | X | X | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | | | | | | | | | | | |
| 5 | 6 |  |  |  |  |  | X | X | | | | X | X | | | | | | | | | | | X | | X | | | | | | | | | | | | X | X | X | | | | | | | | |
| 6 | 7 |  |  |  |  |  | X | X | | X | | X | X | | X | X | | | | | | | | | | X | | X | | | | | | | | | | | | | X | X | X | X | X | X | X | X |
| Repository | Created | Last Modified | License | Tags |
|:-----------|:--------|:--------------|:--------|:-----|
| `CyberHarem/hoshii_miki_theidolmster` | 2024-01-15T19:28:33+00:00 | 2024-01-15T20:55:26+00:00 | mit | task_categories:text-to-image, size_categories:n<1K, art, not-for-all-audiences, region:us |
---

`7df9cf55cc73441c534cf72100ce2e9c5dcfe919`
# Dataset of ganaha_hibiki/我那覇響 (THE iDOLM@STER)
This is the dataset of ganaha_hibiki/我那覇響 (THE iDOLM@STER), containing 500 images and their tags.
The core tags of this character are `long_hair, black_hair, ponytail, blue_eyes, fang, earrings, antenna_hair`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:----------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 507.62 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ganaha_hibiki_theidolmster/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 345.05 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ganaha_hibiki_theidolmster/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1167 | 695.57 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ganaha_hibiki_theidolmster/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 469.97 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ganaha_hibiki_theidolmster/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1167 | 893.08 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ganaha_hibiki_theidolmster/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/ganaha_hibiki_theidolmster',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
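A quick sanity check after extraction, reusing the `dataset_dir` from the snippet above, confirms the item count matches the package table:

```python
from waifuc.source import LocalSource

# count extracted items and peek at the first few filenames
items = list(LocalSource('dataset_dir'))
print(len(items), 'items extracted')
for item in items[:5]:
    print(item.meta['filename'])
```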
## List of Clusters
List of tag clustering results; some outfits may be mined from these clusters.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 6 |  |  |  |  |  | 1girl, animal, hamster, shorts, solo, open_mouth, sandals, bracelet, :d, hoop_earrings |
| 1 | 19 |  |  |  |  |  | 1girl, solo, open_mouth, hoop_earrings, smile, blush, bracelet, hair_ribbon |
| 2 | 16 |  |  |  |  |  | 1girl, open_mouth, solo, navel, bracelet, midriff, necklace, shorts, :d, belt |
| 3 | 8 |  |  |  |  |  | 1girl, dress, smile, solo, elbow_gloves, jewelry, bare_shoulders, open_mouth, ribbon |
| 4 | 10 |  |  |  |  |  | 1girl, smile, solo, cleavage, striped_bikini, high_ponytail, medium_breasts, open_mouth, hoop_earrings, looking_at_viewer, navel, water, barefoot, one_eye_closed |
| 5 | 9 |  |  |  |  |  | 1girl, hair_flower, smile, solo, kimono, open_mouth, new_year |
| 6 | 7 |  |  |  |  |  | 1girl, apron, open_mouth, maid_headdress, solo, blush, enmaided, smile, white_thighhighs |
| 7 | 6 |  |  |  |  |  | 1girl, bangs, blush, looking_at_viewer, solo, white_background, hair_between_eyes, hair_ribbon, short_shorts, simple_background, very_long_hair, collarbone, open_mouth, short_sleeves, :d, cleavage, medium_breasts, necklace, white_shirt |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | animal | hamster | shorts | solo | open_mouth | sandals | bracelet | :d | hoop_earrings | smile | blush | hair_ribbon | navel | midriff | necklace | belt | dress | elbow_gloves | jewelry | bare_shoulders | ribbon | cleavage | striped_bikini | high_ponytail | medium_breasts | looking_at_viewer | water | barefoot | one_eye_closed | hair_flower | kimono | new_year | apron | maid_headdress | enmaided | white_thighhighs | bangs | white_background | hair_between_eyes | short_shorts | simple_background | very_long_hair | collarbone | short_sleeves | white_shirt |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:---------|:----------|:---------|:-------|:-------------|:----------|:-----------|:-----|:----------------|:--------|:--------|:--------------|:--------|:----------|:-----------|:-------|:--------|:---------------|:----------|:-----------------|:---------|:-----------|:-----------------|:----------------|:-----------------|:--------------------|:--------|:-----------|:-----------------|:--------------|:---------|:-----------|:--------|:-----------------|:-----------|:-------------------|:--------|:-------------------|:--------------------|:---------------|:--------------------|:-----------------|:-------------|:----------------|:--------------|
| 0 | 6 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 19 |  |  |  |  |  | X | | | | X | X | | X | | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 16 |  |  |  |  |  | X | | | X | X | X | | X | X | | | | | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 8 |  |  |  |  |  | X | | | | X | X | | | | | X | | | | | | | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 10 |  |  |  |  |  | X | | | | X | X | | | | X | X | | | X | | | | | | | | | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | |
| 5 | 9 |  |  |  |  |  | X | | | | X | X | | | | | X | | | | | | | | | | | | | | | | | | | | X | X | X | | | | | | | | | | | | | |
| 6 | 7 |  |  |  |  |  | X | | | | X | X | | | | | X | X | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | | | | | | | | | |
| 7 | 6 |  |  |  |  |  | X | | | | X | X | | | X | | | X | X | | | X | | | | | | | X | | | X | X | | | | | | | | | | | X | X | X | X | X | X | X | X | X |
# Dataset Card for Evaluation run of maywell/PiVoT-SUS-RP
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [maywell/PiVoT-SUS-RP](https://huggingface.co/maywell/PiVoT-SUS-RP) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_maywell__PiVoT-SUS-RP",
"harness_winogrande_5",
split="train")
```
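To see which of the 63 task configurations are available, or to pin the most recent results explicitly, you can enumerate the configs and use the `latest` split (a small exploratory sketch; the exact row schema differs per task):
```python
from datasets import get_dataset_config_names, load_dataset

repo = "open-llm-leaderboard/details_maywell__PiVoT-SUS-RP"
configs = get_dataset_config_names(repo)
print(len(configs), configs[:5])

# the "latest" split of each configuration points at the most recent run
data = load_dataset(repo, "harness_gsm8k_5", split="latest")
print(data[0].keys())
```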
## Latest results
These are the [latest results from run 2024-01-15T19:33:37.820287](https://huggingface.co/datasets/open-llm-leaderboard/details_maywell__PiVoT-SUS-RP/blob/main/results_2024-01-15T19-33-37.820287.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and in the "latest" split for each eval):
```python
{
"all": {
"acc": 0.7586143927988348,
"acc_stderr": 0.028201585690631335,
"acc_norm": 0.7620604659128375,
"acc_norm_stderr": 0.028744091509590272,
"mc1": 0.3843329253365973,
"mc1_stderr": 0.017028707301245203,
"mc2": 0.5456609919227617,
"mc2_stderr": 0.014778672831926782
},
"harness|arc:challenge|25": {
"acc": 0.643344709897611,
"acc_stderr": 0.013998056902620192,
"acc_norm": 0.6655290102389079,
"acc_norm_stderr": 0.013787460322441375
},
"harness|hellaswag|10": {
"acc": 0.6398127862975503,
"acc_stderr": 0.00479073468370459,
"acc_norm": 0.8422624975104561,
"acc_norm_stderr": 0.0036374977089340356
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.41,
"acc_stderr": 0.04943110704237102,
"acc_norm": 0.41,
"acc_norm_stderr": 0.04943110704237102
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.7111111111111111,
"acc_stderr": 0.0391545063041425,
"acc_norm": 0.7111111111111111,
"acc_norm_stderr": 0.0391545063041425
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.9013157894736842,
"acc_stderr": 0.024270227737522715,
"acc_norm": 0.9013157894736842,
"acc_norm_stderr": 0.024270227737522715
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.79,
"acc_stderr": 0.040936018074033256,
"acc_norm": 0.79,
"acc_norm_stderr": 0.040936018074033256
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.8113207547169812,
"acc_stderr": 0.024079995130062253,
"acc_norm": 0.8113207547169812,
"acc_norm_stderr": 0.024079995130062253
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.9097222222222222,
"acc_stderr": 0.023964965777906935,
"acc_norm": 0.9097222222222222,
"acc_norm_stderr": 0.023964965777906935
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.5,
"acc_stderr": 0.050251890762960605,
"acc_norm": 0.5,
"acc_norm_stderr": 0.050251890762960605
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.62,
"acc_stderr": 0.04878317312145632,
"acc_norm": 0.62,
"acc_norm_stderr": 0.04878317312145632
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.45,
"acc_stderr": 0.04999999999999999,
"acc_norm": 0.45,
"acc_norm_stderr": 0.04999999999999999
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.7341040462427746,
"acc_stderr": 0.03368762932259431,
"acc_norm": 0.7341040462427746,
"acc_norm_stderr": 0.03368762932259431
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.5588235294117647,
"acc_stderr": 0.049406356306056595,
"acc_norm": 0.5588235294117647,
"acc_norm_stderr": 0.049406356306056595
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.81,
"acc_stderr": 0.039427724440366234,
"acc_norm": 0.81,
"acc_norm_stderr": 0.039427724440366234
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.7787234042553192,
"acc_stderr": 0.027136349602424063,
"acc_norm": 0.7787234042553192,
"acc_norm_stderr": 0.027136349602424063
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.5789473684210527,
"acc_stderr": 0.046446020912223177,
"acc_norm": 0.5789473684210527,
"acc_norm_stderr": 0.046446020912223177
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.7655172413793103,
"acc_stderr": 0.035306258743465914,
"acc_norm": 0.7655172413793103,
"acc_norm_stderr": 0.035306258743465914
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.6931216931216931,
"acc_stderr": 0.023752928712112136,
"acc_norm": 0.6931216931216931,
"acc_norm_stderr": 0.023752928712112136
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.5952380952380952,
"acc_stderr": 0.04390259265377563,
"acc_norm": 0.5952380952380952,
"acc_norm_stderr": 0.04390259265377563
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.55,
"acc_stderr": 0.05,
"acc_norm": 0.55,
"acc_norm_stderr": 0.05
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.8935483870967742,
"acc_stderr": 0.01754510295165663,
"acc_norm": 0.8935483870967742,
"acc_norm_stderr": 0.01754510295165663
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.6551724137931034,
"acc_stderr": 0.03344283744280458,
"acc_norm": 0.6551724137931034,
"acc_norm_stderr": 0.03344283744280458
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.83,
"acc_stderr": 0.0377525168068637,
"acc_norm": 0.83,
"acc_norm_stderr": 0.0377525168068637
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.8666666666666667,
"acc_stderr": 0.026544435312706463,
"acc_norm": 0.8666666666666667,
"acc_norm_stderr": 0.026544435312706463
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.9242424242424242,
"acc_stderr": 0.018852670234993093,
"acc_norm": 0.9242424242424242,
"acc_norm_stderr": 0.018852670234993093
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.9689119170984456,
"acc_stderr": 0.012525310625527033,
"acc_norm": 0.9689119170984456,
"acc_norm_stderr": 0.012525310625527033
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.8153846153846154,
"acc_stderr": 0.019671632413100295,
"acc_norm": 0.8153846153846154,
"acc_norm_stderr": 0.019671632413100295
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.43333333333333335,
"acc_stderr": 0.030213340289237927,
"acc_norm": 0.43333333333333335,
"acc_norm_stderr": 0.030213340289237927
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.8697478991596639,
"acc_stderr": 0.021863258494852118,
"acc_norm": 0.8697478991596639,
"acc_norm_stderr": 0.021863258494852118
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.48344370860927155,
"acc_stderr": 0.040802441856289715,
"acc_norm": 0.48344370860927155,
"acc_norm_stderr": 0.040802441856289715
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.9211009174311927,
"acc_stderr": 0.011558198113769578,
"acc_norm": 0.9211009174311927,
"acc_norm_stderr": 0.011558198113769578
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.6527777777777778,
"acc_stderr": 0.032468872436376486,
"acc_norm": 0.6527777777777778,
"acc_norm_stderr": 0.032468872436376486
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.9264705882352942,
"acc_stderr": 0.018318855850089678,
"acc_norm": 0.9264705882352942,
"acc_norm_stderr": 0.018318855850089678
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.9113924050632911,
"acc_stderr": 0.018498315206865384,
"acc_norm": 0.9113924050632911,
"acc_norm_stderr": 0.018498315206865384
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.7937219730941704,
"acc_stderr": 0.02715715047956382,
"acc_norm": 0.7937219730941704,
"acc_norm_stderr": 0.02715715047956382
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.8473282442748091,
"acc_stderr": 0.031545216720054725,
"acc_norm": 0.8473282442748091,
"acc_norm_stderr": 0.031545216720054725
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.8842975206611571,
"acc_stderr": 0.029199802455622793,
"acc_norm": 0.8842975206611571,
"acc_norm_stderr": 0.029199802455622793
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.8888888888888888,
"acc_stderr": 0.03038159675665168,
"acc_norm": 0.8888888888888888,
"acc_norm_stderr": 0.03038159675665168
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.8773006134969326,
"acc_stderr": 0.025777328426978927,
"acc_norm": 0.8773006134969326,
"acc_norm_stderr": 0.025777328426978927
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.5625,
"acc_stderr": 0.04708567521880525,
"acc_norm": 0.5625,
"acc_norm_stderr": 0.04708567521880525
},
"harness|hendrycksTest-management|5": {
"acc": 0.912621359223301,
"acc_stderr": 0.027960689125970654,
"acc_norm": 0.912621359223301,
"acc_norm_stderr": 0.027960689125970654
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.9316239316239316,
"acc_stderr": 0.016534627684311364,
"acc_norm": 0.9316239316239316,
"acc_norm_stderr": 0.016534627684311364
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.85,
"acc_stderr": 0.035887028128263714,
"acc_norm": 0.85,
"acc_norm_stderr": 0.035887028128263714
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.9054916985951469,
"acc_stderr": 0.010461015338193068,
"acc_norm": 0.9054916985951469,
"acc_norm_stderr": 0.010461015338193068
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.8121387283236994,
"acc_stderr": 0.021029269752423228,
"acc_norm": 0.8121387283236994,
"acc_norm_stderr": 0.021029269752423228
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.6212290502793296,
"acc_stderr": 0.016223533510365123,
"acc_norm": 0.6212290502793296,
"acc_norm_stderr": 0.016223533510365123
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.8562091503267973,
"acc_stderr": 0.020091188936043693,
"acc_norm": 0.8562091503267973,
"acc_norm_stderr": 0.020091188936043693
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.8231511254019293,
"acc_stderr": 0.0216700588855108,
"acc_norm": 0.8231511254019293,
"acc_norm_stderr": 0.0216700588855108
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.8703703703703703,
"acc_stderr": 0.018689725721062072,
"acc_norm": 0.8703703703703703,
"acc_norm_stderr": 0.018689725721062072
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.6347517730496454,
"acc_stderr": 0.028723863853281267,
"acc_norm": 0.6347517730496454,
"acc_norm_stderr": 0.028723863853281267
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.6075619295958279,
"acc_stderr": 0.012471243669229096,
"acc_norm": 0.6075619295958279,
"acc_norm_stderr": 0.012471243669229096
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.8235294117647058,
"acc_stderr": 0.02315746830855936,
"acc_norm": 0.8235294117647058,
"acc_norm_stderr": 0.02315746830855936
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.8088235294117647,
"acc_stderr": 0.015908290136278036,
"acc_norm": 0.8088235294117647,
"acc_norm_stderr": 0.015908290136278036
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.7,
"acc_stderr": 0.04389311454644287,
"acc_norm": 0.7,
"acc_norm_stderr": 0.04389311454644287
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.8326530612244898,
"acc_stderr": 0.02389714476891452,
"acc_norm": 0.8326530612244898,
"acc_norm_stderr": 0.02389714476891452
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8905472636815921,
"acc_stderr": 0.022076326101824664,
"acc_norm": 0.8905472636815921,
"acc_norm_stderr": 0.022076326101824664
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.92,
"acc_stderr": 0.0272659924344291,
"acc_norm": 0.92,
"acc_norm_stderr": 0.0272659924344291
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5783132530120482,
"acc_stderr": 0.038444531817709175,
"acc_norm": 0.5783132530120482,
"acc_norm_stderr": 0.038444531817709175
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.9005847953216374,
"acc_stderr": 0.022949025579355027,
"acc_norm": 0.9005847953216374,
"acc_norm_stderr": 0.022949025579355027
},
"harness|truthfulqa:mc|0": {
"mc1": 0.3843329253365973,
"mc1_stderr": 0.017028707301245203,
"mc2": 0.5456609919227617,
"mc2_stderr": 0.014778672831926782
},
"harness|winogrande|5": {
"acc": 0.8334648776637726,
"acc_stderr": 0.010470796496781105
},
"harness|gsm8k|5": {
"acc": 0.7050796057619408,
"acc_stderr": 0.012560698010954762
}
}
```
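As a quick sanity check, the per-subject MMLU (`hendrycksTest`) accuracies can be averaged directly from the dict above (a minimal sketch; the dict is truncated here to three subjects for brevity, so substitute the full results shown above):
```python
from statistics import mean

# `results` is the dict printed above (truncated here for brevity)
results = {
    "harness|hendrycksTest-abstract_algebra|5": {"acc": 0.41},
    "harness|hendrycksTest-anatomy|5": {"acc": 0.7111111111111111},
    "harness|hendrycksTest-astronomy|5": {"acc": 0.9013157894736842},
}

mmlu_accs = [
    v["acc"] for k, v in results.items() if k.startswith("harness|hendrycksTest")
]
print(f"MMLU average acc over {len(mmlu_accs)} subjects: {mean(mmlu_accs):.4f}")
```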
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed]
["**/details_harness|hendrycksTest-professional_law|5_2024-01-15T19-33-37.820287.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2024_01_15T19_33_37.820287", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-15T19-33-37.820287.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-15T19-33-37.820287.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2024_01_15T19_33_37.820287", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-15T19-33-37.820287.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-15T19-33-37.820287.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2024_01_15T19_33_37.820287", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-15T19-33-37.820287.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-15T19-33-37.820287.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2024_01_15T19_33_37.820287", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-15T19-33-37.820287.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-15T19-33-37.820287.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2024_01_15T19_33_37.820287", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-15T19-33-37.820287.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-15T19-33-37.820287.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2024_01_15T19_33_37.820287", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-15T19-33-37.820287.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-15T19-33-37.820287.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2024_01_15T19_33_37.820287", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-15T19-33-37.820287.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-15T19-33-37.820287.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2024_01_15T19_33_37.820287", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-15T19-33-37.820287.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-15T19-33-37.820287.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2024_01_15T19_33_37.820287", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-15T19-33-37.820287.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-15T19-33-37.820287.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2024_01_15T19_33_37.820287", "path": ["**/details_harness|winogrande|5_2024-01-15T19-33-37.820287.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2024-01-15T19-33-37.820287.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2024_01_15T19_33_37.820287", "path": ["results_2024-01-15T19-33-37.820287.parquet"]}, {"split": "latest", "path": 
["results_2024-01-15T19-33-37.820287.parquet"]}]}]} | 2024-01-15T19:36:13+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of maywell/PiVoT-SUS-RP
Dataset automatically created during the evaluation run of model maywell/PiVoT-SUS-RP on the Open LLM Leaderboard.
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
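A minimal sketch of that call (the repository id is assumed to follow the leaderboard's usual `details_<org>__<model>` naming convention, and `harness_winogrande_5` is one of the configurations listed in this card's metadata):
```python
from datasets import load_dataset

# Repo id assumed from the Open LLM Leaderboard naming convention
# ("maywell/PiVoT-SUS-RP" -> "details_maywell__PiVoT-SUS-RP").
data = load_dataset(
    "open-llm-leaderboard/details_maywell__PiVoT-SUS-RP",
    "harness_winogrande_5",  # one of the 63 task configurations
    split="train",           # "train" always points to the latest results
)
```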
## Latest results
These are the latest results from run 2024-01-15T19:33:37.820287 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
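For the aggregated numbers themselves, a minimal sketch (assuming the `results` configuration and `latest` split declared in this card's metadata, and the same assumed repository id as above):
```python
from datasets import load_dataset

# "results" stores the aggregated metrics of the run;
# the "latest" split always points at the most recent results file.
results = load_dataset(
    "open-llm-leaderboard/details_maywell__PiVoT-SUS-RP",  # assumed repo id
    "results",
    split="latest",
)
print(results[0])
```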
## Dataset Details
### Dataset Description
- Curated by:
- Funded by [optional]:
- Shared by [optional]:
- Language(s) (NLP):
- License:
### Dataset Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Out-of-Scope Use
## Dataset Structure
## Dataset Creation
### Curation Rationale
### Source Data
#### Data Collection and Processing
#### Who are the source data producers?
### Annotations [optional]
#### Annotation process
#### Who are the annotators?
#### Personal and Sensitive Information
## Bias, Risks, and Limitations
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Dataset Card Authors [optional]
## Dataset Card Contact
| [
"# Dataset Card for Evaluation run of maywell/PiVoT-SUS-RP\n\n\n\nDataset automatically created during the evaluation run of model maywell/PiVoT-SUS-RP on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2024-01-15T19:33:37.820287(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of maywell/PiVoT-SUS-RP\n\n\n\nDataset automatically created during the evaluation run of model maywell/PiVoT-SUS-RP on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2024-01-15T19:33:37.820287(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] | [
6,
181,
68,
4,
40,
29,
3,
4,
9,
6,
5,
7,
4,
7,
10,
9,
5,
9,
8,
10,
46,
8,
7,
10,
5
] | [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of maywell/PiVoT-SUS-RP\n\n\n\nDataset automatically created during the evaluation run of model maywell/PiVoT-SUS-RP on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2024-01-15T19:33:37.820287(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):## Dataset Details### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Out-of-Scope Use## Dataset Structure## Dataset Creation### Curation Rationale### Source Data#### Data Collection and Processing#### Who are the source data producers?### Annotations [optional]#### Annotation process#### Who are the annotators?#### Personal and Sensitive Information## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Dataset Card Authors [optional]## Dataset Card Contact"
] |
fb909eaf9af77d4dd1775ccd6bb8ebb3fca1a236 |
# Dataset of takatsuki_yayoi/高槻やよい/타카츠키야요이 (THE iDOLM@STER)
This is the dataset of takatsuki_yayoi/高槻やよい/타카츠키야요이 (THE iDOLM@STER), containing 500 images and their tags.
The core tags of this character are `twintails, brown_hair, green_eyes, orange_hair`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:------------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 472.50 MiB | [Download](https://huggingface.co/datasets/CyberHarem/takatsuki_yayoi_theidolmster/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 329.50 MiB | [Download](https://huggingface.co/datasets/CyberHarem/takatsuki_yayoi_theidolmster/resolve/main/dataset-800.zip) | IMG+TXT | Dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1106 | 666.02 MiB | [Download](https://huggingface.co/datasets/CyberHarem/takatsuki_yayoi_theidolmster/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 441.21 MiB | [Download](https://huggingface.co/datasets/CyberHarem/takatsuki_yayoi_theidolmster/resolve/main/dataset-1200.zip) | IMG+TXT | Dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1106 | 851.34 MiB | [Download](https://huggingface.co/datasets/CyberHarem/takatsuki_yayoi_theidolmster/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/takatsuki_yayoi_theidolmster',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
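Building on the example above, here is a minimal sketch of a tag-frequency tally over the extracted dataset (it assumes `item.meta['tags']` is a mapping from tag name to score, which is not guaranteed by this card, and that the archive was extracted to `dataset_dir` as shown):
```python
from collections import Counter

from waifuc.source import LocalSource

# Re-open the extracted dataset and tally how often each tag appears.
# Each tag is counted once per image, regardless of its score.
tag_counter = Counter()
for item in LocalSource('dataset_dir'):
    tag_counter.update(item.meta['tags'].keys())

print(tag_counter.most_common(20))
```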
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------------------------------------------------------------------------------|
| 0 | 26 |  |  |  |  |  | 1girl, open_mouth, solo, raglan_sleeves, :d, blush |
| 1 | 5 |  |  |  |  |  | 1girl, dress, open_mouth, solo, blush, hair_flower, bouquet, :d, closed_eyes, outstretched_arms, petals |
| 2 | 5 |  |  |  |  |  | 1girl, ;d, blue_eyes, one_eye_closed, open_mouth, smile, solo, apron, long_hair |
| 3 | 6 |  |  |  |  |  | 1girl, open_mouth, smile, solo, bracelet, dress, thighhighs |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | open_mouth | solo | raglan_sleeves | :d | blush | dress | hair_flower | bouquet | closed_eyes | outstretched_arms | petals | ;d | blue_eyes | one_eye_closed | smile | apron | long_hair | bracelet | thighhighs |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------------|:-------|:-----------------|:-----|:--------|:--------|:--------------|:----------|:--------------|:--------------------|:---------|:-----|:------------|:-----------------|:--------|:--------|:------------|:-----------|:-------------|
| 0 | 26 |  |  |  |  |  | X | X | X | X | X | X | | | | | | | | | | | | | | |
| 1 | 5 |  |  |  |  |  | X | X | X | | X | X | X | X | X | X | X | X | | | | | | | | |
| 2 | 5 |  |  |  |  |  | X | X | X | | | | | | | | | | X | X | X | X | X | X | | |
| 3 | 6 |  |  |  |  |  | X | X | X | | | | X | | | | | | | | | X | | | X | X |
| CyberHarem/takatsuki_yayoi_theidolmster | [
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] | 2024-01-15T19:39:03+00:00 | {"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]} | 2024-01-15T21:10:13+00:00 | [] | [] | TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
| Dataset of takatsuki\_yayoi/高槻やよい/타카츠키야요이 (THE iDOLM@STER)
==========================================================
This is the dataset of takatsuki\_yayoi/高槻やよい/타카츠키야요이 (THE iDOLM@STER), containing 500 images and their tags.
The core tags of this character are 'twintails, brown\_hair, green\_eyes, orange\_hair', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
List of Clusters
----------------
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
### Table Version
| [
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] | [
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] | [
44,
61,
5,
4
] | [
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
a2be3d5b64d3d19b371bde4a790e27627d6376c5 | # Dataset Card for "pianofor-ai-base-v2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | roszcz/pianofor-ai-base-v2 | [
"region:us"
] | 2024-01-15T19:48:57+00:00 | {"dataset_info": {"features": [{"name": "notes", "struct": [{"name": "end", "sequence": "float64"}, {"name": "pitch", "sequence": "int64"}, {"name": "start", "sequence": "float64"}, {"name": "velocity", "sequence": "int64"}]}, {"name": "control_changes", "struct": [{"name": "number", "sequence": "int64"}, {"name": "time", "sequence": "float64"}, {"name": "value", "sequence": "int64"}]}, {"name": "source", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1323482766, "num_examples": 1237}], "download_size": 414443338, "dataset_size": 1323482766}} | 2024-01-15T19:55:44+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "pianofor-ai-base-v2"
More Information needed | [
"# Dataset Card for \"pianofor-ai-base-v2\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"pianofor-ai-base-v2\"\n\nMore Information needed"
] | [
6,
20
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"pianofor-ai-base-v2\"\n\nMore Information needed"
] |
968e647a7035b712ef65b6702a6b20324a3b685e |
# Dataset Card for Evaluation run of nfaheem/Marcoroni-7b-DPO-Merge
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [nfaheem/Marcoroni-7b-DPO-Merge](https://huggingface.co/nfaheem/Marcoroni-7b-DPO-Merge) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_nfaheem__Marcoroni-7b-DPO-Merge",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2024-01-15T20:00:07.593855](https://huggingface.co/datasets/open-llm-leaderboard/details_nfaheem__Marcoroni-7b-DPO-Merge/blob/main/results_2024-01-15T20-00-07.593855.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.6490988015714689,
"acc_stderr": 0.03218579004144051,
"acc_norm": 0.648058453091397,
"acc_norm_stderr": 0.03286452522752086,
"mc1": 0.5813953488372093,
"mc1_stderr": 0.017270015284476872,
"mc2": 0.7047247167623709,
"mc2_stderr": 0.015173458474994152
},
"harness|arc:challenge|25": {
"acc": 0.7167235494880546,
"acc_stderr": 0.013167478735134575,
"acc_norm": 0.7303754266211604,
"acc_norm_stderr": 0.012968040686869152
},
"harness|hellaswag|10": {
"acc": 0.7325234017128062,
"acc_stderr": 0.0044173841023986676,
"acc_norm": 0.8879705238000398,
"acc_norm_stderr": 0.003147581209374547
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.32,
"acc_stderr": 0.04688261722621504,
"acc_norm": 0.32,
"acc_norm_stderr": 0.04688261722621504
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6444444444444445,
"acc_stderr": 0.04135176749720385,
"acc_norm": 0.6444444444444445,
"acc_norm_stderr": 0.04135176749720385
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.6644736842105263,
"acc_stderr": 0.038424985593952694,
"acc_norm": 0.6644736842105263,
"acc_norm_stderr": 0.038424985593952694
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.63,
"acc_stderr": 0.04852365870939099,
"acc_norm": 0.63,
"acc_norm_stderr": 0.04852365870939099
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.7018867924528301,
"acc_stderr": 0.02815283794249387,
"acc_norm": 0.7018867924528301,
"acc_norm_stderr": 0.02815283794249387
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.75,
"acc_stderr": 0.03621034121889507,
"acc_norm": 0.75,
"acc_norm_stderr": 0.03621034121889507
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.47,
"acc_stderr": 0.050161355804659205,
"acc_norm": 0.47,
"acc_norm_stderr": 0.050161355804659205
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.55,
"acc_stderr": 0.05,
"acc_norm": 0.55,
"acc_norm_stderr": 0.05
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.3,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.3,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6589595375722543,
"acc_stderr": 0.03614665424180826,
"acc_norm": 0.6589595375722543,
"acc_norm_stderr": 0.03614665424180826
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.4019607843137255,
"acc_stderr": 0.04878608714466996,
"acc_norm": 0.4019607843137255,
"acc_norm_stderr": 0.04878608714466996
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.77,
"acc_stderr": 0.04229525846816506,
"acc_norm": 0.77,
"acc_norm_stderr": 0.04229525846816506
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5574468085106383,
"acc_stderr": 0.03246956919789958,
"acc_norm": 0.5574468085106383,
"acc_norm_stderr": 0.03246956919789958
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.5,
"acc_stderr": 0.047036043419179864,
"acc_norm": 0.5,
"acc_norm_stderr": 0.047036043419179864
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5724137931034483,
"acc_stderr": 0.04122737111370333,
"acc_norm": 0.5724137931034483,
"acc_norm_stderr": 0.04122737111370333
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.4074074074074074,
"acc_stderr": 0.02530590624159063,
"acc_norm": 0.4074074074074074,
"acc_norm_stderr": 0.02530590624159063
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.46825396825396826,
"acc_stderr": 0.04463112720677171,
"acc_norm": 0.46825396825396826,
"acc_norm_stderr": 0.04463112720677171
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.35,
"acc_stderr": 0.047937248544110196,
"acc_norm": 0.35,
"acc_norm_stderr": 0.047937248544110196
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7967741935483871,
"acc_stderr": 0.022891687984554963,
"acc_norm": 0.7967741935483871,
"acc_norm_stderr": 0.022891687984554963
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.4876847290640394,
"acc_stderr": 0.035169204442208966,
"acc_norm": 0.4876847290640394,
"acc_norm_stderr": 0.035169204442208966
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.69,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.69,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7575757575757576,
"acc_stderr": 0.03346409881055953,
"acc_norm": 0.7575757575757576,
"acc_norm_stderr": 0.03346409881055953
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7878787878787878,
"acc_stderr": 0.029126522834586818,
"acc_norm": 0.7878787878787878,
"acc_norm_stderr": 0.029126522834586818
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.9015544041450777,
"acc_stderr": 0.021500249576033477,
"acc_norm": 0.9015544041450777,
"acc_norm_stderr": 0.021500249576033477
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6615384615384615,
"acc_stderr": 0.023991500500313036,
"acc_norm": 0.6615384615384615,
"acc_norm_stderr": 0.023991500500313036
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.3148148148148148,
"acc_stderr": 0.028317533496066485,
"acc_norm": 0.3148148148148148,
"acc_norm_stderr": 0.028317533496066485
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6638655462184874,
"acc_stderr": 0.030684737115135356,
"acc_norm": 0.6638655462184874,
"acc_norm_stderr": 0.030684737115135356
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.33774834437086093,
"acc_stderr": 0.03861557546255169,
"acc_norm": 0.33774834437086093,
"acc_norm_stderr": 0.03861557546255169
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8366972477064221,
"acc_stderr": 0.01584825580650155,
"acc_norm": 0.8366972477064221,
"acc_norm_stderr": 0.01584825580650155
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5,
"acc_stderr": 0.034099716973523674,
"acc_norm": 0.5,
"acc_norm_stderr": 0.034099716973523674
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.8333333333333334,
"acc_stderr": 0.026156867523931045,
"acc_norm": 0.8333333333333334,
"acc_norm_stderr": 0.026156867523931045
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.7805907172995781,
"acc_stderr": 0.026939106581553945,
"acc_norm": 0.7805907172995781,
"acc_norm_stderr": 0.026939106581553945
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.695067264573991,
"acc_stderr": 0.030898610882477515,
"acc_norm": 0.695067264573991,
"acc_norm_stderr": 0.030898610882477515
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7786259541984732,
"acc_stderr": 0.03641297081313729,
"acc_norm": 0.7786259541984732,
"acc_norm_stderr": 0.03641297081313729
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.7933884297520661,
"acc_stderr": 0.03695980128098824,
"acc_norm": 0.7933884297520661,
"acc_norm_stderr": 0.03695980128098824
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7685185185185185,
"acc_stderr": 0.04077494709252627,
"acc_norm": 0.7685185185185185,
"acc_norm_stderr": 0.04077494709252627
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7730061349693251,
"acc_stderr": 0.03291099578615769,
"acc_norm": 0.7730061349693251,
"acc_norm_stderr": 0.03291099578615769
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.44642857142857145,
"acc_stderr": 0.04718471485219588,
"acc_norm": 0.44642857142857145,
"acc_norm_stderr": 0.04718471485219588
},
"harness|hendrycksTest-management|5": {
"acc": 0.7572815533980582,
"acc_stderr": 0.04245022486384495,
"acc_norm": 0.7572815533980582,
"acc_norm_stderr": 0.04245022486384495
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8803418803418803,
"acc_stderr": 0.021262719400406964,
"acc_norm": 0.8803418803418803,
"acc_norm_stderr": 0.021262719400406964
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.69,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.69,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.822477650063857,
"acc_stderr": 0.013664230995834841,
"acc_norm": 0.822477650063857,
"acc_norm_stderr": 0.013664230995834841
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7254335260115607,
"acc_stderr": 0.02402774515526502,
"acc_norm": 0.7254335260115607,
"acc_norm_stderr": 0.02402774515526502
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.44692737430167595,
"acc_stderr": 0.016628030039647614,
"acc_norm": 0.44692737430167595,
"acc_norm_stderr": 0.016628030039647614
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7287581699346405,
"acc_stderr": 0.02545775669666788,
"acc_norm": 0.7287581699346405,
"acc_norm_stderr": 0.02545775669666788
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.7234726688102894,
"acc_stderr": 0.02540383297817962,
"acc_norm": 0.7234726688102894,
"acc_norm_stderr": 0.02540383297817962
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7530864197530864,
"acc_stderr": 0.023993501709042107,
"acc_norm": 0.7530864197530864,
"acc_norm_stderr": 0.023993501709042107
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.4787234042553192,
"acc_stderr": 0.029800481645628693,
"acc_norm": 0.4787234042553192,
"acc_norm_stderr": 0.029800481645628693
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.4667535853976532,
"acc_stderr": 0.012741974333897227,
"acc_norm": 0.4667535853976532,
"acc_norm_stderr": 0.012741974333897227
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6801470588235294,
"acc_stderr": 0.028332959514031208,
"acc_norm": 0.6801470588235294,
"acc_norm_stderr": 0.028332959514031208
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6552287581699346,
"acc_stderr": 0.019228322018696644,
"acc_norm": 0.6552287581699346,
"acc_norm_stderr": 0.019228322018696644
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6636363636363637,
"acc_stderr": 0.04525393596302506,
"acc_norm": 0.6636363636363637,
"acc_norm_stderr": 0.04525393596302506
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7387755102040816,
"acc_stderr": 0.028123429335142773,
"acc_norm": 0.7387755102040816,
"acc_norm_stderr": 0.028123429335142773
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8308457711442786,
"acc_stderr": 0.026508590656233268,
"acc_norm": 0.8308457711442786,
"acc_norm_stderr": 0.026508590656233268
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.85,
"acc_stderr": 0.03588702812826371,
"acc_norm": 0.85,
"acc_norm_stderr": 0.03588702812826371
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5783132530120482,
"acc_stderr": 0.03844453181770917,
"acc_norm": 0.5783132530120482,
"acc_norm_stderr": 0.03844453181770917
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8245614035087719,
"acc_stderr": 0.02917088550072767,
"acc_norm": 0.8245614035087719,
"acc_norm_stderr": 0.02917088550072767
},
"harness|truthfulqa:mc|0": {
"mc1": 0.5813953488372093,
"mc1_stderr": 0.017270015284476872,
"mc2": 0.7047247167623709,
"mc2_stderr": 0.015173458474994152
},
"harness|winogrande|5": {
"acc": 0.8524072612470402,
"acc_stderr": 0.00996871576547965
},
"harness|gsm8k|5": {
"acc": 0.6762699014404853,
"acc_stderr": 0.01288824739737114
}
}
```
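To work with these numbers programmatically rather than parsing the JSON above, a minimal sketch using the "results" configuration described earlier in this card (the `latest` split name follows the convention used by the task configurations in the metadata):
```python
from datasets import load_dataset

# "results" stores the aggregated metrics for the run;
# the "latest" split always points at the most recent results file.
results = load_dataset(
    "open-llm-leaderboard/details_nfaheem__Marcoroni-7b-DPO-Merge",
    "results",
    split="latest",
)
print(results[0])
```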
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | open-llm-leaderboard/details_nfaheem__Marcoroni-7b-DPO-Merge | [
"region:us"
] | 2024-01-15T20:02:26+00:00 | {"pretty_name": "Evaluation run of nfaheem/Marcoroni-7b-DPO-Merge", "dataset_summary": "Dataset automatically created during the evaluation run of model [nfaheem/Marcoroni-7b-DPO-Merge](https://huggingface.co/nfaheem/Marcoroni-7b-DPO-Merge) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_nfaheem__Marcoroni-7b-DPO-Merge\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2024-01-15T20:00:07.593855](https://huggingface.co/datasets/open-llm-leaderboard/details_nfaheem__Marcoroni-7b-DPO-Merge/blob/main/results_2024-01-15T20-00-07.593855.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6490988015714689,\n \"acc_stderr\": 0.03218579004144051,\n \"acc_norm\": 0.648058453091397,\n \"acc_norm_stderr\": 0.03286452522752086,\n \"mc1\": 0.5813953488372093,\n \"mc1_stderr\": 0.017270015284476872,\n \"mc2\": 0.7047247167623709,\n \"mc2_stderr\": 0.015173458474994152\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.7167235494880546,\n \"acc_stderr\": 0.013167478735134575,\n \"acc_norm\": 0.7303754266211604,\n \"acc_norm_stderr\": 0.012968040686869152\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.7325234017128062,\n \"acc_stderr\": 0.0044173841023986676,\n \"acc_norm\": 0.8879705238000398,\n \"acc_norm_stderr\": 0.003147581209374547\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.32,\n \"acc_stderr\": 0.04688261722621504,\n \"acc_norm\": 0.32,\n \"acc_norm_stderr\": 0.04688261722621504\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6444444444444445,\n \"acc_stderr\": 0.04135176749720385,\n \"acc_norm\": 0.6444444444444445,\n \"acc_norm_stderr\": 0.04135176749720385\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.6644736842105263,\n \"acc_stderr\": 0.038424985593952694,\n \"acc_norm\": 0.6644736842105263,\n \"acc_norm_stderr\": 0.038424985593952694\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.63,\n \"acc_stderr\": 0.04852365870939099,\n \"acc_norm\": 0.63,\n \"acc_norm_stderr\": 0.04852365870939099\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.7018867924528301,\n \"acc_stderr\": 0.02815283794249387,\n \"acc_norm\": 0.7018867924528301,\n \"acc_norm_stderr\": 0.02815283794249387\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.75,\n \"acc_stderr\": 0.03621034121889507,\n \"acc_norm\": 0.75,\n \"acc_norm_stderr\": 0.03621034121889507\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.47,\n \"acc_stderr\": 0.050161355804659205,\n 
\"acc_norm\": 0.47,\n \"acc_norm_stderr\": 0.050161355804659205\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.55,\n \"acc_stderr\": 0.05,\n \"acc_norm\": 0.55,\n \"acc_norm_stderr\": 0.05\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.3,\n \"acc_stderr\": 0.046056618647183814,\n \"acc_norm\": 0.3,\n \"acc_norm_stderr\": 0.046056618647183814\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6589595375722543,\n \"acc_stderr\": 0.03614665424180826,\n \"acc_norm\": 0.6589595375722543,\n \"acc_norm_stderr\": 0.03614665424180826\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.4019607843137255,\n \"acc_stderr\": 0.04878608714466996,\n \"acc_norm\": 0.4019607843137255,\n \"acc_norm_stderr\": 0.04878608714466996\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.77,\n \"acc_stderr\": 0.04229525846816506,\n \"acc_norm\": 0.77,\n \"acc_norm_stderr\": 0.04229525846816506\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.5574468085106383,\n \"acc_stderr\": 0.03246956919789958,\n \"acc_norm\": 0.5574468085106383,\n \"acc_norm_stderr\": 0.03246956919789958\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.5,\n \"acc_stderr\": 0.047036043419179864,\n \"acc_norm\": 0.5,\n \"acc_norm_stderr\": 0.047036043419179864\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.5724137931034483,\n \"acc_stderr\": 0.04122737111370333,\n \"acc_norm\": 0.5724137931034483,\n \"acc_norm_stderr\": 0.04122737111370333\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.4074074074074074,\n \"acc_stderr\": 0.02530590624159063,\n \"acc_norm\": 0.4074074074074074,\n \"acc_norm_stderr\": 0.02530590624159063\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.46825396825396826,\n \"acc_stderr\": 0.04463112720677171,\n \"acc_norm\": 0.46825396825396826,\n \"acc_norm_stderr\": 0.04463112720677171\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.35,\n \"acc_stderr\": 0.047937248544110196,\n \"acc_norm\": 0.35,\n \"acc_norm_stderr\": 0.047937248544110196\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7967741935483871,\n \"acc_stderr\": 0.022891687984554963,\n \"acc_norm\": 0.7967741935483871,\n \"acc_norm_stderr\": 0.022891687984554963\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.4876847290640394,\n \"acc_stderr\": 0.035169204442208966,\n \"acc_norm\": 0.4876847290640394,\n \"acc_norm_stderr\": 0.035169204442208966\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.69,\n \"acc_stderr\": 0.04648231987117316,\n \"acc_norm\": 0.69,\n \"acc_norm_stderr\": 0.04648231987117316\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.7575757575757576,\n \"acc_stderr\": 0.03346409881055953,\n \"acc_norm\": 0.7575757575757576,\n \"acc_norm_stderr\": 0.03346409881055953\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.7878787878787878,\n \"acc_stderr\": 0.029126522834586818,\n \"acc_norm\": 0.7878787878787878,\n \"acc_norm_stderr\": 0.029126522834586818\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.9015544041450777,\n \"acc_stderr\": 0.021500249576033477,\n \"acc_norm\": 0.9015544041450777,\n \"acc_norm_stderr\": 0.021500249576033477\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.6615384615384615,\n \"acc_stderr\": 0.023991500500313036,\n 
\"acc_norm\": 0.6615384615384615,\n \"acc_norm_stderr\": 0.023991500500313036\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.3148148148148148,\n \"acc_stderr\": 0.028317533496066485,\n \"acc_norm\": 0.3148148148148148,\n \"acc_norm_stderr\": 0.028317533496066485\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.6638655462184874,\n \"acc_stderr\": 0.030684737115135356,\n \"acc_norm\": 0.6638655462184874,\n \"acc_norm_stderr\": 0.030684737115135356\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.33774834437086093,\n \"acc_stderr\": 0.03861557546255169,\n \"acc_norm\": 0.33774834437086093,\n \"acc_norm_stderr\": 0.03861557546255169\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.8366972477064221,\n \"acc_stderr\": 0.01584825580650155,\n \"acc_norm\": 0.8366972477064221,\n \"acc_norm_stderr\": 0.01584825580650155\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.5,\n \"acc_stderr\": 0.034099716973523674,\n \"acc_norm\": 0.5,\n \"acc_norm_stderr\": 0.034099716973523674\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.8333333333333334,\n \"acc_stderr\": 0.026156867523931045,\n \"acc_norm\": 0.8333333333333334,\n \"acc_norm_stderr\": 0.026156867523931045\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.7805907172995781,\n \"acc_stderr\": 0.026939106581553945,\n \"acc_norm\": 0.7805907172995781,\n \"acc_norm_stderr\": 0.026939106581553945\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.695067264573991,\n \"acc_stderr\": 0.030898610882477515,\n \"acc_norm\": 0.695067264573991,\n \"acc_norm_stderr\": 0.030898610882477515\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.7786259541984732,\n \"acc_stderr\": 0.03641297081313729,\n \"acc_norm\": 0.7786259541984732,\n \"acc_norm_stderr\": 0.03641297081313729\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.7933884297520661,\n \"acc_stderr\": 0.03695980128098824,\n \"acc_norm\": 0.7933884297520661,\n \"acc_norm_stderr\": 0.03695980128098824\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7685185185185185,\n \"acc_stderr\": 0.04077494709252627,\n \"acc_norm\": 0.7685185185185185,\n \"acc_norm_stderr\": 0.04077494709252627\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.7730061349693251,\n \"acc_stderr\": 0.03291099578615769,\n \"acc_norm\": 0.7730061349693251,\n \"acc_norm_stderr\": 0.03291099578615769\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.44642857142857145,\n \"acc_stderr\": 0.04718471485219588,\n \"acc_norm\": 0.44642857142857145,\n \"acc_norm_stderr\": 0.04718471485219588\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.7572815533980582,\n \"acc_stderr\": 0.04245022486384495,\n \"acc_norm\": 0.7572815533980582,\n \"acc_norm_stderr\": 0.04245022486384495\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8803418803418803,\n \"acc_stderr\": 0.021262719400406964,\n \"acc_norm\": 0.8803418803418803,\n \"acc_norm_stderr\": 0.021262719400406964\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.69,\n \"acc_stderr\": 0.04648231987117316,\n \"acc_norm\": 0.69,\n \"acc_norm_stderr\": 0.04648231987117316\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.822477650063857,\n \"acc_stderr\": 0.013664230995834841,\n \"acc_norm\": 0.822477650063857,\n \"acc_norm_stderr\": 0.013664230995834841\n },\n 
\"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.7254335260115607,\n \"acc_stderr\": 0.02402774515526502,\n \"acc_norm\": 0.7254335260115607,\n \"acc_norm_stderr\": 0.02402774515526502\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.44692737430167595,\n \"acc_stderr\": 0.016628030039647614,\n \"acc_norm\": 0.44692737430167595,\n \"acc_norm_stderr\": 0.016628030039647614\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.7287581699346405,\n \"acc_stderr\": 0.02545775669666788,\n \"acc_norm\": 0.7287581699346405,\n \"acc_norm_stderr\": 0.02545775669666788\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7234726688102894,\n \"acc_stderr\": 0.02540383297817962,\n \"acc_norm\": 0.7234726688102894,\n \"acc_norm_stderr\": 0.02540383297817962\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.7530864197530864,\n \"acc_stderr\": 0.023993501709042107,\n \"acc_norm\": 0.7530864197530864,\n \"acc_norm_stderr\": 0.023993501709042107\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.4787234042553192,\n \"acc_stderr\": 0.029800481645628693,\n \"acc_norm\": 0.4787234042553192,\n \"acc_norm_stderr\": 0.029800481645628693\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4667535853976532,\n \"acc_stderr\": 0.012741974333897227,\n \"acc_norm\": 0.4667535853976532,\n \"acc_norm_stderr\": 0.012741974333897227\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.6801470588235294,\n \"acc_stderr\": 0.028332959514031208,\n \"acc_norm\": 0.6801470588235294,\n \"acc_norm_stderr\": 0.028332959514031208\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.6552287581699346,\n \"acc_stderr\": 0.019228322018696644,\n \"acc_norm\": 0.6552287581699346,\n \"acc_norm_stderr\": 0.019228322018696644\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6636363636363637,\n \"acc_stderr\": 0.04525393596302506,\n \"acc_norm\": 0.6636363636363637,\n \"acc_norm_stderr\": 0.04525393596302506\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.7387755102040816,\n \"acc_stderr\": 0.028123429335142773,\n \"acc_norm\": 0.7387755102040816,\n \"acc_norm_stderr\": 0.028123429335142773\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8308457711442786,\n \"acc_stderr\": 0.026508590656233268,\n \"acc_norm\": 0.8308457711442786,\n \"acc_norm_stderr\": 0.026508590656233268\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.85,\n \"acc_stderr\": 0.03588702812826371,\n \"acc_norm\": 0.85,\n \"acc_norm_stderr\": 0.03588702812826371\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5783132530120482,\n \"acc_stderr\": 0.03844453181770917,\n \"acc_norm\": 0.5783132530120482,\n \"acc_norm_stderr\": 0.03844453181770917\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.8245614035087719,\n \"acc_stderr\": 0.02917088550072767,\n \"acc_norm\": 0.8245614035087719,\n \"acc_norm_stderr\": 0.02917088550072767\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.5813953488372093,\n \"mc1_stderr\": 0.017270015284476872,\n \"mc2\": 0.7047247167623709,\n \"mc2_stderr\": 0.015173458474994152\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.8524072612470402,\n \"acc_stderr\": 0.00996871576547965\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.6762699014404853,\n \"acc_stderr\": 0.01288824739737114\n }\n}\n```", "repo_url": "https://huggingface.co/nfaheem/Marcoroni-7b-DPO-Merge", "leaderboard_url": 
"https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2024_01_15T20_00_07.593855", "path": ["**/details_harness|arc:challenge|25_2024-01-15T20-00-07.593855.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2024-01-15T20-00-07.593855.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2024_01_15T20_00_07.593855", "path": ["**/details_harness|gsm8k|5_2024-01-15T20-00-07.593855.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2024-01-15T20-00-07.593855.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2024_01_15T20_00_07.593855", "path": ["**/details_harness|hellaswag|10_2024-01-15T20-00-07.593855.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2024-01-15T20-00-07.593855.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2024_01_15T20_00_07.593855", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-15T20-00-07.593855.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-15T20-00-07.593855.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-15T20-00-07.593855.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-15T20-00-07.593855.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-15T20-00-07.593855.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-15T20-00-07.593855.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-15T20-00-07.593855.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-15T20-00-07.593855.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-01-15T20-00-07.593855.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-15T20-00-07.593855.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-15T20-00-07.593855.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-15T20-00-07.593855.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-15T20-00-07.593855.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-15T20-00-07.593855.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-15T20-00-07.593855.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-15T20-00-07.593855.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-15T20-00-07.593855.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-15T20-00-07.593855.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-15T20-00-07.593855.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-15T20-00-07.593855.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-15T20-00-07.593855.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-15T20-00-07.593855.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-15T20-00-07.593855.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-15T20-00-07.593855.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-15T20-00-07.593855.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-15T20-00-07.593855.parquet", 
"**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-15T20-00-07.593855.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-15T20-00-07.593855.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-15T20-00-07.593855.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-15T20-00-07.593855.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-15T20-00-07.593855.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-15T20-00-07.593855.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-15T20-00-07.593855.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-15T20-00-07.593855.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-15T20-00-07.593855.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-15T20-00-07.593855.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-15T20-00-07.593855.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-15T20-00-07.593855.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-15T20-00-07.593855.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-15T20-00-07.593855.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-15T20-00-07.593855.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-15T20-00-07.593855.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-15T20-00-07.593855.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-15T20-00-07.593855.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-15T20-00-07.593855.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-15T20-00-07.593855.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-15T20-00-07.593855.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-15T20-00-07.593855.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-15T20-00-07.593855.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-15T20-00-07.593855.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-15T20-00-07.593855.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-15T20-00-07.593855.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-15T20-00-07.593855.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-15T20-00-07.593855.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-15T20-00-07.593855.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-15T20-00-07.593855.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-15T20-00-07.593855.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-15T20-00-07.593855.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-15T20-00-07.593855.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-15T20-00-07.593855.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-15T20-00-07.593855.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-15T20-00-07.593855.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-15T20-00-07.593855.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-15T20-00-07.593855.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-15T20-00-07.593855.parquet", 
"**/details_harness|hendrycksTest-college_mathematics|5_2024-01-15T20-00-07.593855.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-15T20-00-07.593855.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-15T20-00-07.593855.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-15T20-00-07.593855.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-15T20-00-07.593855.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-15T20-00-07.593855.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-15T20-00-07.593855.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-15T20-00-07.593855.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-15T20-00-07.593855.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-15T20-00-07.593855.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-15T20-00-07.593855.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-15T20-00-07.593855.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-15T20-00-07.593855.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-15T20-00-07.593855.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-15T20-00-07.593855.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-15T20-00-07.593855.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-15T20-00-07.593855.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-15T20-00-07.593855.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-15T20-00-07.593855.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-15T20-00-07.593855.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-15T20-00-07.593855.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-15T20-00-07.593855.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-15T20-00-07.593855.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-15T20-00-07.593855.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-15T20-00-07.593855.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-15T20-00-07.593855.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-15T20-00-07.593855.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-15T20-00-07.593855.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-15T20-00-07.593855.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-15T20-00-07.593855.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-15T20-00-07.593855.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-15T20-00-07.593855.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-15T20-00-07.593855.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-15T20-00-07.593855.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-15T20-00-07.593855.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-15T20-00-07.593855.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-15T20-00-07.593855.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-15T20-00-07.593855.parquet", 
"**/details_harness|hendrycksTest-prehistory|5_2024-01-15T20-00-07.593855.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-15T20-00-07.593855.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-15T20-00-07.593855.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-15T20-00-07.593855.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-15T20-00-07.593855.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-15T20-00-07.593855.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-15T20-00-07.593855.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-15T20-00-07.593855.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-15T20-00-07.593855.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-15T20-00-07.593855.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-15T20-00-07.593855.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2024_01_15T20_00_07.593855", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-15T20-00-07.593855.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-15T20-00-07.593855.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2024_01_15T20_00_07.593855", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-15T20-00-07.593855.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-15T20-00-07.593855.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2024_01_15T20_00_07.593855", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-15T20-00-07.593855.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-15T20-00-07.593855.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2024_01_15T20_00_07.593855", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-15T20-00-07.593855.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-15T20-00-07.593855.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2024_01_15T20_00_07.593855", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-15T20-00-07.593855.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-15T20-00-07.593855.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2024_01_15T20_00_07.593855", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-15T20-00-07.593855.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-15T20-00-07.593855.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2024_01_15T20_00_07.593855", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-15T20-00-07.593855.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-15T20-00-07.593855.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2024_01_15T20_00_07.593855", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-15T20-00-07.593855.parquet"]}, 
{"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-15T20-00-07.593855.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2024_01_15T20_00_07.593855", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-15T20-00-07.593855.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-15T20-00-07.593855.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2024_01_15T20_00_07.593855", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-15T20-00-07.593855.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-15T20-00-07.593855.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2024_01_15T20_00_07.593855", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-15T20-00-07.593855.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-15T20-00-07.593855.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2024_01_15T20_00_07.593855", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-15T20-00-07.593855.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-15T20-00-07.593855.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2024_01_15T20_00_07.593855", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-15T20-00-07.593855.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-15T20-00-07.593855.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2024_01_15T20_00_07.593855", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-15T20-00-07.593855.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-15T20-00-07.593855.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2024_01_15T20_00_07.593855", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-15T20-00-07.593855.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-15T20-00-07.593855.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2024_01_15T20_00_07.593855", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-15T20-00-07.593855.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-15T20-00-07.593855.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2024_01_15T20_00_07.593855", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-15T20-00-07.593855.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-15T20-00-07.593855.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2024_01_15T20_00_07.593855", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-15T20-00-07.593855.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-15T20-00-07.593855.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2024_01_15T20_00_07.593855", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-15T20-00-07.593855.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-15T20-00-07.593855.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2024_01_15T20_00_07.593855", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-15T20-00-07.593855.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-15T20-00-07.593855.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2024_01_15T20_00_07.593855", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-15T20-00-07.593855.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-15T20-00-07.593855.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2024_01_15T20_00_07.593855", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-15T20-00-07.593855.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-15T20-00-07.593855.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2024_01_15T20_00_07.593855", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-15T20-00-07.593855.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-15T20-00-07.593855.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2024_01_15T20_00_07.593855", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-15T20-00-07.593855.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-15T20-00-07.593855.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2024_01_15T20_00_07.593855", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-15T20-00-07.593855.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-15T20-00-07.593855.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2024_01_15T20_00_07.593855", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-15T20-00-07.593855.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-15T20-00-07.593855.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2024_01_15T20_00_07.593855", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-15T20-00-07.593855.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-15T20-00-07.593855.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2024_01_15T20_00_07.593855", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-15T20-00-07.593855.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-15T20-00-07.593855.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2024_01_15T20_00_07.593855", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-15T20-00-07.593855.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-15T20-00-07.593855.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2024_01_15T20_00_07.593855", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-15T20-00-07.593855.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-15T20-00-07.593855.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2024_01_15T20_00_07.593855", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-15T20-00-07.593855.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-15T20-00-07.593855.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2024_01_15T20_00_07.593855", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-15T20-00-07.593855.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-15T20-00-07.593855.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2024_01_15T20_00_07.593855", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-15T20-00-07.593855.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-15T20-00-07.593855.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2024_01_15T20_00_07.593855", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-15T20-00-07.593855.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-15T20-00-07.593855.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2024_01_15T20_00_07.593855", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-15T20-00-07.593855.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-15T20-00-07.593855.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2024_01_15T20_00_07.593855", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-15T20-00-07.593855.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-15T20-00-07.593855.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2024_01_15T20_00_07.593855", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-15T20-00-07.593855.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-15T20-00-07.593855.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2024_01_15T20_00_07.593855", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-15T20-00-07.593855.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-15T20-00-07.593855.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2024_01_15T20_00_07.593855", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-15T20-00-07.593855.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-15T20-00-07.593855.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2024_01_15T20_00_07.593855", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-15T20-00-07.593855.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-15T20-00-07.593855.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2024_01_15T20_00_07.593855", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-15T20-00-07.593855.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-15T20-00-07.593855.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2024_01_15T20_00_07.593855", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-15T20-00-07.593855.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-15T20-00-07.593855.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2024_01_15T20_00_07.593855", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-15T20-00-07.593855.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-15T20-00-07.593855.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2024_01_15T20_00_07.593855", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-15T20-00-07.593855.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-15T20-00-07.593855.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2024_01_15T20_00_07.593855", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-15T20-00-07.593855.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-15T20-00-07.593855.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2024_01_15T20_00_07.593855", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-15T20-00-07.593855.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-15T20-00-07.593855.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2024_01_15T20_00_07.593855", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-15T20-00-07.593855.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-15T20-00-07.593855.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2024_01_15T20_00_07.593855", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-15T20-00-07.593855.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-15T20-00-07.593855.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2024_01_15T20_00_07.593855", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-01-15T20-00-07.593855.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-professional_law|5_2024-01-15T20-00-07.593855.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2024_01_15T20_00_07.593855", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-15T20-00-07.593855.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-15T20-00-07.593855.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2024_01_15T20_00_07.593855", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-15T20-00-07.593855.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-15T20-00-07.593855.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2024_01_15T20_00_07.593855", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-15T20-00-07.593855.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-15T20-00-07.593855.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2024_01_15T20_00_07.593855", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-15T20-00-07.593855.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-15T20-00-07.593855.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2024_01_15T20_00_07.593855", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-15T20-00-07.593855.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-15T20-00-07.593855.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2024_01_15T20_00_07.593855", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-15T20-00-07.593855.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-15T20-00-07.593855.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2024_01_15T20_00_07.593855", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-15T20-00-07.593855.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-15T20-00-07.593855.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2024_01_15T20_00_07.593855", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-15T20-00-07.593855.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-15T20-00-07.593855.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2024_01_15T20_00_07.593855", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-15T20-00-07.593855.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-15T20-00-07.593855.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2024_01_15T20_00_07.593855", "path": ["**/details_harness|winogrande|5_2024-01-15T20-00-07.593855.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2024-01-15T20-00-07.593855.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2024_01_15T20_00_07.593855", "path": ["results_2024-01-15T20-00-07.593855.parquet"]}, {"split": "latest", "path": 
["results_2024-01-15T20-00-07.593855.parquet"]}]}]} | 2024-01-15T20:02:49+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of nfaheem/Marcoroni-7b-DPO-Merge
Dataset automatically created during the evaluation run of model nfaheem/Marcoroni-7b-DPO-Merge on the Open LLM Leaderboard.
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the most recent results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
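For example, a minimal sketch (the repository id below is not stated in this stripped text; it is inferred from the leaderboard's `details_<org>__<model>` naming convention visible in the parquet paths above):

```python
from datasets import load_dataset

# Assumed repository id, following the leaderboard naming convention.
data = load_dataset(
    "open-llm-leaderboard/details_nfaheem__Marcoroni-7b-DPO-Merge",
    "harness_winogrande_5",
    split="latest",
)
```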
## Latest results
These are the latest results from run 2024-01-15T20:00:07.593855 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
## Dataset Details
### Dataset Description
- Curated by:
- Funded by [optional]:
- Shared by [optional]:
- Language(s) (NLP):
- License:
### Dataset Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Out-of-Scope Use
## Dataset Structure
## Dataset Creation
### Curation Rationale
### Source Data
#### Data Collection and Processing
#### Who are the source data producers?
### Annotations [optional]
#### Annotation process
#### Who are the annotators?
#### Personal and Sensitive Information
## Bias, Risks, and Limitations
### Recommendations
Users should be made aware of the risks, biases, and limitations of the dataset. More information is needed for further recommendations.
## Citation [optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Dataset Card Authors [optional]
## Dataset Card Contact
| [
"# Dataset Card for Evaluation run of nfaheem/Marcoroni-7b-DPO-Merge\n\n\n\nDataset automatically created during the evaluation run of model nfaheem/Marcoroni-7b-DPO-Merge on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2024-01-15T20:00:07.593855(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of nfaheem/Marcoroni-7b-DPO-Merge\n\n\n\nDataset automatically created during the evaluation run of model nfaheem/Marcoroni-7b-DPO-Merge on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2024-01-15T20:00:07.593855(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] | [
6,
193,
68,
4,
40,
29,
3,
4,
9,
6,
5,
7,
4,
7,
10,
9,
5,
9,
8,
10,
46,
8,
7,
10,
5
] | [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of nfaheem/Marcoroni-7b-DPO-Merge\n\n\n\nDataset automatically created during the evaluation run of model nfaheem/Marcoroni-7b-DPO-Merge on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2024-01-15T20:00:07.593855(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):## Dataset Details### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Out-of-Scope Use## Dataset Structure## Dataset Creation### Curation Rationale### Source Data#### Data Collection and Processing#### Who are the source data producers?### Annotations [optional]#### Annotation process#### Who are the annotators?#### Personal and Sensitive Information## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Dataset Card Authors [optional]"
] |
8ebea305409188e49d36b01067f3e643a72508aa |
# Dataset Card for Evaluation run of KnutJaegersberg/internlm-20b-llama
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [KnutJaegersberg/internlm-20b-llama](https://huggingface.co/KnutJaegersberg/internlm-20b-llama) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the most recent results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_KnutJaegersberg__internlm-20b-llama",
"harness_winogrande_5",
split="train")
```
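With 63 task configurations to choose from, it can help to enumerate them programmatically first. A minimal sketch using the standard `datasets` helper (the counts in the comments follow this card and are not verified output):

```python
from datasets import get_dataset_config_names

# One config per evaluated task, plus the aggregated "results" config.
configs = get_dataset_config_names(
    "open-llm-leaderboard/details_KnutJaegersberg__internlm-20b-llama"
)
print(len(configs))  # 63 task configs plus "results", per this card
print(configs[:3])   # e.g. ['harness_arc_challenge_25', 'harness_gsm8k_5', ...]
```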
## Latest results
These are the [latest results from run 2024-01-15T20:05:42.898260](https://huggingface.co/datasets/open-llm-leaderboard/details_KnutJaegersberg__internlm-20b-llama/blob/main/results_2024-01-15T20-05-42.898260.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.615870685495934,
"acc_stderr": 0.0325478099078455,
"acc_norm": 0.6193109107986048,
"acc_norm_stderr": 0.03319482956939103,
"mc1": 0.3953488372093023,
"mc1_stderr": 0.017115815632418197,
"mc2": 0.5771247160568813,
"mc2_stderr": 0.015353165521314794
},
"harness|arc:challenge|25": {
"acc": 0.5648464163822525,
"acc_stderr": 0.014487986197186045,
"acc_norm": 0.613481228668942,
"acc_norm_stderr": 0.014230084761910478
},
"harness|hellaswag|10": {
"acc": 0.6199960167297351,
"acc_stderr": 0.00484395433845144,
"acc_norm": 0.8207528380800637,
"acc_norm_stderr": 0.0038277525727700265
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.25,
"acc_stderr": 0.04351941398892446,
"acc_norm": 0.25,
"acc_norm_stderr": 0.04351941398892446
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.5185185185185185,
"acc_stderr": 0.043163785995113245,
"acc_norm": 0.5185185185185185,
"acc_norm_stderr": 0.043163785995113245
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.6578947368421053,
"acc_stderr": 0.03860731599316092,
"acc_norm": 0.6578947368421053,
"acc_norm_stderr": 0.03860731599316092
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.62,
"acc_stderr": 0.04878317312145632,
"acc_norm": 0.62,
"acc_norm_stderr": 0.04878317312145632
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.6188679245283019,
"acc_stderr": 0.029890609686286637,
"acc_norm": 0.6188679245283019,
"acc_norm_stderr": 0.029890609686286637
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7291666666666666,
"acc_stderr": 0.03716177437566019,
"acc_norm": 0.7291666666666666,
"acc_norm_stderr": 0.03716177437566019
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.39,
"acc_stderr": 0.04902071300001975,
"acc_norm": 0.39,
"acc_norm_stderr": 0.04902071300001975
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.55,
"acc_stderr": 0.05,
"acc_norm": 0.55,
"acc_norm_stderr": 0.05
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.38,
"acc_stderr": 0.048783173121456316,
"acc_norm": 0.38,
"acc_norm_stderr": 0.048783173121456316
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6069364161849711,
"acc_stderr": 0.0372424959581773,
"acc_norm": 0.6069364161849711,
"acc_norm_stderr": 0.0372424959581773
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.38235294117647056,
"acc_stderr": 0.04835503696107224,
"acc_norm": 0.38235294117647056,
"acc_norm_stderr": 0.04835503696107224
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.72,
"acc_stderr": 0.04512608598542127,
"acc_norm": 0.72,
"acc_norm_stderr": 0.04512608598542127
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5106382978723404,
"acc_stderr": 0.03267862331014063,
"acc_norm": 0.5106382978723404,
"acc_norm_stderr": 0.03267862331014063
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.3684210526315789,
"acc_stderr": 0.04537815354939392,
"acc_norm": 0.3684210526315789,
"acc_norm_stderr": 0.04537815354939392
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5379310344827586,
"acc_stderr": 0.04154659671707548,
"acc_norm": 0.5379310344827586,
"acc_norm_stderr": 0.04154659671707548
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.3941798941798942,
"acc_stderr": 0.025167982333894143,
"acc_norm": 0.3941798941798942,
"acc_norm_stderr": 0.025167982333894143
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.3888888888888889,
"acc_stderr": 0.04360314860077459,
"acc_norm": 0.3888888888888889,
"acc_norm_stderr": 0.04360314860077459
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.32,
"acc_stderr": 0.04688261722621504,
"acc_norm": 0.32,
"acc_norm_stderr": 0.04688261722621504
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7451612903225806,
"acc_stderr": 0.024790118459332208,
"acc_norm": 0.7451612903225806,
"acc_norm_stderr": 0.024790118459332208
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.5172413793103449,
"acc_stderr": 0.035158955511656986,
"acc_norm": 0.5172413793103449,
"acc_norm_stderr": 0.035158955511656986
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.71,
"acc_stderr": 0.045604802157206845,
"acc_norm": 0.71,
"acc_norm_stderr": 0.045604802157206845
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7575757575757576,
"acc_stderr": 0.03346409881055953,
"acc_norm": 0.7575757575757576,
"acc_norm_stderr": 0.03346409881055953
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7727272727272727,
"acc_stderr": 0.02985751567338642,
"acc_norm": 0.7727272727272727,
"acc_norm_stderr": 0.02985751567338642
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8963730569948186,
"acc_stderr": 0.021995311963644234,
"acc_norm": 0.8963730569948186,
"acc_norm_stderr": 0.021995311963644234
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6076923076923076,
"acc_stderr": 0.02475600038213095,
"acc_norm": 0.6076923076923076,
"acc_norm_stderr": 0.02475600038213095
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.3,
"acc_stderr": 0.027940457136228405,
"acc_norm": 0.3,
"acc_norm_stderr": 0.027940457136228405
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.5798319327731093,
"acc_stderr": 0.03206183783236152,
"acc_norm": 0.5798319327731093,
"acc_norm_stderr": 0.03206183783236152
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.3708609271523179,
"acc_stderr": 0.03943966699183629,
"acc_norm": 0.3708609271523179,
"acc_norm_stderr": 0.03943966699183629
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8330275229357799,
"acc_stderr": 0.015990154885073382,
"acc_norm": 0.8330275229357799,
"acc_norm_stderr": 0.015990154885073382
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5787037037037037,
"acc_stderr": 0.03367462138896078,
"acc_norm": 0.5787037037037037,
"acc_norm_stderr": 0.03367462138896078
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.7990196078431373,
"acc_stderr": 0.02812597226565437,
"acc_norm": 0.7990196078431373,
"acc_norm_stderr": 0.02812597226565437
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.8143459915611815,
"acc_stderr": 0.025310495376944856,
"acc_norm": 0.8143459915611815,
"acc_norm_stderr": 0.025310495376944856
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.672645739910314,
"acc_stderr": 0.03149384670994131,
"acc_norm": 0.672645739910314,
"acc_norm_stderr": 0.03149384670994131
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.6793893129770993,
"acc_stderr": 0.04093329229834278,
"acc_norm": 0.6793893129770993,
"acc_norm_stderr": 0.04093329229834278
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.8099173553719008,
"acc_stderr": 0.03581796951709282,
"acc_norm": 0.8099173553719008,
"acc_norm_stderr": 0.03581796951709282
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7777777777777778,
"acc_stderr": 0.0401910747255735,
"acc_norm": 0.7777777777777778,
"acc_norm_stderr": 0.0401910747255735
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7423312883435583,
"acc_stderr": 0.03436150827846917,
"acc_norm": 0.7423312883435583,
"acc_norm_stderr": 0.03436150827846917
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.5,
"acc_stderr": 0.04745789978762494,
"acc_norm": 0.5,
"acc_norm_stderr": 0.04745789978762494
},
"harness|hendrycksTest-management|5": {
"acc": 0.7669902912621359,
"acc_stderr": 0.04185832598928315,
"acc_norm": 0.7669902912621359,
"acc_norm_stderr": 0.04185832598928315
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8504273504273504,
"acc_stderr": 0.023365051491753715,
"acc_norm": 0.8504273504273504,
"acc_norm_stderr": 0.023365051491753715
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.71,
"acc_stderr": 0.045604802157206845,
"acc_norm": 0.71,
"acc_norm_stderr": 0.045604802157206845
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.7854406130268199,
"acc_stderr": 0.014680033956893346,
"acc_norm": 0.7854406130268199,
"acc_norm_stderr": 0.014680033956893346
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.6994219653179191,
"acc_stderr": 0.024685316867257796,
"acc_norm": 0.6994219653179191,
"acc_norm_stderr": 0.024685316867257796
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.2659217877094972,
"acc_stderr": 0.014776765066438883,
"acc_norm": 0.2659217877094972,
"acc_norm_stderr": 0.014776765066438883
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7058823529411765,
"acc_stderr": 0.02609016250427905,
"acc_norm": 0.7058823529411765,
"acc_norm_stderr": 0.02609016250427905
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.6495176848874598,
"acc_stderr": 0.027098652621301754,
"acc_norm": 0.6495176848874598,
"acc_norm_stderr": 0.027098652621301754
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7098765432098766,
"acc_stderr": 0.025251173936495026,
"acc_norm": 0.7098765432098766,
"acc_norm_stderr": 0.025251173936495026
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.46808510638297873,
"acc_stderr": 0.029766675075873866,
"acc_norm": 0.46808510638297873,
"acc_norm_stderr": 0.029766675075873866
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.47392438070404175,
"acc_stderr": 0.012752858346533136,
"acc_norm": 0.47392438070404175,
"acc_norm_stderr": 0.012752858346533136
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.5919117647058824,
"acc_stderr": 0.029855261393483924,
"acc_norm": 0.5919117647058824,
"acc_norm_stderr": 0.029855261393483924
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6372549019607843,
"acc_stderr": 0.01945076843250551,
"acc_norm": 0.6372549019607843,
"acc_norm_stderr": 0.01945076843250551
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6545454545454545,
"acc_stderr": 0.04554619617541054,
"acc_norm": 0.6545454545454545,
"acc_norm_stderr": 0.04554619617541054
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7061224489795919,
"acc_stderr": 0.02916273841024978,
"acc_norm": 0.7061224489795919,
"acc_norm_stderr": 0.02916273841024978
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8258706467661692,
"acc_stderr": 0.026814951200421603,
"acc_norm": 0.8258706467661692,
"acc_norm_stderr": 0.026814951200421603
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.9,
"acc_stderr": 0.030151134457776334,
"acc_norm": 0.9,
"acc_norm_stderr": 0.030151134457776334
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5060240963855421,
"acc_stderr": 0.03892212195333045,
"acc_norm": 0.5060240963855421,
"acc_norm_stderr": 0.03892212195333045
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.7894736842105263,
"acc_stderr": 0.0312678171466318,
"acc_norm": 0.7894736842105263,
"acc_norm_stderr": 0.0312678171466318
},
"harness|truthfulqa:mc|0": {
"mc1": 0.3953488372093023,
"mc1_stderr": 0.017115815632418197,
"mc2": 0.5771247160568813,
"mc2_stderr": 0.015353165521314794
},
"harness|winogrande|5": {
"acc": 0.7671665351223362,
"acc_stderr": 0.011878201073856542
},
"harness|gsm8k|5": {
"acc": 0.5109931766489765,
"acc_stderr": 0.013769155509690904
}
}
```
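As a quick sanity check, the headline MMLU figure can be recomputed from the per-task entries above by averaging `acc_norm` over the `hendrycksTest` keys. A minimal sketch, assuming the JSON block above has been saved locally as `results.json` (a hypothetical path):

```python
import json

# Hypothetical local copy of the JSON results shown above.
with open("results.json") as f:
    results = json.load(f)

# Average acc_norm over the 57 MMLU (hendrycksTest) subtasks.
mmlu = [
    task["acc_norm"]
    for name, task in results.items()
    if name.startswith("harness|hendrycksTest-")
]
print(f"{len(mmlu)} MMLU subtasks, mean acc_norm = {sum(mmlu) / len(mmlu):.4f}")
```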
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases, and limitations of the dataset. More information is needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | open-llm-leaderboard/details_KnutJaegersberg__internlm-20b-llama | [
"region:us"
] | 2024-01-15T20:07:52+00:00 | {"pretty_name": "Evaluation run of KnutJaegersberg/internlm-20b-llama", "dataset_summary": "Dataset automatically created during the evaluation run of model [KnutJaegersberg/internlm-20b-llama](https://huggingface.co/KnutJaegersberg/internlm-20b-llama) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_KnutJaegersberg__internlm-20b-llama\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2024-01-15T20:05:42.898260](https://huggingface.co/datasets/open-llm-leaderboard/details_KnutJaegersberg__internlm-20b-llama/blob/main/results_2024-01-15T20-05-42.898260.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.615870685495934,\n \"acc_stderr\": 0.0325478099078455,\n \"acc_norm\": 0.6193109107986048,\n \"acc_norm_stderr\": 0.03319482956939103,\n \"mc1\": 0.3953488372093023,\n \"mc1_stderr\": 0.017115815632418197,\n \"mc2\": 0.5771247160568813,\n \"mc2_stderr\": 0.015353165521314794\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.5648464163822525,\n \"acc_stderr\": 0.014487986197186045,\n \"acc_norm\": 0.613481228668942,\n \"acc_norm_stderr\": 0.014230084761910478\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6199960167297351,\n \"acc_stderr\": 0.00484395433845144,\n \"acc_norm\": 0.8207528380800637,\n \"acc_norm_stderr\": 0.0038277525727700265\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.25,\n \"acc_stderr\": 0.04351941398892446,\n \"acc_norm\": 0.25,\n \"acc_norm_stderr\": 0.04351941398892446\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.5185185185185185,\n \"acc_stderr\": 0.043163785995113245,\n \"acc_norm\": 0.5185185185185185,\n \"acc_norm_stderr\": 0.043163785995113245\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.6578947368421053,\n \"acc_stderr\": 0.03860731599316092,\n \"acc_norm\": 0.6578947368421053,\n \"acc_norm_stderr\": 0.03860731599316092\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.62,\n \"acc_stderr\": 0.04878317312145632,\n \"acc_norm\": 0.62,\n \"acc_norm_stderr\": 0.04878317312145632\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.6188679245283019,\n \"acc_stderr\": 0.029890609686286637,\n \"acc_norm\": 0.6188679245283019,\n \"acc_norm_stderr\": 0.029890609686286637\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7291666666666666,\n \"acc_stderr\": 0.03716177437566019,\n \"acc_norm\": 0.7291666666666666,\n \"acc_norm_stderr\": 0.03716177437566019\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 
0.39,\n \"acc_stderr\": 0.04902071300001975,\n \"acc_norm\": 0.39,\n \"acc_norm_stderr\": 0.04902071300001975\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.55,\n \"acc_stderr\": 0.05,\n \"acc_norm\": 0.55,\n \"acc_norm_stderr\": 0.05\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.38,\n \"acc_stderr\": 0.048783173121456316,\n \"acc_norm\": 0.38,\n \"acc_norm_stderr\": 0.048783173121456316\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6069364161849711,\n \"acc_stderr\": 0.0372424959581773,\n \"acc_norm\": 0.6069364161849711,\n \"acc_norm_stderr\": 0.0372424959581773\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.38235294117647056,\n \"acc_stderr\": 0.04835503696107224,\n \"acc_norm\": 0.38235294117647056,\n \"acc_norm_stderr\": 0.04835503696107224\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.72,\n \"acc_stderr\": 0.04512608598542127,\n \"acc_norm\": 0.72,\n \"acc_norm_stderr\": 0.04512608598542127\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.5106382978723404,\n \"acc_stderr\": 0.03267862331014063,\n \"acc_norm\": 0.5106382978723404,\n \"acc_norm_stderr\": 0.03267862331014063\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.3684210526315789,\n \"acc_stderr\": 0.04537815354939392,\n \"acc_norm\": 0.3684210526315789,\n \"acc_norm_stderr\": 0.04537815354939392\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.5379310344827586,\n \"acc_stderr\": 0.04154659671707548,\n \"acc_norm\": 0.5379310344827586,\n \"acc_norm_stderr\": 0.04154659671707548\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.3941798941798942,\n \"acc_stderr\": 0.025167982333894143,\n \"acc_norm\": 0.3941798941798942,\n \"acc_norm_stderr\": 0.025167982333894143\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.3888888888888889,\n \"acc_stderr\": 0.04360314860077459,\n \"acc_norm\": 0.3888888888888889,\n \"acc_norm_stderr\": 0.04360314860077459\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.32,\n \"acc_stderr\": 0.04688261722621504,\n \"acc_norm\": 0.32,\n \"acc_norm_stderr\": 0.04688261722621504\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7451612903225806,\n \"acc_stderr\": 0.024790118459332208,\n \"acc_norm\": 0.7451612903225806,\n \"acc_norm_stderr\": 0.024790118459332208\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.5172413793103449,\n \"acc_stderr\": 0.035158955511656986,\n \"acc_norm\": 0.5172413793103449,\n \"acc_norm_stderr\": 0.035158955511656986\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.71,\n \"acc_stderr\": 0.045604802157206845,\n \"acc_norm\": 0.71,\n \"acc_norm_stderr\": 0.045604802157206845\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.7575757575757576,\n \"acc_stderr\": 0.03346409881055953,\n \"acc_norm\": 0.7575757575757576,\n \"acc_norm_stderr\": 0.03346409881055953\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.7727272727272727,\n \"acc_stderr\": 0.02985751567338642,\n \"acc_norm\": 0.7727272727272727,\n \"acc_norm_stderr\": 0.02985751567338642\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.8963730569948186,\n \"acc_stderr\": 0.021995311963644234,\n \"acc_norm\": 0.8963730569948186,\n \"acc_norm_stderr\": 0.021995311963644234\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": 
{\n \"acc\": 0.6076923076923076,\n \"acc_stderr\": 0.02475600038213095,\n \"acc_norm\": 0.6076923076923076,\n \"acc_norm_stderr\": 0.02475600038213095\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.3,\n \"acc_stderr\": 0.027940457136228405,\n \"acc_norm\": 0.3,\n \"acc_norm_stderr\": 0.027940457136228405\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.5798319327731093,\n \"acc_stderr\": 0.03206183783236152,\n \"acc_norm\": 0.5798319327731093,\n \"acc_norm_stderr\": 0.03206183783236152\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.3708609271523179,\n \"acc_stderr\": 0.03943966699183629,\n \"acc_norm\": 0.3708609271523179,\n \"acc_norm_stderr\": 0.03943966699183629\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.8330275229357799,\n \"acc_stderr\": 0.015990154885073382,\n \"acc_norm\": 0.8330275229357799,\n \"acc_norm_stderr\": 0.015990154885073382\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.5787037037037037,\n \"acc_stderr\": 0.03367462138896078,\n \"acc_norm\": 0.5787037037037037,\n \"acc_norm_stderr\": 0.03367462138896078\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.7990196078431373,\n \"acc_stderr\": 0.02812597226565437,\n \"acc_norm\": 0.7990196078431373,\n \"acc_norm_stderr\": 0.02812597226565437\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.8143459915611815,\n \"acc_stderr\": 0.025310495376944856,\n \"acc_norm\": 0.8143459915611815,\n \"acc_norm_stderr\": 0.025310495376944856\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.672645739910314,\n \"acc_stderr\": 0.03149384670994131,\n \"acc_norm\": 0.672645739910314,\n \"acc_norm_stderr\": 0.03149384670994131\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.6793893129770993,\n \"acc_stderr\": 0.04093329229834278,\n \"acc_norm\": 0.6793893129770993,\n \"acc_norm_stderr\": 0.04093329229834278\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.8099173553719008,\n \"acc_stderr\": 0.03581796951709282,\n \"acc_norm\": 0.8099173553719008,\n \"acc_norm_stderr\": 0.03581796951709282\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7777777777777778,\n \"acc_stderr\": 0.0401910747255735,\n \"acc_norm\": 0.7777777777777778,\n \"acc_norm_stderr\": 0.0401910747255735\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.7423312883435583,\n \"acc_stderr\": 0.03436150827846917,\n \"acc_norm\": 0.7423312883435583,\n \"acc_norm_stderr\": 0.03436150827846917\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.5,\n \"acc_stderr\": 0.04745789978762494,\n \"acc_norm\": 0.5,\n \"acc_norm_stderr\": 0.04745789978762494\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.7669902912621359,\n \"acc_stderr\": 0.04185832598928315,\n \"acc_norm\": 0.7669902912621359,\n \"acc_norm_stderr\": 0.04185832598928315\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8504273504273504,\n \"acc_stderr\": 0.023365051491753715,\n \"acc_norm\": 0.8504273504273504,\n \"acc_norm_stderr\": 0.023365051491753715\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.71,\n \"acc_stderr\": 0.045604802157206845,\n \"acc_norm\": 0.71,\n \"acc_norm_stderr\": 0.045604802157206845\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.7854406130268199,\n \"acc_stderr\": 0.014680033956893346,\n \"acc_norm\": 0.7854406130268199,\n \"acc_norm_stderr\": 
0.014680033956893346\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.6994219653179191,\n \"acc_stderr\": 0.024685316867257796,\n \"acc_norm\": 0.6994219653179191,\n \"acc_norm_stderr\": 0.024685316867257796\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.2659217877094972,\n \"acc_stderr\": 0.014776765066438883,\n \"acc_norm\": 0.2659217877094972,\n \"acc_norm_stderr\": 0.014776765066438883\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.7058823529411765,\n \"acc_stderr\": 0.02609016250427905,\n \"acc_norm\": 0.7058823529411765,\n \"acc_norm_stderr\": 0.02609016250427905\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.6495176848874598,\n \"acc_stderr\": 0.027098652621301754,\n \"acc_norm\": 0.6495176848874598,\n \"acc_norm_stderr\": 0.027098652621301754\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.7098765432098766,\n \"acc_stderr\": 0.025251173936495026,\n \"acc_norm\": 0.7098765432098766,\n \"acc_norm_stderr\": 0.025251173936495026\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.46808510638297873,\n \"acc_stderr\": 0.029766675075873866,\n \"acc_norm\": 0.46808510638297873,\n \"acc_norm_stderr\": 0.029766675075873866\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.47392438070404175,\n \"acc_stderr\": 0.012752858346533136,\n \"acc_norm\": 0.47392438070404175,\n \"acc_norm_stderr\": 0.012752858346533136\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.5919117647058824,\n \"acc_stderr\": 0.029855261393483924,\n \"acc_norm\": 0.5919117647058824,\n \"acc_norm_stderr\": 0.029855261393483924\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.6372549019607843,\n \"acc_stderr\": 0.01945076843250551,\n \"acc_norm\": 0.6372549019607843,\n \"acc_norm_stderr\": 0.01945076843250551\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6545454545454545,\n \"acc_stderr\": 0.04554619617541054,\n \"acc_norm\": 0.6545454545454545,\n \"acc_norm_stderr\": 0.04554619617541054\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.7061224489795919,\n \"acc_stderr\": 0.02916273841024978,\n \"acc_norm\": 0.7061224489795919,\n \"acc_norm_stderr\": 0.02916273841024978\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8258706467661692,\n \"acc_stderr\": 0.026814951200421603,\n \"acc_norm\": 0.8258706467661692,\n \"acc_norm_stderr\": 0.026814951200421603\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.9,\n \"acc_stderr\": 0.030151134457776334,\n \"acc_norm\": 0.9,\n \"acc_norm_stderr\": 0.030151134457776334\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5060240963855421,\n \"acc_stderr\": 0.03892212195333045,\n \"acc_norm\": 0.5060240963855421,\n \"acc_norm_stderr\": 0.03892212195333045\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.7894736842105263,\n \"acc_stderr\": 0.0312678171466318,\n \"acc_norm\": 0.7894736842105263,\n \"acc_norm_stderr\": 0.0312678171466318\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.3953488372093023,\n \"mc1_stderr\": 0.017115815632418197,\n \"mc2\": 0.5771247160568813,\n \"mc2_stderr\": 0.015353165521314794\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7671665351223362,\n \"acc_stderr\": 0.011878201073856542\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.5109931766489765,\n \"acc_stderr\": 0.013769155509690904\n }\n}\n```", "repo_url": "https://huggingface.co/KnutJaegersberg/internlm-20b-llama", "leaderboard_url": 
"https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2024_01_15T20_05_42.898260", "path": ["**/details_harness|arc:challenge|25_2024-01-15T20-05-42.898260.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2024-01-15T20-05-42.898260.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2024_01_15T20_05_42.898260", "path": ["**/details_harness|gsm8k|5_2024-01-15T20-05-42.898260.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2024-01-15T20-05-42.898260.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2024_01_15T20_05_42.898260", "path": ["**/details_harness|hellaswag|10_2024-01-15T20-05-42.898260.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2024-01-15T20-05-42.898260.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2024_01_15T20_05_42.898260", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-15T20-05-42.898260.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-15T20-05-42.898260.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-15T20-05-42.898260.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-15T20-05-42.898260.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-15T20-05-42.898260.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-15T20-05-42.898260.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-15T20-05-42.898260.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-15T20-05-42.898260.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-01-15T20-05-42.898260.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-15T20-05-42.898260.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-15T20-05-42.898260.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-15T20-05-42.898260.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-15T20-05-42.898260.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-15T20-05-42.898260.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-15T20-05-42.898260.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-15T20-05-42.898260.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-15T20-05-42.898260.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-15T20-05-42.898260.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-15T20-05-42.898260.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-15T20-05-42.898260.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-15T20-05-42.898260.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-15T20-05-42.898260.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-15T20-05-42.898260.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-15T20-05-42.898260.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-15T20-05-42.898260.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-15T20-05-42.898260.parquet", 
"**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-15T20-05-42.898260.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-15T20-05-42.898260.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-15T20-05-42.898260.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-15T20-05-42.898260.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-15T20-05-42.898260.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-15T20-05-42.898260.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-15T20-05-42.898260.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-15T20-05-42.898260.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-15T20-05-42.898260.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-15T20-05-42.898260.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-15T20-05-42.898260.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-15T20-05-42.898260.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-15T20-05-42.898260.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-15T20-05-42.898260.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-15T20-05-42.898260.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-15T20-05-42.898260.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-15T20-05-42.898260.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-15T20-05-42.898260.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-15T20-05-42.898260.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-15T20-05-42.898260.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-15T20-05-42.898260.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-15T20-05-42.898260.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-15T20-05-42.898260.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-15T20-05-42.898260.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-15T20-05-42.898260.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-15T20-05-42.898260.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-15T20-05-42.898260.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-15T20-05-42.898260.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-15T20-05-42.898260.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-15T20-05-42.898260.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-15T20-05-42.898260.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-15T20-05-42.898260.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-15T20-05-42.898260.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-15T20-05-42.898260.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-15T20-05-42.898260.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-15T20-05-42.898260.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-15T20-05-42.898260.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-15T20-05-42.898260.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-15T20-05-42.898260.parquet", 
"**/details_harness|hendrycksTest-college_mathematics|5_2024-01-15T20-05-42.898260.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-15T20-05-42.898260.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-15T20-05-42.898260.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-15T20-05-42.898260.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-15T20-05-42.898260.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-15T20-05-42.898260.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-15T20-05-42.898260.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-15T20-05-42.898260.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-15T20-05-42.898260.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-15T20-05-42.898260.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-15T20-05-42.898260.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-15T20-05-42.898260.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-15T20-05-42.898260.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-15T20-05-42.898260.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-15T20-05-42.898260.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-15T20-05-42.898260.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-15T20-05-42.898260.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-15T20-05-42.898260.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-15T20-05-42.898260.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-15T20-05-42.898260.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-15T20-05-42.898260.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-15T20-05-42.898260.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-15T20-05-42.898260.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-15T20-05-42.898260.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-15T20-05-42.898260.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-15T20-05-42.898260.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-15T20-05-42.898260.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-15T20-05-42.898260.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-15T20-05-42.898260.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-15T20-05-42.898260.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-15T20-05-42.898260.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-15T20-05-42.898260.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-15T20-05-42.898260.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-15T20-05-42.898260.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-15T20-05-42.898260.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-15T20-05-42.898260.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-15T20-05-42.898260.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-15T20-05-42.898260.parquet", 
"**/details_harness|hendrycksTest-prehistory|5_2024-01-15T20-05-42.898260.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-15T20-05-42.898260.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-15T20-05-42.898260.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-15T20-05-42.898260.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-15T20-05-42.898260.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-15T20-05-42.898260.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-15T20-05-42.898260.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-15T20-05-42.898260.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-15T20-05-42.898260.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-15T20-05-42.898260.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-15T20-05-42.898260.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2024_01_15T20_05_42.898260", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-15T20-05-42.898260.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-15T20-05-42.898260.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2024_01_15T20_05_42.898260", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-15T20-05-42.898260.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-15T20-05-42.898260.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2024_01_15T20_05_42.898260", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-15T20-05-42.898260.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-15T20-05-42.898260.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2024_01_15T20_05_42.898260", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-15T20-05-42.898260.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-15T20-05-42.898260.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2024_01_15T20_05_42.898260", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-15T20-05-42.898260.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-15T20-05-42.898260.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2024_01_15T20_05_42.898260", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-15T20-05-42.898260.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-15T20-05-42.898260.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2024_01_15T20_05_42.898260", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-15T20-05-42.898260.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-15T20-05-42.898260.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2024_01_15T20_05_42.898260", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-15T20-05-42.898260.parquet"]}, 
{"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-15T20-05-42.898260.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2024_01_15T20_05_42.898260", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-15T20-05-42.898260.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-15T20-05-42.898260.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2024_01_15T20_05_42.898260", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-15T20-05-42.898260.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-15T20-05-42.898260.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2024_01_15T20_05_42.898260", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-15T20-05-42.898260.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-15T20-05-42.898260.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2024_01_15T20_05_42.898260", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-15T20-05-42.898260.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-15T20-05-42.898260.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2024_01_15T20_05_42.898260", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-15T20-05-42.898260.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-15T20-05-42.898260.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2024_01_15T20_05_42.898260", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-15T20-05-42.898260.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-15T20-05-42.898260.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2024_01_15T20_05_42.898260", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-15T20-05-42.898260.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-15T20-05-42.898260.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2024_01_15T20_05_42.898260", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-15T20-05-42.898260.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-15T20-05-42.898260.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2024_01_15T20_05_42.898260", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-15T20-05-42.898260.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-15T20-05-42.898260.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2024_01_15T20_05_42.898260", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-15T20-05-42.898260.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-15T20-05-42.898260.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2024_01_15T20_05_42.898260", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-15T20-05-42.898260.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-15T20-05-42.898260.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2024_01_15T20_05_42.898260", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-15T20-05-42.898260.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-15T20-05-42.898260.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2024_01_15T20_05_42.898260", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-15T20-05-42.898260.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-15T20-05-42.898260.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2024_01_15T20_05_42.898260", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-15T20-05-42.898260.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-15T20-05-42.898260.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2024_01_15T20_05_42.898260", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-15T20-05-42.898260.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-15T20-05-42.898260.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2024_01_15T20_05_42.898260", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-15T20-05-42.898260.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-15T20-05-42.898260.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2024_01_15T20_05_42.898260", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-15T20-05-42.898260.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-15T20-05-42.898260.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2024_01_15T20_05_42.898260", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-15T20-05-42.898260.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-15T20-05-42.898260.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2024_01_15T20_05_42.898260", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-15T20-05-42.898260.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-15T20-05-42.898260.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2024_01_15T20_05_42.898260", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-15T20-05-42.898260.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-15T20-05-42.898260.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2024_01_15T20_05_42.898260", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-15T20-05-42.898260.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-15T20-05-42.898260.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2024_01_15T20_05_42.898260", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-15T20-05-42.898260.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-15T20-05-42.898260.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2024_01_15T20_05_42.898260", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-15T20-05-42.898260.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-15T20-05-42.898260.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2024_01_15T20_05_42.898260", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-15T20-05-42.898260.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-15T20-05-42.898260.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2024_01_15T20_05_42.898260", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-15T20-05-42.898260.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-15T20-05-42.898260.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2024_01_15T20_05_42.898260", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-15T20-05-42.898260.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-15T20-05-42.898260.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2024_01_15T20_05_42.898260", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-15T20-05-42.898260.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-15T20-05-42.898260.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2024_01_15T20_05_42.898260", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-15T20-05-42.898260.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-15T20-05-42.898260.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2024_01_15T20_05_42.898260", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-15T20-05-42.898260.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-15T20-05-42.898260.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2024_01_15T20_05_42.898260", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-15T20-05-42.898260.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-15T20-05-42.898260.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2024_01_15T20_05_42.898260", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-15T20-05-42.898260.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-15T20-05-42.898260.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2024_01_15T20_05_42.898260", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-15T20-05-42.898260.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-15T20-05-42.898260.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2024_01_15T20_05_42.898260", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-15T20-05-42.898260.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-15T20-05-42.898260.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2024_01_15T20_05_42.898260", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-15T20-05-42.898260.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-15T20-05-42.898260.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2024_01_15T20_05_42.898260", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-15T20-05-42.898260.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-15T20-05-42.898260.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2024_01_15T20_05_42.898260", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-15T20-05-42.898260.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-15T20-05-42.898260.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2024_01_15T20_05_42.898260", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-15T20-05-42.898260.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-15T20-05-42.898260.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2024_01_15T20_05_42.898260", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-15T20-05-42.898260.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-15T20-05-42.898260.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2024_01_15T20_05_42.898260", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-15T20-05-42.898260.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-15T20-05-42.898260.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2024_01_15T20_05_42.898260", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-15T20-05-42.898260.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-15T20-05-42.898260.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2024_01_15T20_05_42.898260", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-01-15T20-05-42.898260.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-professional_law|5_2024-01-15T20-05-42.898260.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2024_01_15T20_05_42.898260", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-15T20-05-42.898260.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-15T20-05-42.898260.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2024_01_15T20_05_42.898260", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-15T20-05-42.898260.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-15T20-05-42.898260.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2024_01_15T20_05_42.898260", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-15T20-05-42.898260.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-15T20-05-42.898260.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2024_01_15T20_05_42.898260", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-15T20-05-42.898260.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-15T20-05-42.898260.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2024_01_15T20_05_42.898260", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-15T20-05-42.898260.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-15T20-05-42.898260.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2024_01_15T20_05_42.898260", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-15T20-05-42.898260.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-15T20-05-42.898260.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2024_01_15T20_05_42.898260", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-15T20-05-42.898260.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-15T20-05-42.898260.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2024_01_15T20_05_42.898260", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-15T20-05-42.898260.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-15T20-05-42.898260.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2024_01_15T20_05_42.898260", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-15T20-05-42.898260.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-15T20-05-42.898260.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2024_01_15T20_05_42.898260", "path": ["**/details_harness|winogrande|5_2024-01-15T20-05-42.898260.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2024-01-15T20-05-42.898260.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2024_01_15T20_05_42.898260", "path": ["results_2024-01-15T20-05-42.898260.parquet"]}, {"split": "latest", "path": 
["results_2024-01-15T20-05-42.898260.parquet"]}]}]} | 2024-01-15T20:08:14+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of KnutJaegersberg/internlm-20b-llama
Dataset automatically created during the evaluation run of model KnutJaegersberg/internlm-20b-llama on the Open LLM Leaderboard.
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
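```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_KnutJaegersberg__internlm-20b-llama",
	"harness_winogrande_5",
	split="train")
```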
## Latest results
These are the latest results from run 2024-01-15T20:05:42.898260 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
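The aggregate block of those results, reproduced from this card's metadata (the full per-task breakdown lives in the results file), is:

```python
{
    "all": {
        "acc": 0.615870685495934,
        "acc_stderr": 0.0325478099078455,
        "acc_norm": 0.6193109107986048,
        "acc_norm_stderr": 0.03319482956939103,
        "mc1": 0.3953488372093023,
        "mc1_stderr": 0.017115815632418197,
        "mc2": 0.5771247160568813,
        "mc2_stderr": 0.015353165521314794
    }
}
```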
## Dataset Details
### Dataset Description
- Curated by:
- Funded by [optional]:
- Shared by [optional]:
- Language(s) (NLP):
- License:
### Dataset Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Out-of-Scope Use
## Dataset Structure
## Dataset Creation
### Curation Rationale
### Source Data
#### Data Collection and Processing
#### Who are the source data producers?
### Annotations [optional]
#### Annotation process
#### Who are the annotators?
#### Personal and Sensitive Information
## Bias, Risks, and Limitations
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Dataset Card Authors [optional]
## Dataset Card Contact
a89208606320e921a57fdb779125a1f22a9d8cbe |
# Dataset Card for Evaluation run of cognitivecomputations/laserxtral
Dataset automatically created during the evaluation run of model [cognitivecomputations/laserxtral](https://huggingface.co/cognitivecomputations/laserxtral) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_cognitivecomputations__laserxtral",
"harness_winogrande_5",
split="train")
```
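Similarly, the aggregated metrics can be pulled from the "results" config. A minimal sketch, assuming this dataset follows the same config layout as the other leaderboard detail datasets (a "results" config whose "latest" split points to the most recent run):

```python
from datasets import load_dataset

# Assumed layout: the "results" config stores the aggregated run results,
# and its "latest" split always points to the most recent evaluation.
results = load_dataset("open-llm-leaderboard/details_cognitivecomputations__laserxtral",
	"results",
	split="latest")
```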
## Latest results
These are the [latest results from run 2024-01-15T20:05:55.052145](https://huggingface.co/datasets/open-llm-leaderboard/details_cognitivecomputations__laserxtral/blob/main/results_2024-01-15T20-05-55.052145.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.6512747160527517,
"acc_stderr": 0.03203788592010298,
"acc_norm": 0.6512676159278845,
"acc_norm_stderr": 0.03269445186595507,
"mc1": 0.4773561811505508,
"mc1_stderr": 0.01748554225848965,
"mc2": 0.638014374039814,
"mc2_stderr": 0.01550457171837664
},
"harness|arc:challenge|25": {
"acc": 0.6697952218430034,
"acc_stderr": 0.013743085603760424,
"acc_norm": 0.6902730375426621,
"acc_norm_stderr": 0.013512058415238363
},
"harness|hellaswag|10": {
"acc": 0.6931886078470424,
"acc_stderr": 0.004602279238122065,
"acc_norm": 0.8675562636924915,
"acc_norm_stderr": 0.0033827979075230284
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.35,
"acc_stderr": 0.0479372485441102,
"acc_norm": 0.35,
"acc_norm_stderr": 0.0479372485441102
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6444444444444445,
"acc_stderr": 0.04135176749720385,
"acc_norm": 0.6444444444444445,
"acc_norm_stderr": 0.04135176749720385
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.6776315789473685,
"acc_stderr": 0.03803510248351585,
"acc_norm": 0.6776315789473685,
"acc_norm_stderr": 0.03803510248351585
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.63,
"acc_stderr": 0.04852365870939099,
"acc_norm": 0.63,
"acc_norm_stderr": 0.04852365870939099
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.7169811320754716,
"acc_stderr": 0.027724236492700914,
"acc_norm": 0.7169811320754716,
"acc_norm_stderr": 0.027724236492700914
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7708333333333334,
"acc_stderr": 0.03514697467862388,
"acc_norm": 0.7708333333333334,
"acc_norm_stderr": 0.03514697467862388
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.47,
"acc_stderr": 0.050161355804659205,
"acc_norm": 0.47,
"acc_norm_stderr": 0.050161355804659205
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.53,
"acc_stderr": 0.050161355804659205,
"acc_norm": 0.53,
"acc_norm_stderr": 0.050161355804659205
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.32,
"acc_stderr": 0.04688261722621504,
"acc_norm": 0.32,
"acc_norm_stderr": 0.04688261722621504
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6647398843930635,
"acc_stderr": 0.03599586301247077,
"acc_norm": 0.6647398843930635,
"acc_norm_stderr": 0.03599586301247077
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.39215686274509803,
"acc_stderr": 0.048580835742663454,
"acc_norm": 0.39215686274509803,
"acc_norm_stderr": 0.048580835742663454
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.77,
"acc_stderr": 0.04229525846816508,
"acc_norm": 0.77,
"acc_norm_stderr": 0.04229525846816508
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5702127659574469,
"acc_stderr": 0.03236214467715564,
"acc_norm": 0.5702127659574469,
"acc_norm_stderr": 0.03236214467715564
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.5,
"acc_stderr": 0.047036043419179864,
"acc_norm": 0.5,
"acc_norm_stderr": 0.047036043419179864
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5586206896551724,
"acc_stderr": 0.04137931034482757,
"acc_norm": 0.5586206896551724,
"acc_norm_stderr": 0.04137931034482757
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.42592592592592593,
"acc_stderr": 0.02546714904546955,
"acc_norm": 0.42592592592592593,
"acc_norm_stderr": 0.02546714904546955
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.46825396825396826,
"acc_stderr": 0.04463112720677171,
"acc_norm": 0.46825396825396826,
"acc_norm_stderr": 0.04463112720677171
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.35,
"acc_stderr": 0.047937248544110196,
"acc_norm": 0.35,
"acc_norm_stderr": 0.047937248544110196
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7709677419354839,
"acc_stderr": 0.023904914311782648,
"acc_norm": 0.7709677419354839,
"acc_norm_stderr": 0.023904914311782648
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.5123152709359606,
"acc_stderr": 0.035169204442208966,
"acc_norm": 0.5123152709359606,
"acc_norm_stderr": 0.035169204442208966
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.69,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.69,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7636363636363637,
"acc_stderr": 0.03317505930009182,
"acc_norm": 0.7636363636363637,
"acc_norm_stderr": 0.03317505930009182
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7828282828282829,
"acc_stderr": 0.029376616484945633,
"acc_norm": 0.7828282828282829,
"acc_norm_stderr": 0.029376616484945633
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.9015544041450777,
"acc_stderr": 0.021500249576033477,
"acc_norm": 0.9015544041450777,
"acc_norm_stderr": 0.021500249576033477
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6641025641025641,
"acc_stderr": 0.023946724741563976,
"acc_norm": 0.6641025641025641,
"acc_norm_stderr": 0.023946724741563976
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.34814814814814815,
"acc_stderr": 0.029045600290616255,
"acc_norm": 0.34814814814814815,
"acc_norm_stderr": 0.029045600290616255
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6680672268907563,
"acc_stderr": 0.03058869701378364,
"acc_norm": 0.6680672268907563,
"acc_norm_stderr": 0.03058869701378364
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.31788079470198677,
"acc_stderr": 0.038020397601079024,
"acc_norm": 0.31788079470198677,
"acc_norm_stderr": 0.038020397601079024
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8458715596330275,
"acc_stderr": 0.015480826865374303,
"acc_norm": 0.8458715596330275,
"acc_norm_stderr": 0.015480826865374303
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.48148148148148145,
"acc_stderr": 0.034076320938540516,
"acc_norm": 0.48148148148148145,
"acc_norm_stderr": 0.034076320938540516
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.8333333333333334,
"acc_stderr": 0.026156867523931045,
"acc_norm": 0.8333333333333334,
"acc_norm_stderr": 0.026156867523931045
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.810126582278481,
"acc_stderr": 0.025530100460233494,
"acc_norm": 0.810126582278481,
"acc_norm_stderr": 0.025530100460233494
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6905829596412556,
"acc_stderr": 0.03102441174057221,
"acc_norm": 0.6905829596412556,
"acc_norm_stderr": 0.03102441174057221
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.816793893129771,
"acc_stderr": 0.03392770926494733,
"acc_norm": 0.816793893129771,
"acc_norm_stderr": 0.03392770926494733
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.7933884297520661,
"acc_stderr": 0.03695980128098824,
"acc_norm": 0.7933884297520661,
"acc_norm_stderr": 0.03695980128098824
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.8240740740740741,
"acc_stderr": 0.036809181416738807,
"acc_norm": 0.8240740740740741,
"acc_norm_stderr": 0.036809181416738807
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7730061349693251,
"acc_stderr": 0.03291099578615769,
"acc_norm": 0.7730061349693251,
"acc_norm_stderr": 0.03291099578615769
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.48214285714285715,
"acc_stderr": 0.047427623612430116,
"acc_norm": 0.48214285714285715,
"acc_norm_stderr": 0.047427623612430116
},
"harness|hendrycksTest-management|5": {
"acc": 0.7864077669902912,
"acc_stderr": 0.040580420156460344,
"acc_norm": 0.7864077669902912,
"acc_norm_stderr": 0.040580420156460344
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8675213675213675,
"acc_stderr": 0.022209309073165612,
"acc_norm": 0.8675213675213675,
"acc_norm_stderr": 0.022209309073165612
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.73,
"acc_stderr": 0.044619604333847394,
"acc_norm": 0.73,
"acc_norm_stderr": 0.044619604333847394
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8378033205619413,
"acc_stderr": 0.01318222261672088,
"acc_norm": 0.8378033205619413,
"acc_norm_stderr": 0.01318222261672088
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7398843930635838,
"acc_stderr": 0.023618678310069356,
"acc_norm": 0.7398843930635838,
"acc_norm_stderr": 0.023618678310069356
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.3877094972067039,
"acc_stderr": 0.016295332328155807,
"acc_norm": 0.3877094972067039,
"acc_norm_stderr": 0.016295332328155807
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7222222222222222,
"acc_stderr": 0.025646863097137897,
"acc_norm": 0.7222222222222222,
"acc_norm_stderr": 0.025646863097137897
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.7041800643086816,
"acc_stderr": 0.025922371788818763,
"acc_norm": 0.7041800643086816,
"acc_norm_stderr": 0.025922371788818763
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7592592592592593,
"acc_stderr": 0.023788583551658533,
"acc_norm": 0.7592592592592593,
"acc_norm_stderr": 0.023788583551658533
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.48226950354609927,
"acc_stderr": 0.02980873964223777,
"acc_norm": 0.48226950354609927,
"acc_norm_stderr": 0.02980873964223777
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.4556714471968709,
"acc_stderr": 0.012719949543032205,
"acc_norm": 0.4556714471968709,
"acc_norm_stderr": 0.012719949543032205
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6580882352941176,
"acc_stderr": 0.028814722422254184,
"acc_norm": 0.6580882352941176,
"acc_norm_stderr": 0.028814722422254184
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6813725490196079,
"acc_stderr": 0.01885008469646872,
"acc_norm": 0.6813725490196079,
"acc_norm_stderr": 0.01885008469646872
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6818181818181818,
"acc_stderr": 0.044612721759105085,
"acc_norm": 0.6818181818181818,
"acc_norm_stderr": 0.044612721759105085
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.726530612244898,
"acc_stderr": 0.02853556033712844,
"acc_norm": 0.726530612244898,
"acc_norm_stderr": 0.02853556033712844
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.835820895522388,
"acc_stderr": 0.02619392354445412,
"acc_norm": 0.835820895522388,
"acc_norm_stderr": 0.02619392354445412
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.84,
"acc_stderr": 0.03684529491774709,
"acc_norm": 0.84,
"acc_norm_stderr": 0.03684529491774709
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5481927710843374,
"acc_stderr": 0.03874371556587953,
"acc_norm": 0.5481927710843374,
"acc_norm_stderr": 0.03874371556587953
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8421052631578947,
"acc_stderr": 0.027966785859160893,
"acc_norm": 0.8421052631578947,
"acc_norm_stderr": 0.027966785859160893
},
"harness|truthfulqa:mc|0": {
"mc1": 0.4773561811505508,
"mc1_stderr": 0.01748554225848965,
"mc2": 0.638014374039814,
"mc2_stderr": 0.01550457171837664
},
"harness|winogrande|5": {
"acc": 0.8003157063930545,
"acc_stderr": 0.011235328382625842
},
"harness|gsm8k|5": {
"acc": 0.6974981046247157,
"acc_stderr": 0.012652544133186141
}
}
```
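If you prefer to work with these aggregated metrics programmatically instead of copying them out of the JSON above, a minimal sketch is shown below. It assumes the `results` configuration and its `latest` split, which are listed among this repository's configurations; adjust the names if your copy differs.

```python
from datasets import load_dataset

# Load the aggregated metrics of the most recent evaluation run.
# The "results" config and "latest" split are assumptions taken from
# this repository's configuration listing.
results = load_dataset(
    "open-llm-leaderboard/details_cognitivecomputations__laserxtral",
    "results",
    split="latest",
)

print(results)  # a Dataset with one row per stored evaluation run
```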
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
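Pending a fuller description, one plausible direct use is re-analysing the per-task scores reported above, for example ranking the MMLU (`hendrycksTest`) subtasks by accuracy. The sketch below assumes the JSON results block above has been saved to a local file; the filename is illustrative.

```python
import json

# Assumes the JSON results shown above were saved locally as results.json
# (an illustrative filename, not a file shipped with this repository).
with open("results.json") as f:
    results = json.load(f)

# Collect the accuracy of every MMLU ("hendrycksTest") subtask.
mmlu = {
    task: scores["acc"]
    for task, scores in results.items()
    if task.startswith("harness|hendrycksTest-")
}

# Print the five strongest subtasks.
for task, acc in sorted(mmlu.items(), key=lambda kv: kv[1], reverse=True)[:5]:
    print(f"{task}: {acc:.3f}")
```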
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
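Until this section is completed, the structure can be inspected directly by loading one of the per-task configurations and printing its features. The sketch below uses the `harness_winogrande_5` configuration from this repository's listing; as noted earlier in this card, the `train` split always points to the latest results.

```python
from datasets import load_dataset

# Load a single per-task configuration to inspect its schema.
# "harness_winogrande_5" is one of the configurations listed for this repo.
details = load_dataset(
    "open-llm-leaderboard/details_cognitivecomputations__laserxtral",
    "harness_winogrande_5",
    split="train",
)

print(details.features)  # column names and types
print(details[0])        # the first evaluated example
```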
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases, and limitations of the dataset. More information is needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | open-llm-leaderboard/details_cognitivecomputations__laserxtral | [
"region:us"
] | 2024-01-15T20:08:12+00:00 | {"pretty_name": "Evaluation run of cognitivecomputations/laserxtral", "dataset_summary": "Dataset automatically created during the evaluation run of model [cognitivecomputations/laserxtral](https://huggingface.co/cognitivecomputations/laserxtral) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_cognitivecomputations__laserxtral\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2024-01-15T20:05:55.052145](https://huggingface.co/datasets/open-llm-leaderboard/details_cognitivecomputations__laserxtral/blob/main/results_2024-01-15T20-05-55.052145.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6512747160527517,\n \"acc_stderr\": 0.03203788592010298,\n \"acc_norm\": 0.6512676159278845,\n \"acc_norm_stderr\": 0.03269445186595507,\n \"mc1\": 0.4773561811505508,\n \"mc1_stderr\": 0.01748554225848965,\n \"mc2\": 0.638014374039814,\n \"mc2_stderr\": 0.01550457171837664\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.6697952218430034,\n \"acc_stderr\": 0.013743085603760424,\n \"acc_norm\": 0.6902730375426621,\n \"acc_norm_stderr\": 0.013512058415238363\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6931886078470424,\n \"acc_stderr\": 0.004602279238122065,\n \"acc_norm\": 0.8675562636924915,\n \"acc_norm_stderr\": 0.0033827979075230284\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.35,\n \"acc_stderr\": 0.0479372485441102,\n \"acc_norm\": 0.35,\n \"acc_norm_stderr\": 0.0479372485441102\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6444444444444445,\n \"acc_stderr\": 0.04135176749720385,\n \"acc_norm\": 0.6444444444444445,\n \"acc_norm_stderr\": 0.04135176749720385\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.6776315789473685,\n \"acc_stderr\": 0.03803510248351585,\n \"acc_norm\": 0.6776315789473685,\n \"acc_norm_stderr\": 0.03803510248351585\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.63,\n \"acc_stderr\": 0.04852365870939099,\n \"acc_norm\": 0.63,\n \"acc_norm_stderr\": 0.04852365870939099\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.7169811320754716,\n \"acc_stderr\": 0.027724236492700914,\n \"acc_norm\": 0.7169811320754716,\n \"acc_norm_stderr\": 0.027724236492700914\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7708333333333334,\n \"acc_stderr\": 0.03514697467862388,\n \"acc_norm\": 0.7708333333333334,\n \"acc_norm_stderr\": 0.03514697467862388\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.47,\n 
\"acc_stderr\": 0.050161355804659205,\n \"acc_norm\": 0.47,\n \"acc_norm_stderr\": 0.050161355804659205\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.53,\n \"acc_stderr\": 0.050161355804659205,\n \"acc_norm\": 0.53,\n \"acc_norm_stderr\": 0.050161355804659205\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.32,\n \"acc_stderr\": 0.04688261722621504,\n \"acc_norm\": 0.32,\n \"acc_norm_stderr\": 0.04688261722621504\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6647398843930635,\n \"acc_stderr\": 0.03599586301247077,\n \"acc_norm\": 0.6647398843930635,\n \"acc_norm_stderr\": 0.03599586301247077\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.39215686274509803,\n \"acc_stderr\": 0.048580835742663454,\n \"acc_norm\": 0.39215686274509803,\n \"acc_norm_stderr\": 0.048580835742663454\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.77,\n \"acc_stderr\": 0.04229525846816508,\n \"acc_norm\": 0.77,\n \"acc_norm_stderr\": 0.04229525846816508\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.5702127659574469,\n \"acc_stderr\": 0.03236214467715564,\n \"acc_norm\": 0.5702127659574469,\n \"acc_norm_stderr\": 0.03236214467715564\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.5,\n \"acc_stderr\": 0.047036043419179864,\n \"acc_norm\": 0.5,\n \"acc_norm_stderr\": 0.047036043419179864\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.5586206896551724,\n \"acc_stderr\": 0.04137931034482757,\n \"acc_norm\": 0.5586206896551724,\n \"acc_norm_stderr\": 0.04137931034482757\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.42592592592592593,\n \"acc_stderr\": 0.02546714904546955,\n \"acc_norm\": 0.42592592592592593,\n \"acc_norm_stderr\": 0.02546714904546955\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.46825396825396826,\n \"acc_stderr\": 0.04463112720677171,\n \"acc_norm\": 0.46825396825396826,\n \"acc_norm_stderr\": 0.04463112720677171\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.35,\n \"acc_stderr\": 0.047937248544110196,\n \"acc_norm\": 0.35,\n \"acc_norm_stderr\": 0.047937248544110196\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7709677419354839,\n \"acc_stderr\": 0.023904914311782648,\n \"acc_norm\": 0.7709677419354839,\n \"acc_norm_stderr\": 0.023904914311782648\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.5123152709359606,\n \"acc_stderr\": 0.035169204442208966,\n \"acc_norm\": 0.5123152709359606,\n \"acc_norm_stderr\": 0.035169204442208966\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.69,\n \"acc_stderr\": 0.04648231987117316,\n \"acc_norm\": 0.69,\n \"acc_norm_stderr\": 0.04648231987117316\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.7636363636363637,\n \"acc_stderr\": 0.03317505930009182,\n \"acc_norm\": 0.7636363636363637,\n \"acc_norm_stderr\": 0.03317505930009182\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.7828282828282829,\n \"acc_stderr\": 0.029376616484945633,\n \"acc_norm\": 0.7828282828282829,\n \"acc_norm_stderr\": 0.029376616484945633\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.9015544041450777,\n \"acc_stderr\": 0.021500249576033477,\n \"acc_norm\": 0.9015544041450777,\n \"acc_norm_stderr\": 0.021500249576033477\n },\n 
\"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.6641025641025641,\n \"acc_stderr\": 0.023946724741563976,\n \"acc_norm\": 0.6641025641025641,\n \"acc_norm_stderr\": 0.023946724741563976\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.34814814814814815,\n \"acc_stderr\": 0.029045600290616255,\n \"acc_norm\": 0.34814814814814815,\n \"acc_norm_stderr\": 0.029045600290616255\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.6680672268907563,\n \"acc_stderr\": 0.03058869701378364,\n \"acc_norm\": 0.6680672268907563,\n \"acc_norm_stderr\": 0.03058869701378364\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.31788079470198677,\n \"acc_stderr\": 0.038020397601079024,\n \"acc_norm\": 0.31788079470198677,\n \"acc_norm_stderr\": 0.038020397601079024\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.8458715596330275,\n \"acc_stderr\": 0.015480826865374303,\n \"acc_norm\": 0.8458715596330275,\n \"acc_norm_stderr\": 0.015480826865374303\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.48148148148148145,\n \"acc_stderr\": 0.034076320938540516,\n \"acc_norm\": 0.48148148148148145,\n \"acc_norm_stderr\": 0.034076320938540516\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.8333333333333334,\n \"acc_stderr\": 0.026156867523931045,\n \"acc_norm\": 0.8333333333333334,\n \"acc_norm_stderr\": 0.026156867523931045\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.810126582278481,\n \"acc_stderr\": 0.025530100460233494,\n \"acc_norm\": 0.810126582278481,\n \"acc_norm_stderr\": 0.025530100460233494\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6905829596412556,\n \"acc_stderr\": 0.03102441174057221,\n \"acc_norm\": 0.6905829596412556,\n \"acc_norm_stderr\": 0.03102441174057221\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.816793893129771,\n \"acc_stderr\": 0.03392770926494733,\n \"acc_norm\": 0.816793893129771,\n \"acc_norm_stderr\": 0.03392770926494733\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.7933884297520661,\n \"acc_stderr\": 0.03695980128098824,\n \"acc_norm\": 0.7933884297520661,\n \"acc_norm_stderr\": 0.03695980128098824\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.8240740740740741,\n \"acc_stderr\": 0.036809181416738807,\n \"acc_norm\": 0.8240740740740741,\n \"acc_norm_stderr\": 0.036809181416738807\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.7730061349693251,\n \"acc_stderr\": 0.03291099578615769,\n \"acc_norm\": 0.7730061349693251,\n \"acc_norm_stderr\": 0.03291099578615769\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.48214285714285715,\n \"acc_stderr\": 0.047427623612430116,\n \"acc_norm\": 0.48214285714285715,\n \"acc_norm_stderr\": 0.047427623612430116\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.7864077669902912,\n \"acc_stderr\": 0.040580420156460344,\n \"acc_norm\": 0.7864077669902912,\n \"acc_norm_stderr\": 0.040580420156460344\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8675213675213675,\n \"acc_stderr\": 0.022209309073165612,\n \"acc_norm\": 0.8675213675213675,\n \"acc_norm_stderr\": 0.022209309073165612\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.73,\n \"acc_stderr\": 0.044619604333847394,\n \"acc_norm\": 0.73,\n \"acc_norm_stderr\": 0.044619604333847394\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n 
\"acc\": 0.8378033205619413,\n \"acc_stderr\": 0.01318222261672088,\n \"acc_norm\": 0.8378033205619413,\n \"acc_norm_stderr\": 0.01318222261672088\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.7398843930635838,\n \"acc_stderr\": 0.023618678310069356,\n \"acc_norm\": 0.7398843930635838,\n \"acc_norm_stderr\": 0.023618678310069356\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.3877094972067039,\n \"acc_stderr\": 0.016295332328155807,\n \"acc_norm\": 0.3877094972067039,\n \"acc_norm_stderr\": 0.016295332328155807\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.7222222222222222,\n \"acc_stderr\": 0.025646863097137897,\n \"acc_norm\": 0.7222222222222222,\n \"acc_norm_stderr\": 0.025646863097137897\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7041800643086816,\n \"acc_stderr\": 0.025922371788818763,\n \"acc_norm\": 0.7041800643086816,\n \"acc_norm_stderr\": 0.025922371788818763\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.7592592592592593,\n \"acc_stderr\": 0.023788583551658533,\n \"acc_norm\": 0.7592592592592593,\n \"acc_norm_stderr\": 0.023788583551658533\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.48226950354609927,\n \"acc_stderr\": 0.02980873964223777,\n \"acc_norm\": 0.48226950354609927,\n \"acc_norm_stderr\": 0.02980873964223777\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4556714471968709,\n \"acc_stderr\": 0.012719949543032205,\n \"acc_norm\": 0.4556714471968709,\n \"acc_norm_stderr\": 0.012719949543032205\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.6580882352941176,\n \"acc_stderr\": 0.028814722422254184,\n \"acc_norm\": 0.6580882352941176,\n \"acc_norm_stderr\": 0.028814722422254184\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.6813725490196079,\n \"acc_stderr\": 0.01885008469646872,\n \"acc_norm\": 0.6813725490196079,\n \"acc_norm_stderr\": 0.01885008469646872\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6818181818181818,\n \"acc_stderr\": 0.044612721759105085,\n \"acc_norm\": 0.6818181818181818,\n \"acc_norm_stderr\": 0.044612721759105085\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.726530612244898,\n \"acc_stderr\": 0.02853556033712844,\n \"acc_norm\": 0.726530612244898,\n \"acc_norm_stderr\": 0.02853556033712844\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.835820895522388,\n \"acc_stderr\": 0.02619392354445412,\n \"acc_norm\": 0.835820895522388,\n \"acc_norm_stderr\": 0.02619392354445412\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.84,\n \"acc_stderr\": 0.03684529491774709,\n \"acc_norm\": 0.84,\n \"acc_norm_stderr\": 0.03684529491774709\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5481927710843374,\n \"acc_stderr\": 0.03874371556587953,\n \"acc_norm\": 0.5481927710843374,\n \"acc_norm_stderr\": 0.03874371556587953\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.8421052631578947,\n \"acc_stderr\": 0.027966785859160893,\n \"acc_norm\": 0.8421052631578947,\n \"acc_norm_stderr\": 0.027966785859160893\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.4773561811505508,\n \"mc1_stderr\": 0.01748554225848965,\n \"mc2\": 0.638014374039814,\n \"mc2_stderr\": 0.01550457171837664\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.8003157063930545,\n \"acc_stderr\": 0.011235328382625842\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.6974981046247157,\n \"acc_stderr\": 
0.012652544133186141\n }\n}\n```", "repo_url": "https://huggingface.co/cognitivecomputations/laserxtral", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2024_01_15T20_05_55.052145", "path": ["**/details_harness|arc:challenge|25_2024-01-15T20-05-55.052145.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2024-01-15T20-05-55.052145.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2024_01_15T20_05_55.052145", "path": ["**/details_harness|gsm8k|5_2024-01-15T20-05-55.052145.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2024-01-15T20-05-55.052145.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2024_01_15T20_05_55.052145", "path": ["**/details_harness|hellaswag|10_2024-01-15T20-05-55.052145.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2024-01-15T20-05-55.052145.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2024_01_15T20_05_55.052145", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-15T20-05-55.052145.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-15T20-05-55.052145.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-15T20-05-55.052145.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-15T20-05-55.052145.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-15T20-05-55.052145.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-15T20-05-55.052145.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-15T20-05-55.052145.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-15T20-05-55.052145.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-01-15T20-05-55.052145.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-15T20-05-55.052145.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-15T20-05-55.052145.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-15T20-05-55.052145.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-15T20-05-55.052145.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-15T20-05-55.052145.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-15T20-05-55.052145.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-15T20-05-55.052145.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-15T20-05-55.052145.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-15T20-05-55.052145.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-15T20-05-55.052145.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-15T20-05-55.052145.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-15T20-05-55.052145.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-15T20-05-55.052145.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-15T20-05-55.052145.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-15T20-05-55.052145.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-15T20-05-55.052145.parquet", 
"**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-15T20-05-55.052145.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-15T20-05-55.052145.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-15T20-05-55.052145.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-15T20-05-55.052145.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-15T20-05-55.052145.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-15T20-05-55.052145.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-15T20-05-55.052145.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-15T20-05-55.052145.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-15T20-05-55.052145.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-15T20-05-55.052145.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-15T20-05-55.052145.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-15T20-05-55.052145.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-15T20-05-55.052145.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-15T20-05-55.052145.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-15T20-05-55.052145.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-15T20-05-55.052145.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-15T20-05-55.052145.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-15T20-05-55.052145.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-15T20-05-55.052145.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-15T20-05-55.052145.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-15T20-05-55.052145.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-15T20-05-55.052145.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-15T20-05-55.052145.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-15T20-05-55.052145.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-15T20-05-55.052145.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-15T20-05-55.052145.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-15T20-05-55.052145.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-15T20-05-55.052145.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-15T20-05-55.052145.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-15T20-05-55.052145.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-15T20-05-55.052145.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-15T20-05-55.052145.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-15T20-05-55.052145.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-15T20-05-55.052145.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-15T20-05-55.052145.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-15T20-05-55.052145.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-15T20-05-55.052145.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-15T20-05-55.052145.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-15T20-05-55.052145.parquet", 
"**/details_harness|hendrycksTest-college_computer_science|5_2024-01-15T20-05-55.052145.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-01-15T20-05-55.052145.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-15T20-05-55.052145.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-15T20-05-55.052145.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-15T20-05-55.052145.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-15T20-05-55.052145.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-15T20-05-55.052145.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-15T20-05-55.052145.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-15T20-05-55.052145.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-15T20-05-55.052145.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-15T20-05-55.052145.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-15T20-05-55.052145.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-15T20-05-55.052145.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-15T20-05-55.052145.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-15T20-05-55.052145.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-15T20-05-55.052145.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-15T20-05-55.052145.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-15T20-05-55.052145.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-15T20-05-55.052145.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-15T20-05-55.052145.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-15T20-05-55.052145.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-15T20-05-55.052145.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-15T20-05-55.052145.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-15T20-05-55.052145.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-15T20-05-55.052145.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-15T20-05-55.052145.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-15T20-05-55.052145.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-15T20-05-55.052145.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-15T20-05-55.052145.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-15T20-05-55.052145.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-15T20-05-55.052145.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-15T20-05-55.052145.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-15T20-05-55.052145.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-15T20-05-55.052145.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-15T20-05-55.052145.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-15T20-05-55.052145.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-15T20-05-55.052145.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-15T20-05-55.052145.parquet", 
"**/details_harness|hendrycksTest-philosophy|5_2024-01-15T20-05-55.052145.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-15T20-05-55.052145.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-15T20-05-55.052145.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-15T20-05-55.052145.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-15T20-05-55.052145.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-15T20-05-55.052145.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-15T20-05-55.052145.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-15T20-05-55.052145.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-15T20-05-55.052145.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-15T20-05-55.052145.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-15T20-05-55.052145.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-15T20-05-55.052145.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2024_01_15T20_05_55.052145", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-15T20-05-55.052145.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-15T20-05-55.052145.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2024_01_15T20_05_55.052145", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-15T20-05-55.052145.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-15T20-05-55.052145.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2024_01_15T20_05_55.052145", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-15T20-05-55.052145.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-15T20-05-55.052145.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2024_01_15T20_05_55.052145", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-15T20-05-55.052145.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-15T20-05-55.052145.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2024_01_15T20_05_55.052145", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-15T20-05-55.052145.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-15T20-05-55.052145.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2024_01_15T20_05_55.052145", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-15T20-05-55.052145.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-15T20-05-55.052145.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2024_01_15T20_05_55.052145", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-15T20-05-55.052145.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-15T20-05-55.052145.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2024_01_15T20_05_55.052145", "path": 
["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-15T20-05-55.052145.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-15T20-05-55.052145.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2024_01_15T20_05_55.052145", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-15T20-05-55.052145.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-15T20-05-55.052145.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2024_01_15T20_05_55.052145", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-15T20-05-55.052145.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-15T20-05-55.052145.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2024_01_15T20_05_55.052145", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-15T20-05-55.052145.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-15T20-05-55.052145.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2024_01_15T20_05_55.052145", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-15T20-05-55.052145.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-15T20-05-55.052145.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2024_01_15T20_05_55.052145", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-15T20-05-55.052145.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-15T20-05-55.052145.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2024_01_15T20_05_55.052145", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-15T20-05-55.052145.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-15T20-05-55.052145.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2024_01_15T20_05_55.052145", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-15T20-05-55.052145.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-15T20-05-55.052145.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2024_01_15T20_05_55.052145", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-15T20-05-55.052145.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-15T20-05-55.052145.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2024_01_15T20_05_55.052145", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-15T20-05-55.052145.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-15T20-05-55.052145.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2024_01_15T20_05_55.052145", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-15T20-05-55.052145.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-global_facts|5_2024-01-15T20-05-55.052145.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2024_01_15T20_05_55.052145", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-15T20-05-55.052145.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-15T20-05-55.052145.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2024_01_15T20_05_55.052145", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-15T20-05-55.052145.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-15T20-05-55.052145.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2024_01_15T20_05_55.052145", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-15T20-05-55.052145.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-15T20-05-55.052145.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2024_01_15T20_05_55.052145", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-15T20-05-55.052145.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-15T20-05-55.052145.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2024_01_15T20_05_55.052145", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-15T20-05-55.052145.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-15T20-05-55.052145.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2024_01_15T20_05_55.052145", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-15T20-05-55.052145.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-15T20-05-55.052145.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2024_01_15T20_05_55.052145", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-15T20-05-55.052145.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-15T20-05-55.052145.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2024_01_15T20_05_55.052145", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-15T20-05-55.052145.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-15T20-05-55.052145.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2024_01_15T20_05_55.052145", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-15T20-05-55.052145.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-15T20-05-55.052145.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2024_01_15T20_05_55.052145", "path": 
["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-15T20-05-55.052145.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-15T20-05-55.052145.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2024_01_15T20_05_55.052145", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-15T20-05-55.052145.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-15T20-05-55.052145.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2024_01_15T20_05_55.052145", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-15T20-05-55.052145.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-15T20-05-55.052145.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2024_01_15T20_05_55.052145", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-15T20-05-55.052145.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-15T20-05-55.052145.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2024_01_15T20_05_55.052145", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-15T20-05-55.052145.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-15T20-05-55.052145.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2024_01_15T20_05_55.052145", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-15T20-05-55.052145.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-15T20-05-55.052145.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2024_01_15T20_05_55.052145", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-15T20-05-55.052145.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-15T20-05-55.052145.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2024_01_15T20_05_55.052145", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-15T20-05-55.052145.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-15T20-05-55.052145.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2024_01_15T20_05_55.052145", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-15T20-05-55.052145.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-15T20-05-55.052145.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2024_01_15T20_05_55.052145", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-15T20-05-55.052145.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-15T20-05-55.052145.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2024_01_15T20_05_55.052145", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-15T20-05-55.052145.parquet"]}, 
{"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-15T20-05-55.052145.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2024_01_15T20_05_55.052145", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-15T20-05-55.052145.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-15T20-05-55.052145.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2024_01_15T20_05_55.052145", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-15T20-05-55.052145.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-15T20-05-55.052145.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2024_01_15T20_05_55.052145", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-15T20-05-55.052145.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-15T20-05-55.052145.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2024_01_15T20_05_55.052145", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-15T20-05-55.052145.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-15T20-05-55.052145.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2024_01_15T20_05_55.052145", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-15T20-05-55.052145.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-15T20-05-55.052145.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2024_01_15T20_05_55.052145", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-15T20-05-55.052145.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-15T20-05-55.052145.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2024_01_15T20_05_55.052145", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-15T20-05-55.052145.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-15T20-05-55.052145.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2024_01_15T20_05_55.052145", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-15T20-05-55.052145.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-15T20-05-55.052145.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2024_01_15T20_05_55.052145", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-15T20-05-55.052145.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-15T20-05-55.052145.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2024_01_15T20_05_55.052145", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-15T20-05-55.052145.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-15T20-05-55.052145.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2024_01_15T20_05_55.052145", "path": 
["**/details_harness|hendrycksTest-professional_law|5_2024-01-15T20-05-55.052145.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-01-15T20-05-55.052145.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2024_01_15T20_05_55.052145", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-15T20-05-55.052145.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-15T20-05-55.052145.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2024_01_15T20_05_55.052145", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-15T20-05-55.052145.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-15T20-05-55.052145.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2024_01_15T20_05_55.052145", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-15T20-05-55.052145.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-15T20-05-55.052145.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2024_01_15T20_05_55.052145", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-15T20-05-55.052145.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-15T20-05-55.052145.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2024_01_15T20_05_55.052145", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-15T20-05-55.052145.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-15T20-05-55.052145.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2024_01_15T20_05_55.052145", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-15T20-05-55.052145.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-15T20-05-55.052145.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2024_01_15T20_05_55.052145", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-15T20-05-55.052145.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-15T20-05-55.052145.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2024_01_15T20_05_55.052145", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-15T20-05-55.052145.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-15T20-05-55.052145.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2024_01_15T20_05_55.052145", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-15T20-05-55.052145.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-15T20-05-55.052145.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2024_01_15T20_05_55.052145", "path": ["**/details_harness|winogrande|5_2024-01-15T20-05-55.052145.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2024-01-15T20-05-55.052145.parquet"]}]}, {"config_name": "results", "data_files": [{"split": 
"2024_01_15T20_05_55.052145", "path": ["results_2024-01-15T20-05-55.052145.parquet"]}, {"split": "latest", "path": ["results_2024-01-15T20-05-55.052145.parquet"]}]}]} | 2024-01-15T20:08:34+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of cognitivecomputations/laserxtral
Dataset automatically created during the evaluation run of model cognitivecomputations/laserxtral on the Open LLM Leaderboard.
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can, for instance, do the following:
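A minimal sketch (the repository id is inferred from the usual Open LLM Leaderboard naming scheme, `details_<org>__<model>`, and may need adjusting):

```python
from datasets import load_dataset

# load the per-sample details of one evaluated task;
# the repo id below is an assumption based on the standard naming scheme
data = load_dataset(
    "open-llm-leaderboard/details_cognitivecomputations__laserxtral",
    "harness_winogrande_5",
    split="train",
)
```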
## Latest results
These are the latest results from run 2024-01-15T20:05:55.052145 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
## Dataset Details
### Dataset Description
- Curated by:
- Funded by [optional]:
- Shared by [optional]:
- Language(s) (NLP):
- License:
### Dataset Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Out-of-Scope Use
## Dataset Structure
## Dataset Creation
### Curation Rationale
### Source Data
#### Data Collection and Processing
#### Who are the source data producers?
### Annotations [optional]
#### Annotation process
#### Who are the annotators?
#### Personal and Sensitive Information
## Bias, Risks, and Limitations
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Dataset Card Authors [optional]
## Dataset Card Contact
147677d830b3084b942ab13c0f9e8fbfc36cd751 |
# Dataset of minase_iori/水瀬伊織/미나세이오리 (THE iDOLM@STER)
This is the dataset of minase_iori/水瀬伊織/미나세이오리 (THE iDOLM@STER), containing 500 images and their tags.
The core tags of this character are `long_hair, brown_hair, hairband, brown_eyes, red_eyes`, which are pruned in this dataset.
Images are crawled from many sites (e.g., danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:--------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 496.69 MiB | [Download](https://huggingface.co/datasets/CyberHarem/minase_iori_theidolmster/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 338.53 MiB | [Download](https://huggingface.co/datasets/CyberHarem/minase_iori_theidolmster/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1126 | 664.29 MiB | [Download](https://huggingface.co/datasets/CyberHarem/minase_iori_theidolmster/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 457.91 MiB | [Download](https://huggingface.co/datasets/CyberHarem/minase_iori_theidolmster/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1126 | 860.06 MiB | [Download](https://huggingface.co/datasets/CyberHarem/minase_iori_theidolmster/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/minase_iori_theidolmster',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
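The `IMG+TXT` packages from the table above can be read without waifuc; a short sketch (the flat layout of image/`.txt` pairs is an assumption about the archive contents):

```python
import os
import zipfile
from huggingface_hub import hf_hub_download

# download and extract the 800px IMG+TXT package
zip_file = hf_hub_download(
    repo_id='CyberHarem/minase_iori_theidolmster',
    repo_type='dataset',
    filename='dataset-800.zip',
)
dataset_dir = 'dataset_800'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# pair each image with its same-named .txt tag file
for name in sorted(os.listdir(dataset_dir)):
    stem, ext = os.path.splitext(name)
    if ext.lower() in ('.png', '.jpg', '.jpeg', '.webp'):
        txt_path = os.path.join(dataset_dir, stem + '.txt')
        if os.path.exists(txt_path):
            with open(txt_path, encoding='utf-8') as f:
                print(name, '->', f.read().strip())
```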
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 9 |  |  |  |  |  | 1girl, dress, solo, blush, black_thighhighs, bow, zettai_ryouiki |
| 1 | 7 |  |  |  |  |  | 1girl, dress, rabbit, solo, stuffed_animal, stuffed_bunny, blush, open_mouth, sitting, smile |
| 2 | 7 |  |  |  |  |  | 1girl, solo, stuffed_animal, stuffed_bunny, dress, smile, one_eye_closed |
| 3 | 5 |  |  |  |  |  | 1girl, black_thighhighs, skirt, solo, zettai_ryouiki, plaid, smile, bespectacled, necktie |
| 4 | 6 |  |  |  |  |  | 1girl, bracelet, solo, dress, bare_shoulders, blush, looking_at_viewer, smile, open_mouth |
| 5 | 16 |  |  |  |  |  | 1girl, necklace, solo, beret, dress, thighhighs, belt, smile, earrings, one_eye_closed, wrist_cuffs, bare_shoulders, open_mouth |
| 6 | 6 |  |  |  |  |  | 1girl, solo, looking_at_viewer, sailor_bikini, white_bikini, blush, navel, sitting, bow, breasts, open_mouth, simple_background, smile, white_background |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | dress | solo | blush | black_thighhighs | bow | zettai_ryouiki | rabbit | stuffed_animal | stuffed_bunny | open_mouth | sitting | smile | one_eye_closed | skirt | plaid | bespectacled | necktie | bracelet | bare_shoulders | looking_at_viewer | necklace | beret | thighhighs | belt | earrings | wrist_cuffs | sailor_bikini | white_bikini | navel | breasts | simple_background | white_background |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------|:-------|:--------|:-------------------|:------|:-----------------|:---------|:-----------------|:----------------|:-------------|:----------|:--------|:-----------------|:--------|:--------|:---------------|:----------|:-----------|:-----------------|:--------------------|:-----------|:--------|:-------------|:-------|:-----------|:--------------|:----------------|:---------------|:--------|:----------|:--------------------|:-------------------|
| 0 | 9 |  |  |  |  |  | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 7 |  |  |  |  |  | X | X | X | X | | | | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | |
| 2 | 7 |  |  |  |  |  | X | X | X | | | | | | X | X | | | X | X | | | | | | | | | | | | | | | | | | | |
| 3 | 5 |  |  |  |  |  | X | | X | | X | | X | | | | | | X | | X | X | X | X | | | | | | | | | | | | | | | |
| 4 | 6 |  |  |  |  |  | X | X | X | X | | | | | | | X | | X | | | | | | X | X | X | | | | | | | | | | | | |
| 5 | 16 |  |  |  |  |  | X | X | X | | | | | | | | X | | X | X | | | | | | X | | X | X | X | X | X | X | | | | | | |
| 6 | 6 |  |  |  |  |  | X | | X | X | | X | | | | | X | X | X | | | | | | | | X | | | | | | | X | X | X | X | X | X |
| CyberHarem/minase_iori_theidolmster | [
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] | 2024-01-15T20:31:13+00:00 | {"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]} | 2024-01-15T22:01:18+00:00 | [] | [] | TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
d88a448799e1b45fe33ab376dc4ec157aa99b968 |
# Dataset of hagiwara_yukiho/萩原雪歩 (THE iDOLM@STER)
This is the dataset of hagiwara_yukiho/萩原雪歩 (THE iDOLM@STER), containing 500 images and their tags.
The core tags of this character are `brown_hair, short_hair, brown_eyes, bob_cut, breasts`, which are pruned in this dataset.
Images are crawled from many sites (e.g., danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:------------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 513.74 MiB | [Download](https://huggingface.co/datasets/CyberHarem/hagiwara_yukiho_theidolmster/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 329.63 MiB | [Download](https://huggingface.co/datasets/CyberHarem/hagiwara_yukiho_theidolmster/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1092 | 660.74 MiB | [Download](https://huggingface.co/datasets/CyberHarem/hagiwara_yukiho_theidolmster/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 468.64 MiB | [Download](https://huggingface.co/datasets/CyberHarem/hagiwara_yukiho_theidolmster/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1092 | 889.15 MiB | [Download](https://huggingface.co/datasets/CyberHarem/hagiwara_yukiho_theidolmster/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/hagiwara_yukiho_theidolmster',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
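Once extracted, the per-item tags can be aggregated for a quick overview; a sketch (assuming `item.meta['tags']` iterates over tag names, as the print statement above suggests):

```python
from collections import Counter
from waifuc.source import LocalSource

# rough tag-frequency count over the extracted raw dataset
counter = Counter()
for item in LocalSource('dataset_dir'):
    for tag in item.meta['tags']:
        counter[tag] += 1
print(counter.most_common(20))
```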
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 9 |  |  |  |  |  | 1girl, smile, solo, dress, open_mouth, gloves, hair_ornament, snowflakes, blush, looking_at_viewer |
| 1 | 13 |  |  |  |  |  | 1girl, solo, santa_costume, smile, christmas, blush, open_mouth, gloves, mittens, santa_hat |
| 2 | 10 |  |  |  |  |  | 1girl, coat, solo, blush, smile, snowing, looking_at_viewer, open_mouth, winter_clothes, gloves, scarf |
| 3 | 16 |  |  |  |  |  | 1girl, solo, wings, smile, open_mouth, star_(symbol), collar, microphone, blush, hair_bow, parody, jewelry |
| 4 | 5 |  |  |  |  |  | hair_flower, 1girl, kimono, solo, blush, looking_at_viewer, open_mouth, :d, new_year, obi |
| 5 | 6 |  |  |  |  |  | 1girl, day, solo, cloud, sky, sundress, smile, sun_hat, blush, straw_hat |
| 6 | 11 |  |  |  |  |  | 1girl, solo, medium_breasts, nipples, nude, navel, blush, open_mouth, pussy, looking_at_viewer, simple_background, small_breasts |
| 7 | 7 |  |  |  |  |  | 1girl, navel, solo, white_bikini, blush, looking_at_viewer, medium_breasts, open_mouth, cleavage, cowboy_shot, white_background, sailor_bikini, simple_background, smile |
| 8 | 15 |  |  |  |  |  | 1girl, hetero, penis, solo_focus, 1boy, blush, censored, nipples, cum, nude, large_breasts, open_mouth, oral |
| 9 | 10 |  |  |  |  |  | 1girl, looking_at_viewer, solo, smile, bangs, sleeveless_dress, collarbone, closed_mouth, blue_dress, blush, hair_between_eyes, white_background, bare_shoulders, striped, white_dress |
| 10 | 5 |  |  |  |  |  | 1girl, solo, blush, boots, fishnet_thighhighs, bare_shoulders, bracelet, pink_footwear, sitting, smile, belt, elbow_gloves, fingerless_gloves |
| 11 | 9 |  |  |  |  |  | 1girl, solo, school_uniform, smile, blazer, grey_background, plaid_skirt, socks, looking_at_viewer, simple_background, striped_necktie, upper_body |
| 12 | 6 |  |  |  |  |  | 1girl, maid_headdress, puffy_short_sleeves, solo, looking_at_viewer, pink_bowtie, smile, wrist_cuffs, alternate_costume, blush, frilled_apron, simple_background, waist_apron, white_background, white_shirt, bangs, cowboy_shot, holding_tray, open_mouth, pink_skirt |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | smile | solo | dress | open_mouth | gloves | hair_ornament | snowflakes | blush | looking_at_viewer | santa_costume | christmas | mittens | santa_hat | coat | snowing | winter_clothes | scarf | wings | star_(symbol) | collar | microphone | hair_bow | parody | jewelry | hair_flower | kimono | :d | new_year | obi | day | cloud | sky | sundress | sun_hat | straw_hat | medium_breasts | nipples | nude | navel | pussy | simple_background | small_breasts | white_bikini | cleavage | cowboy_shot | white_background | sailor_bikini | hetero | penis | solo_focus | 1boy | censored | cum | large_breasts | oral | bangs | sleeveless_dress | collarbone | closed_mouth | blue_dress | hair_between_eyes | bare_shoulders | striped | white_dress | boots | fishnet_thighhighs | bracelet | pink_footwear | sitting | belt | elbow_gloves | fingerless_gloves | school_uniform | blazer | grey_background | plaid_skirt | socks | striped_necktie | upper_body | maid_headdress | puffy_short_sleeves | pink_bowtie | wrist_cuffs | alternate_costume | frilled_apron | waist_apron | white_shirt | holding_tray | pink_skirt |
|----:|----------:|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:--------|:--------|:-------|:--------|:-------------|:---------|:----------------|:-------------|:--------|:--------------------|:----------------|:------------|:----------|:------------|:-------|:----------|:-----------------|:--------|:--------|:----------------|:---------|:-------------|:-----------|:---------|:----------|:--------------|:---------|:-----|:-----------|:------|:------|:--------|:------|:-----------|:----------|:------------|:-----------------|:----------|:-------|:--------|:--------|:--------------------|:----------------|:---------------|:-----------|:--------------|:-------------------|:----------------|:---------|:--------|:-------------|:-------|:-----------|:------|:----------------|:-------|:--------|:-------------------|:-------------|:---------------|:-------------|:--------------------|:-----------------|:----------|:--------------|:--------|:---------------------|:-----------|:----------------|:----------|:-------|:---------------|:--------------------|:-----------------|:---------|:------------------|:--------------|:--------|:------------------|:-------------|:-----------------|:----------------------|:--------------|:--------------|:--------------------|:----------------|:--------------|:--------------|:---------------|:-------------|
| 0 | 9 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 13 |  |  |  |  |  | X | X | X | | X | X | | | X | | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 10 |  |  |  |  |  | X | X | X | | X | X | | | X | X | | | | | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 16 |  |  |  |  |  | X | X | X | | X | | | | X | | | | | | | | | | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 5 |  |  |  |  |  | X | | X | | X | | | | X | X | | | | | | | | | | | | | | | | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 5 | 6 |  |  |  |  |  | X | X | X | | | | | | X | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 6 | 11 |  |  |  |  |  | X | | X | | X | | | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 7 | 7 |  |  |  |  |  | X | X | X | | X | | | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | X | | X | | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 8 | 15 |  |  |  |  |  | X | | | | X | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | | | | | | | | | | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 9 | 10 |  |  |  |  |  | X | X | X | | | | | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | |
| 10 | 5 |  |  |  |  |  | X | X | X | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | |
| 11 | 9 |  |  |  |  |  | X | X | X | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | | | | | | | | | | |
| 12 | 6 |  |  |  |  |  | X | X | X | | X | | | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | X | X | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X |
| CyberHarem/hagiwara_yukiho_theidolmster | [
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] | 2024-01-15T20:31:20+00:00 | {"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]} | 2024-01-15T22:27:51+00:00 | [] | [] | TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
9d3253842b0882982a85d3cb5210bc2847983a68 |
# Dataset of shijou_takane/四条貴音/신죠타카네 (THE iDOLM@STER)
This is the dataset of shijou_takane/四条貴音/신죠타카네 (THE iDOLM@STER), containing 500 images and their tags.
The core tags of this character are `long_hair, grey_hair, hairband, breasts, purple_eyes`, which are pruned in this dataset.
Images are crawled from many sites (e.g., danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:------------|:----------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 589.24 MiB | [Download](https://huggingface.co/datasets/CyberHarem/shijou_takane_theidolmster/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 371.60 MiB | [Download](https://huggingface.co/datasets/CyberHarem/shijou_takane_theidolmster/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1177 | 753.00 MiB | [Download](https://huggingface.co/datasets/CyberHarem/shijou_takane_theidolmster/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 535.36 MiB | [Download](https://huggingface.co/datasets/CyberHarem/shijou_takane_theidolmster/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1177 | 1002.96 MiB | [Download](https://huggingface.co/datasets/CyberHarem/shijou_takane_theidolmster/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/shijou_takane_theidolmster',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
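The tag clusters listed below can then be used to filter the loaded items; a sketch for the kimono outfit (cluster 6), again assuming `item.meta['tags']` supports membership tests on tag names:

```python
from waifuc.source import LocalSource

# keep only items whose tags mention 'kimono'
kimono_items = [
    item for item in LocalSource('dataset_dir')
    if 'kimono' in item.meta['tags']
]
print(f'{len(kimono_items)} kimono images found')
```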
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 5 |  |  |  |  |  | 1girl, cleavage, dress, medium_breasts, smile, solo, gloves, large_breasts |
| 1 | 12 |  |  |  |  |  | 1girl, smile, solo, open_mouth, blush, looking_at_viewer |
| 2 | 6 |  |  |  |  |  | 1girl, solo, large_breasts, skirt, smile, purple_hairband |
| 3 | 5 |  |  |  |  |  | 1girl, red_eyes, solo, large_breasts, white_hair |
| 4 | 7 |  |  |  |  |  | 1girl, dress, fingerless_gloves, nail_polish, solo, bat_(animal), spider_web_print, hair_ornament, open_mouth, smile, thighhighs, garter_straps, red_eyes |
| 5 | 8 |  |  |  |  |  | 1girl, collared_shirt, looking_at_viewer, neck_ribbon, solo, white_shirt, long_sleeves, simple_background, white_background, black_ribbon, closed_mouth, dress_shirt, upper_body, very_long_hair, blunt_bangs, blush, purple_hairband, red_hairband, suspender_skirt, :d, finger_to_mouth, hair_intakes, high-waist_skirt, index_finger_raised, large_breasts, open_mouth, purple_skirt, red_eyes, shawl, wavy_hair, wing_collar |
| 6 | 11 |  |  |  |  |  | 1girl, kimono, smile, solo, ponytail, blush, obi, hair_flower, looking_at_viewer |
| 7 | 11 |  |  |  |  |  | 1girl, solo, chopsticks, ramen, bowl, eating, smile, red_eyes |
| 8 | 6 |  |  |  |  |  | 1girl, bondage, gagged, solo, arms_behind_back, rope, vibrator, pantyhose, ball_gag, blush, gloves, high_heels, lying, shibari, skirt, tape, tears |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | cleavage | dress | medium_breasts | smile | solo | gloves | large_breasts | open_mouth | blush | looking_at_viewer | skirt | purple_hairband | red_eyes | white_hair | fingerless_gloves | nail_polish | bat_(animal) | spider_web_print | hair_ornament | thighhighs | garter_straps | collared_shirt | neck_ribbon | white_shirt | long_sleeves | simple_background | white_background | black_ribbon | closed_mouth | dress_shirt | upper_body | very_long_hair | blunt_bangs | red_hairband | suspender_skirt | :d | finger_to_mouth | hair_intakes | high-waist_skirt | index_finger_raised | purple_skirt | shawl | wavy_hair | wing_collar | kimono | ponytail | obi | hair_flower | chopsticks | ramen | bowl | eating | bondage | gagged | arms_behind_back | rope | vibrator | pantyhose | ball_gag | high_heels | lying | shibari | tape | tears |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-----------|:--------|:-----------------|:--------|:-------|:---------|:----------------|:-------------|:--------|:--------------------|:--------|:------------------|:-----------|:-------------|:--------------------|:--------------|:---------------|:-------------------|:----------------|:-------------|:----------------|:-----------------|:--------------|:--------------|:---------------|:--------------------|:-------------------|:---------------|:---------------|:--------------|:-------------|:-----------------|:--------------|:---------------|:------------------|:-----|:------------------|:---------------|:-------------------|:----------------------|:---------------|:--------|:------------|:--------------|:---------|:-----------|:------|:--------------|:-------------|:--------|:-------|:---------|:----------|:---------|:-------------------|:-------|:-----------|:------------|:-----------|:-------------|:--------|:----------|:-------|:--------|
| 0 | 5 |  |  |  |  |  | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 12 |  |  |  |  |  | X | | | | X | X | | | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 6 |  |  |  |  |  | X | | | | X | X | | X | | | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 5 |  |  |  |  |  | X | | | | | X | | X | | | | | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 7 |  |  |  |  |  | X | | X | | X | X | | | X | | | | | X | | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 5 | 8 |  |  |  |  |  | X | | | | | X | | X | X | X | X | | X | X | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | |
| 6 | 11 |  |  |  |  |  | X | | | | X | X | | | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | | | | | | | | | | | | | | | | |
| 7 | 11 |  |  |  |  |  | X | | | | X | X | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | | | | | | | | | | | | |
| 8 | 6 |  |  |  |  |  | X | | | | | X | X | | | X | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X |
| CyberHarem/shijou_takane_theidolmster | [
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] | 2024-01-15T20:31:29+00:00 | {"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]} | 2024-01-15T22:17:16+00:00 | [] | [] | TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
e6ec6c08964048db034fe276a7d38016662e17af |
# Dataset Card for Evaluation run of Technoculture/Mediquad-orca-20B
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [Technoculture/Mediquad-orca-20B](https://huggingface.co/Technoculture/Mediquad-orca-20B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can, for instance, do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Technoculture__Mediquad-orca-20B",
"harness_winogrande_5",
split="train")
```
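The aggregated metrics live in the "results" configuration and can be pulled the same way; a short sketch (the "latest" split alias is taken from this repository's configuration listing):

```python
from datasets import load_dataset

# "latest" is an alias split pointing at the most recent results file
results = load_dataset(
    "open-llm-leaderboard/details_Technoculture__Mediquad-orca-20B",
    "results",
    split="latest",
)
print(results[0])
```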
## Latest results
These are the [latest results from run 2024-01-15T20:52:04.068553](https://huggingface.co/datasets/open-llm-leaderboard/details_Technoculture__Mediquad-orca-20B/blob/main/results_2024-01-15T20-52-04.068553.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.24277591604603688,
"acc_stderr": 0.030362031381414405,
"acc_norm": 0.2439454714476596,
"acc_norm_stderr": 0.031170707455863384,
"mc1": 0.23623011015911874,
"mc1_stderr": 0.014869755015871093,
"mc2": 0.484210555153098,
"mc2_stderr": 0.016831351518748705
},
"harness|arc:challenge|25": {
"acc": 0.2295221843003413,
"acc_stderr": 0.012288926760890776,
"acc_norm": 0.2935153583617747,
"acc_norm_stderr": 0.013307250444941117
},
"harness|hellaswag|10": {
"acc": 0.25473013343955386,
"acc_stderr": 0.0043481894593365355,
"acc_norm": 0.25721967735510853,
"acc_norm_stderr": 0.004362081806560237
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.21,
"acc_stderr": 0.04093601807403326,
"acc_norm": 0.21,
"acc_norm_stderr": 0.04093601807403326
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.24444444444444444,
"acc_stderr": 0.03712537833614866,
"acc_norm": 0.24444444444444444,
"acc_norm_stderr": 0.03712537833614866
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.18421052631578946,
"acc_stderr": 0.0315469804508223,
"acc_norm": 0.18421052631578946,
"acc_norm_stderr": 0.0315469804508223
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.26,
"acc_stderr": 0.04408440022768079,
"acc_norm": 0.26,
"acc_norm_stderr": 0.04408440022768079
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.2679245283018868,
"acc_stderr": 0.027257260322494845,
"acc_norm": 0.2679245283018868,
"acc_norm_stderr": 0.027257260322494845
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.2222222222222222,
"acc_stderr": 0.03476590104304134,
"acc_norm": 0.2222222222222222,
"acc_norm_stderr": 0.03476590104304134
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.2,
"acc_stderr": 0.040201512610368445,
"acc_norm": 0.2,
"acc_norm_stderr": 0.040201512610368445
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.17,
"acc_stderr": 0.0377525168068637,
"acc_norm": 0.17,
"acc_norm_stderr": 0.0377525168068637
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.23,
"acc_stderr": 0.04229525846816506,
"acc_norm": 0.23,
"acc_norm_stderr": 0.04229525846816506
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.21965317919075145,
"acc_stderr": 0.031568093627031744,
"acc_norm": 0.21965317919075145,
"acc_norm_stderr": 0.031568093627031744
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.19607843137254902,
"acc_stderr": 0.03950581861179961,
"acc_norm": 0.19607843137254902,
"acc_norm_stderr": 0.03950581861179961
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.24,
"acc_stderr": 0.042923469599092816,
"acc_norm": 0.24,
"acc_norm_stderr": 0.042923469599092816
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.32340425531914896,
"acc_stderr": 0.030579442773610334,
"acc_norm": 0.32340425531914896,
"acc_norm_stderr": 0.030579442773610334
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.2543859649122807,
"acc_stderr": 0.040969851398436716,
"acc_norm": 0.2543859649122807,
"acc_norm_stderr": 0.040969851398436716
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.2827586206896552,
"acc_stderr": 0.03752833958003337,
"acc_norm": 0.2827586206896552,
"acc_norm_stderr": 0.03752833958003337
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.2566137566137566,
"acc_stderr": 0.022494510767503154,
"acc_norm": 0.2566137566137566,
"acc_norm_stderr": 0.022494510767503154
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.18253968253968253,
"acc_stderr": 0.03455071019102148,
"acc_norm": 0.18253968253968253,
"acc_norm_stderr": 0.03455071019102148
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.31,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.31,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.27419354838709675,
"acc_stderr": 0.025378139970885196,
"acc_norm": 0.27419354838709675,
"acc_norm_stderr": 0.025378139970885196
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.24630541871921183,
"acc_stderr": 0.030315099285617732,
"acc_norm": 0.24630541871921183,
"acc_norm_stderr": 0.030315099285617732
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.23,
"acc_stderr": 0.04229525846816505,
"acc_norm": 0.23,
"acc_norm_stderr": 0.04229525846816505
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.24242424242424243,
"acc_stderr": 0.03346409881055953,
"acc_norm": 0.24242424242424243,
"acc_norm_stderr": 0.03346409881055953
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.21717171717171718,
"acc_stderr": 0.029376616484945637,
"acc_norm": 0.21717171717171718,
"acc_norm_stderr": 0.029376616484945637
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.19689119170984457,
"acc_stderr": 0.02869787397186067,
"acc_norm": 0.19689119170984457,
"acc_norm_stderr": 0.02869787397186067
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.2230769230769231,
"acc_stderr": 0.021107730127243998,
"acc_norm": 0.2230769230769231,
"acc_norm_stderr": 0.021107730127243998
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.24444444444444444,
"acc_stderr": 0.02620276653465215,
"acc_norm": 0.24444444444444444,
"acc_norm_stderr": 0.02620276653465215
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.23109243697478993,
"acc_stderr": 0.027381406927868966,
"acc_norm": 0.23109243697478993,
"acc_norm_stderr": 0.027381406927868966
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.1986754966887417,
"acc_stderr": 0.03257847384436775,
"acc_norm": 0.1986754966887417,
"acc_norm_stderr": 0.03257847384436775
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.23669724770642203,
"acc_stderr": 0.01822407811729908,
"acc_norm": 0.23669724770642203,
"acc_norm_stderr": 0.01822407811729908
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.16203703703703703,
"acc_stderr": 0.02513045365226846,
"acc_norm": 0.16203703703703703,
"acc_norm_stderr": 0.02513045365226846
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.23529411764705882,
"acc_stderr": 0.029771775228145628,
"acc_norm": 0.23529411764705882,
"acc_norm_stderr": 0.029771775228145628
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.2616033755274262,
"acc_stderr": 0.028609516716994934,
"acc_norm": 0.2616033755274262,
"acc_norm_stderr": 0.028609516716994934
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.336322869955157,
"acc_stderr": 0.031708824268455,
"acc_norm": 0.336322869955157,
"acc_norm_stderr": 0.031708824268455
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.21374045801526717,
"acc_stderr": 0.0359546161177469,
"acc_norm": 0.21374045801526717,
"acc_norm_stderr": 0.0359546161177469
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.24793388429752067,
"acc_stderr": 0.03941897526516303,
"acc_norm": 0.24793388429752067,
"acc_norm_stderr": 0.03941897526516303
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.2962962962962963,
"acc_stderr": 0.04414343666854933,
"acc_norm": 0.2962962962962963,
"acc_norm_stderr": 0.04414343666854933
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.24539877300613497,
"acc_stderr": 0.03380939813943354,
"acc_norm": 0.24539877300613497,
"acc_norm_stderr": 0.03380939813943354
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.2857142857142857,
"acc_stderr": 0.042878587513404544,
"acc_norm": 0.2857142857142857,
"acc_norm_stderr": 0.042878587513404544
},
"harness|hendrycksTest-management|5": {
"acc": 0.2524271844660194,
"acc_stderr": 0.04301250399690875,
"acc_norm": 0.2524271844660194,
"acc_norm_stderr": 0.04301250399690875
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.2564102564102564,
"acc_stderr": 0.028605953702004253,
"acc_norm": 0.2564102564102564,
"acc_norm_stderr": 0.028605953702004253
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.3,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.3,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.2515964240102171,
"acc_stderr": 0.015517322365529624,
"acc_norm": 0.2515964240102171,
"acc_norm_stderr": 0.015517322365529624
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.24277456647398843,
"acc_stderr": 0.023083658586984204,
"acc_norm": 0.24277456647398843,
"acc_norm_stderr": 0.023083658586984204
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.23798882681564246,
"acc_stderr": 0.014242630070574882,
"acc_norm": 0.23798882681564246,
"acc_norm_stderr": 0.014242630070574882
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.22875816993464052,
"acc_stderr": 0.024051029739912258,
"acc_norm": 0.22875816993464052,
"acc_norm_stderr": 0.024051029739912258
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.2733118971061093,
"acc_stderr": 0.02531176597542612,
"acc_norm": 0.2733118971061093,
"acc_norm_stderr": 0.02531176597542612
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.2654320987654321,
"acc_stderr": 0.024569223600460845,
"acc_norm": 0.2654320987654321,
"acc_norm_stderr": 0.024569223600460845
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.2553191489361702,
"acc_stderr": 0.02601199293090201,
"acc_norm": 0.2553191489361702,
"acc_norm_stderr": 0.02601199293090201
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.2392438070404172,
"acc_stderr": 0.010896123652676651,
"acc_norm": 0.2392438070404172,
"acc_norm_stderr": 0.010896123652676651
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.20220588235294118,
"acc_stderr": 0.02439819298665492,
"acc_norm": 0.20220588235294118,
"acc_norm_stderr": 0.02439819298665492
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.2565359477124183,
"acc_stderr": 0.01766784161237899,
"acc_norm": 0.2565359477124183,
"acc_norm_stderr": 0.01766784161237899
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.3090909090909091,
"acc_stderr": 0.044262946482000985,
"acc_norm": 0.3090909090909091,
"acc_norm_stderr": 0.044262946482000985
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.22448979591836735,
"acc_stderr": 0.02671143055553842,
"acc_norm": 0.22448979591836735,
"acc_norm_stderr": 0.02671143055553842
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.23880597014925373,
"acc_stderr": 0.030147775935409224,
"acc_norm": 0.23880597014925373,
"acc_norm_stderr": 0.030147775935409224
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.21,
"acc_stderr": 0.040936018074033256,
"acc_norm": 0.21,
"acc_norm_stderr": 0.040936018074033256
},
"harness|hendrycksTest-virology|5": {
"acc": 0.3192771084337349,
"acc_stderr": 0.0362933532994786,
"acc_norm": 0.3192771084337349,
"acc_norm_stderr": 0.0362933532994786
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.19883040935672514,
"acc_stderr": 0.03061111655743253,
"acc_norm": 0.19883040935672514,
"acc_norm_stderr": 0.03061111655743253
},
"harness|truthfulqa:mc|0": {
"mc1": 0.23623011015911874,
"mc1_stderr": 0.014869755015871093,
"mc2": 0.484210555153098,
"mc2_stderr": 0.016831351518748705
},
"harness|winogrande|5": {
"acc": 0.48303078137332284,
"acc_stderr": 0.014044390401612976
},
"harness|gsm8k|5": {
"acc": 0.0,
"acc_stderr": 0.0
}
}
```
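For a quick sanity check, the per-task numbers above can be averaged; a sketch (`results` is assumed to be the dict shown above, e.g. loaded via `json.load`):

```python
# macro-average accuracy over the MMLU (hendrycksTest) subtasks;
# `results` is assumed to hold the dict printed above
mmlu_accs = [
    v["acc"] for k, v in results.items()
    if k.startswith("harness|hendrycksTest")
]
print(sum(mmlu_accs) / len(mmlu_accs))
```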
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | open-llm-leaderboard/details_Technoculture__Mediquad-orca-20B | [
"region:us"
] | 2024-01-15T20:54:22+00:00 | {"pretty_name": "Evaluation run of Technoculture/Mediquad-orca-20B", "dataset_summary": "Dataset automatically created during the evaluation run of model [Technoculture/Mediquad-orca-20B](https://huggingface.co/Technoculture/Mediquad-orca-20B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Technoculture__Mediquad-orca-20B\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2024-01-15T20:52:04.068553](https://huggingface.co/datasets/open-llm-leaderboard/details_Technoculture__Mediquad-orca-20B/blob/main/results_2024-01-15T20-52-04.068553.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.24277591604603688,\n \"acc_stderr\": 0.030362031381414405,\n \"acc_norm\": 0.2439454714476596,\n \"acc_norm_stderr\": 0.031170707455863384,\n \"mc1\": 0.23623011015911874,\n \"mc1_stderr\": 0.014869755015871093,\n \"mc2\": 0.484210555153098,\n \"mc2_stderr\": 0.016831351518748705\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.2295221843003413,\n \"acc_stderr\": 0.012288926760890776,\n \"acc_norm\": 0.2935153583617747,\n \"acc_norm_stderr\": 0.013307250444941117\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.25473013343955386,\n \"acc_stderr\": 0.0043481894593365355,\n \"acc_norm\": 0.25721967735510853,\n \"acc_norm_stderr\": 0.004362081806560237\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.21,\n \"acc_stderr\": 0.04093601807403326,\n \"acc_norm\": 0.21,\n \"acc_norm_stderr\": 0.04093601807403326\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.24444444444444444,\n \"acc_stderr\": 0.03712537833614866,\n \"acc_norm\": 0.24444444444444444,\n \"acc_norm_stderr\": 0.03712537833614866\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.18421052631578946,\n \"acc_stderr\": 0.0315469804508223,\n \"acc_norm\": 0.18421052631578946,\n \"acc_norm_stderr\": 0.0315469804508223\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.26,\n \"acc_stderr\": 0.04408440022768079,\n \"acc_norm\": 0.26,\n \"acc_norm_stderr\": 0.04408440022768079\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.2679245283018868,\n \"acc_stderr\": 0.027257260322494845,\n \"acc_norm\": 0.2679245283018868,\n \"acc_norm_stderr\": 0.027257260322494845\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.2222222222222222,\n \"acc_stderr\": 0.03476590104304134,\n \"acc_norm\": 0.2222222222222222,\n \"acc_norm_stderr\": 0.03476590104304134\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.2,\n 
\"acc_stderr\": 0.040201512610368445,\n \"acc_norm\": 0.2,\n \"acc_norm_stderr\": 0.040201512610368445\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.17,\n \"acc_stderr\": 0.0377525168068637,\n \"acc_norm\": 0.17,\n \"acc_norm_stderr\": 0.0377525168068637\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.23,\n \"acc_stderr\": 0.04229525846816506,\n \"acc_norm\": 0.23,\n \"acc_norm_stderr\": 0.04229525846816506\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.21965317919075145,\n \"acc_stderr\": 0.031568093627031744,\n \"acc_norm\": 0.21965317919075145,\n \"acc_norm_stderr\": 0.031568093627031744\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.19607843137254902,\n \"acc_stderr\": 0.03950581861179961,\n \"acc_norm\": 0.19607843137254902,\n \"acc_norm_stderr\": 0.03950581861179961\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.24,\n \"acc_stderr\": 0.042923469599092816,\n \"acc_norm\": 0.24,\n \"acc_norm_stderr\": 0.042923469599092816\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.32340425531914896,\n \"acc_stderr\": 0.030579442773610334,\n \"acc_norm\": 0.32340425531914896,\n \"acc_norm_stderr\": 0.030579442773610334\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.2543859649122807,\n \"acc_stderr\": 0.040969851398436716,\n \"acc_norm\": 0.2543859649122807,\n \"acc_norm_stderr\": 0.040969851398436716\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.2827586206896552,\n \"acc_stderr\": 0.03752833958003337,\n \"acc_norm\": 0.2827586206896552,\n \"acc_norm_stderr\": 0.03752833958003337\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.2566137566137566,\n \"acc_stderr\": 0.022494510767503154,\n \"acc_norm\": 0.2566137566137566,\n \"acc_norm_stderr\": 0.022494510767503154\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.18253968253968253,\n \"acc_stderr\": 0.03455071019102148,\n \"acc_norm\": 0.18253968253968253,\n \"acc_norm_stderr\": 0.03455071019102148\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.31,\n \"acc_stderr\": 0.04648231987117316,\n \"acc_norm\": 0.31,\n \"acc_norm_stderr\": 0.04648231987117316\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.27419354838709675,\n \"acc_stderr\": 0.025378139970885196,\n \"acc_norm\": 0.27419354838709675,\n \"acc_norm_stderr\": 0.025378139970885196\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.24630541871921183,\n \"acc_stderr\": 0.030315099285617732,\n \"acc_norm\": 0.24630541871921183,\n \"acc_norm_stderr\": 0.030315099285617732\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.23,\n \"acc_stderr\": 0.04229525846816505,\n \"acc_norm\": 0.23,\n \"acc_norm_stderr\": 0.04229525846816505\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.24242424242424243,\n \"acc_stderr\": 0.03346409881055953,\n \"acc_norm\": 0.24242424242424243,\n \"acc_norm_stderr\": 0.03346409881055953\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.21717171717171718,\n \"acc_stderr\": 0.029376616484945637,\n \"acc_norm\": 0.21717171717171718,\n \"acc_norm_stderr\": 0.029376616484945637\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.19689119170984457,\n \"acc_stderr\": 0.02869787397186067,\n \"acc_norm\": 0.19689119170984457,\n \"acc_norm_stderr\": 0.02869787397186067\n },\n 
\"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.2230769230769231,\n \"acc_stderr\": 0.021107730127243998,\n \"acc_norm\": 0.2230769230769231,\n \"acc_norm_stderr\": 0.021107730127243998\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.24444444444444444,\n \"acc_stderr\": 0.02620276653465215,\n \"acc_norm\": 0.24444444444444444,\n \"acc_norm_stderr\": 0.02620276653465215\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.23109243697478993,\n \"acc_stderr\": 0.027381406927868966,\n \"acc_norm\": 0.23109243697478993,\n \"acc_norm_stderr\": 0.027381406927868966\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.1986754966887417,\n \"acc_stderr\": 0.03257847384436775,\n \"acc_norm\": 0.1986754966887417,\n \"acc_norm_stderr\": 0.03257847384436775\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.23669724770642203,\n \"acc_stderr\": 0.01822407811729908,\n \"acc_norm\": 0.23669724770642203,\n \"acc_norm_stderr\": 0.01822407811729908\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.16203703703703703,\n \"acc_stderr\": 0.02513045365226846,\n \"acc_norm\": 0.16203703703703703,\n \"acc_norm_stderr\": 0.02513045365226846\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.23529411764705882,\n \"acc_stderr\": 0.029771775228145628,\n \"acc_norm\": 0.23529411764705882,\n \"acc_norm_stderr\": 0.029771775228145628\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.2616033755274262,\n \"acc_stderr\": 0.028609516716994934,\n \"acc_norm\": 0.2616033755274262,\n \"acc_norm_stderr\": 0.028609516716994934\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.336322869955157,\n \"acc_stderr\": 0.031708824268455,\n \"acc_norm\": 0.336322869955157,\n \"acc_norm_stderr\": 0.031708824268455\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.21374045801526717,\n \"acc_stderr\": 0.0359546161177469,\n \"acc_norm\": 0.21374045801526717,\n \"acc_norm_stderr\": 0.0359546161177469\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.24793388429752067,\n \"acc_stderr\": 0.03941897526516303,\n \"acc_norm\": 0.24793388429752067,\n \"acc_norm_stderr\": 0.03941897526516303\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.2962962962962963,\n \"acc_stderr\": 0.04414343666854933,\n \"acc_norm\": 0.2962962962962963,\n \"acc_norm_stderr\": 0.04414343666854933\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.24539877300613497,\n \"acc_stderr\": 0.03380939813943354,\n \"acc_norm\": 0.24539877300613497,\n \"acc_norm_stderr\": 0.03380939813943354\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.2857142857142857,\n \"acc_stderr\": 0.042878587513404544,\n \"acc_norm\": 0.2857142857142857,\n \"acc_norm_stderr\": 0.042878587513404544\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.2524271844660194,\n \"acc_stderr\": 0.04301250399690875,\n \"acc_norm\": 0.2524271844660194,\n \"acc_norm_stderr\": 0.04301250399690875\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.2564102564102564,\n \"acc_stderr\": 0.028605953702004253,\n \"acc_norm\": 0.2564102564102564,\n \"acc_norm_stderr\": 0.028605953702004253\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.3,\n \"acc_stderr\": 0.046056618647183814,\n \"acc_norm\": 0.3,\n \"acc_norm_stderr\": 0.046056618647183814\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 
0.2515964240102171,\n \"acc_stderr\": 0.015517322365529624,\n \"acc_norm\": 0.2515964240102171,\n \"acc_norm_stderr\": 0.015517322365529624\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.24277456647398843,\n \"acc_stderr\": 0.023083658586984204,\n \"acc_norm\": 0.24277456647398843,\n \"acc_norm_stderr\": 0.023083658586984204\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.23798882681564246,\n \"acc_stderr\": 0.014242630070574882,\n \"acc_norm\": 0.23798882681564246,\n \"acc_norm_stderr\": 0.014242630070574882\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.22875816993464052,\n \"acc_stderr\": 0.024051029739912258,\n \"acc_norm\": 0.22875816993464052,\n \"acc_norm_stderr\": 0.024051029739912258\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.2733118971061093,\n \"acc_stderr\": 0.02531176597542612,\n \"acc_norm\": 0.2733118971061093,\n \"acc_norm_stderr\": 0.02531176597542612\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.2654320987654321,\n \"acc_stderr\": 0.024569223600460845,\n \"acc_norm\": 0.2654320987654321,\n \"acc_norm_stderr\": 0.024569223600460845\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.2553191489361702,\n \"acc_stderr\": 0.02601199293090201,\n \"acc_norm\": 0.2553191489361702,\n \"acc_norm_stderr\": 0.02601199293090201\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.2392438070404172,\n \"acc_stderr\": 0.010896123652676651,\n \"acc_norm\": 0.2392438070404172,\n \"acc_norm_stderr\": 0.010896123652676651\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.20220588235294118,\n \"acc_stderr\": 0.02439819298665492,\n \"acc_norm\": 0.20220588235294118,\n \"acc_norm_stderr\": 0.02439819298665492\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.2565359477124183,\n \"acc_stderr\": 0.01766784161237899,\n \"acc_norm\": 0.2565359477124183,\n \"acc_norm_stderr\": 0.01766784161237899\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.3090909090909091,\n \"acc_stderr\": 0.044262946482000985,\n \"acc_norm\": 0.3090909090909091,\n \"acc_norm_stderr\": 0.044262946482000985\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.22448979591836735,\n \"acc_stderr\": 0.02671143055553842,\n \"acc_norm\": 0.22448979591836735,\n \"acc_norm_stderr\": 0.02671143055553842\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.23880597014925373,\n \"acc_stderr\": 0.030147775935409224,\n \"acc_norm\": 0.23880597014925373,\n \"acc_norm_stderr\": 0.030147775935409224\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.21,\n \"acc_stderr\": 0.040936018074033256,\n \"acc_norm\": 0.21,\n \"acc_norm_stderr\": 0.040936018074033256\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.3192771084337349,\n \"acc_stderr\": 0.0362933532994786,\n \"acc_norm\": 0.3192771084337349,\n \"acc_norm_stderr\": 0.0362933532994786\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.19883040935672514,\n \"acc_stderr\": 0.03061111655743253,\n \"acc_norm\": 0.19883040935672514,\n \"acc_norm_stderr\": 0.03061111655743253\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.23623011015911874,\n \"mc1_stderr\": 0.014869755015871093,\n \"mc2\": 0.484210555153098,\n \"mc2_stderr\": 0.016831351518748705\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.48303078137332284,\n \"acc_stderr\": 0.014044390401612976\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.0,\n \"acc_stderr\": 0.0\n }\n}\n```", 
"repo_url": "https://huggingface.co/Technoculture/Mediquad-orca-20B", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2024_01_15T20_52_04.068553", "path": ["**/details_harness|arc:challenge|25_2024-01-15T20-52-04.068553.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2024-01-15T20-52-04.068553.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2024_01_15T20_52_04.068553", "path": ["**/details_harness|gsm8k|5_2024-01-15T20-52-04.068553.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2024-01-15T20-52-04.068553.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2024_01_15T20_52_04.068553", "path": ["**/details_harness|hellaswag|10_2024-01-15T20-52-04.068553.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2024-01-15T20-52-04.068553.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2024_01_15T20_52_04.068553", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-15T20-52-04.068553.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-15T20-52-04.068553.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-15T20-52-04.068553.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-15T20-52-04.068553.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-15T20-52-04.068553.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-15T20-52-04.068553.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-15T20-52-04.068553.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-15T20-52-04.068553.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-01-15T20-52-04.068553.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-15T20-52-04.068553.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-15T20-52-04.068553.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-15T20-52-04.068553.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-15T20-52-04.068553.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-15T20-52-04.068553.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-15T20-52-04.068553.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-15T20-52-04.068553.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-15T20-52-04.068553.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-15T20-52-04.068553.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-15T20-52-04.068553.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-15T20-52-04.068553.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-15T20-52-04.068553.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-15T20-52-04.068553.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-15T20-52-04.068553.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-15T20-52-04.068553.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-15T20-52-04.068553.parquet", 
"**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-15T20-52-04.068553.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-15T20-52-04.068553.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-15T20-52-04.068553.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-15T20-52-04.068553.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-15T20-52-04.068553.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-15T20-52-04.068553.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-15T20-52-04.068553.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-15T20-52-04.068553.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-15T20-52-04.068553.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-15T20-52-04.068553.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-15T20-52-04.068553.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-15T20-52-04.068553.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-15T20-52-04.068553.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-15T20-52-04.068553.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-15T20-52-04.068553.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-15T20-52-04.068553.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-15T20-52-04.068553.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-15T20-52-04.068553.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-15T20-52-04.068553.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-15T20-52-04.068553.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-15T20-52-04.068553.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-15T20-52-04.068553.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-15T20-52-04.068553.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-15T20-52-04.068553.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-15T20-52-04.068553.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-15T20-52-04.068553.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-15T20-52-04.068553.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-15T20-52-04.068553.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-15T20-52-04.068553.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-15T20-52-04.068553.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-15T20-52-04.068553.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-15T20-52-04.068553.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-15T20-52-04.068553.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-15T20-52-04.068553.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-15T20-52-04.068553.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-15T20-52-04.068553.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-15T20-52-04.068553.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-15T20-52-04.068553.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-15T20-52-04.068553.parquet", 
"**/details_harness|hendrycksTest-college_computer_science|5_2024-01-15T20-52-04.068553.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-01-15T20-52-04.068553.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-15T20-52-04.068553.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-15T20-52-04.068553.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-15T20-52-04.068553.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-15T20-52-04.068553.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-15T20-52-04.068553.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-15T20-52-04.068553.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-15T20-52-04.068553.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-15T20-52-04.068553.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-15T20-52-04.068553.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-15T20-52-04.068553.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-15T20-52-04.068553.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-15T20-52-04.068553.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-15T20-52-04.068553.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-15T20-52-04.068553.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-15T20-52-04.068553.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-15T20-52-04.068553.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-15T20-52-04.068553.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-15T20-52-04.068553.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-15T20-52-04.068553.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-15T20-52-04.068553.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-15T20-52-04.068553.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-15T20-52-04.068553.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-15T20-52-04.068553.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-15T20-52-04.068553.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-15T20-52-04.068553.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-15T20-52-04.068553.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-15T20-52-04.068553.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-15T20-52-04.068553.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-15T20-52-04.068553.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-15T20-52-04.068553.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-15T20-52-04.068553.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-15T20-52-04.068553.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-15T20-52-04.068553.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-15T20-52-04.068553.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-15T20-52-04.068553.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-15T20-52-04.068553.parquet", 
"**/details_harness|hendrycksTest-philosophy|5_2024-01-15T20-52-04.068553.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-15T20-52-04.068553.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-15T20-52-04.068553.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-15T20-52-04.068553.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-15T20-52-04.068553.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-15T20-52-04.068553.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-15T20-52-04.068553.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-15T20-52-04.068553.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-15T20-52-04.068553.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-15T20-52-04.068553.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-15T20-52-04.068553.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-15T20-52-04.068553.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2024_01_15T20_52_04.068553", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-15T20-52-04.068553.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-15T20-52-04.068553.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2024_01_15T20_52_04.068553", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-15T20-52-04.068553.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-15T20-52-04.068553.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2024_01_15T20_52_04.068553", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-15T20-52-04.068553.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-15T20-52-04.068553.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2024_01_15T20_52_04.068553", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-15T20-52-04.068553.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-15T20-52-04.068553.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2024_01_15T20_52_04.068553", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-15T20-52-04.068553.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-15T20-52-04.068553.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2024_01_15T20_52_04.068553", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-15T20-52-04.068553.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-15T20-52-04.068553.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2024_01_15T20_52_04.068553", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-15T20-52-04.068553.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-15T20-52-04.068553.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2024_01_15T20_52_04.068553", "path": 
["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-15T20-52-04.068553.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-15T20-52-04.068553.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2024_01_15T20_52_04.068553", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-15T20-52-04.068553.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-15T20-52-04.068553.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2024_01_15T20_52_04.068553", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-15T20-52-04.068553.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-15T20-52-04.068553.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2024_01_15T20_52_04.068553", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-15T20-52-04.068553.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-15T20-52-04.068553.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2024_01_15T20_52_04.068553", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-15T20-52-04.068553.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-15T20-52-04.068553.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2024_01_15T20_52_04.068553", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-15T20-52-04.068553.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-15T20-52-04.068553.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2024_01_15T20_52_04.068553", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-15T20-52-04.068553.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-15T20-52-04.068553.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2024_01_15T20_52_04.068553", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-15T20-52-04.068553.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-15T20-52-04.068553.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2024_01_15T20_52_04.068553", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-15T20-52-04.068553.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-15T20-52-04.068553.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2024_01_15T20_52_04.068553", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-15T20-52-04.068553.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-15T20-52-04.068553.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2024_01_15T20_52_04.068553", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-15T20-52-04.068553.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-global_facts|5_2024-01-15T20-52-04.068553.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2024_01_15T20_52_04.068553", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-15T20-52-04.068553.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-15T20-52-04.068553.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2024_01_15T20_52_04.068553", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-15T20-52-04.068553.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-15T20-52-04.068553.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2024_01_15T20_52_04.068553", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-15T20-52-04.068553.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-15T20-52-04.068553.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2024_01_15T20_52_04.068553", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-15T20-52-04.068553.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-15T20-52-04.068553.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2024_01_15T20_52_04.068553", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-15T20-52-04.068553.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-15T20-52-04.068553.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2024_01_15T20_52_04.068553", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-15T20-52-04.068553.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-15T20-52-04.068553.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2024_01_15T20_52_04.068553", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-15T20-52-04.068553.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-15T20-52-04.068553.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2024_01_15T20_52_04.068553", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-15T20-52-04.068553.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-15T20-52-04.068553.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2024_01_15T20_52_04.068553", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-15T20-52-04.068553.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-15T20-52-04.068553.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2024_01_15T20_52_04.068553", "path": 
["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-15T20-52-04.068553.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-15T20-52-04.068553.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2024_01_15T20_52_04.068553", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-15T20-52-04.068553.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-15T20-52-04.068553.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2024_01_15T20_52_04.068553", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-15T20-52-04.068553.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-15T20-52-04.068553.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2024_01_15T20_52_04.068553", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-15T20-52-04.068553.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-15T20-52-04.068553.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2024_01_15T20_52_04.068553", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-15T20-52-04.068553.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-15T20-52-04.068553.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2024_01_15T20_52_04.068553", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-15T20-52-04.068553.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-15T20-52-04.068553.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2024_01_15T20_52_04.068553", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-15T20-52-04.068553.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-15T20-52-04.068553.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2024_01_15T20_52_04.068553", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-15T20-52-04.068553.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-15T20-52-04.068553.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2024_01_15T20_52_04.068553", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-15T20-52-04.068553.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-15T20-52-04.068553.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2024_01_15T20_52_04.068553", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-15T20-52-04.068553.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-15T20-52-04.068553.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2024_01_15T20_52_04.068553", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-15T20-52-04.068553.parquet"]}, 
{"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-15T20-52-04.068553.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2024_01_15T20_52_04.068553", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-15T20-52-04.068553.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-15T20-52-04.068553.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2024_01_15T20_52_04.068553", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-15T20-52-04.068553.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-15T20-52-04.068553.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2024_01_15T20_52_04.068553", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-15T20-52-04.068553.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-15T20-52-04.068553.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2024_01_15T20_52_04.068553", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-15T20-52-04.068553.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-15T20-52-04.068553.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2024_01_15T20_52_04.068553", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-15T20-52-04.068553.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-15T20-52-04.068553.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2024_01_15T20_52_04.068553", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-15T20-52-04.068553.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-15T20-52-04.068553.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2024_01_15T20_52_04.068553", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-15T20-52-04.068553.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-15T20-52-04.068553.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2024_01_15T20_52_04.068553", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-15T20-52-04.068553.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-15T20-52-04.068553.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2024_01_15T20_52_04.068553", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-15T20-52-04.068553.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-15T20-52-04.068553.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2024_01_15T20_52_04.068553", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-15T20-52-04.068553.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-15T20-52-04.068553.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2024_01_15T20_52_04.068553", "path": 
["**/details_harness|hendrycksTest-professional_law|5_2024-01-15T20-52-04.068553.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-01-15T20-52-04.068553.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2024_01_15T20_52_04.068553", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-15T20-52-04.068553.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-15T20-52-04.068553.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2024_01_15T20_52_04.068553", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-15T20-52-04.068553.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-15T20-52-04.068553.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2024_01_15T20_52_04.068553", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-15T20-52-04.068553.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-15T20-52-04.068553.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2024_01_15T20_52_04.068553", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-15T20-52-04.068553.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-15T20-52-04.068553.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2024_01_15T20_52_04.068553", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-15T20-52-04.068553.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-15T20-52-04.068553.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2024_01_15T20_52_04.068553", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-15T20-52-04.068553.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-15T20-52-04.068553.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2024_01_15T20_52_04.068553", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-15T20-52-04.068553.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-15T20-52-04.068553.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2024_01_15T20_52_04.068553", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-15T20-52-04.068553.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-15T20-52-04.068553.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2024_01_15T20_52_04.068553", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-15T20-52-04.068553.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-15T20-52-04.068553.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2024_01_15T20_52_04.068553", "path": ["**/details_harness|winogrande|5_2024-01-15T20-52-04.068553.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2024-01-15T20-52-04.068553.parquet"]}]}, {"config_name": "results", "data_files": [{"split": 
"2024_01_15T20_52_04.068553", "path": ["results_2024-01-15T20-52-04.068553.parquet"]}, {"split": "latest", "path": ["results_2024-01-15T20-52-04.068553.parquet"]}]}]} | 2024-01-15T20:54:42+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of Technoculture/Mediquad-orca-20B
Dataset automatically created during the evaluation run of model Technoculture/Mediquad-orca-20B on the Open LLM Leaderboard.
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
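A minimal loading example, reconstructed from the snippet preserved in this card's metadata (the config name `harness_winogrande_5` is just one of the 63 available configurations):

```python
from datasets import load_dataset

# load the per-sample details of a single task from the latest run
data = load_dataset(
    "open-llm-leaderboard/details_Technoculture__Mediquad-orca-20B",
    "harness_winogrande_5",
    split="train",
)
```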
## Latest results
These are the latest results from run 2024-01-15T20:52:04.068553 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
## Dataset Details
### Dataset Description
- Curated by:
- Funded by [optional]:
- Shared by [optional]:
- Language(s) (NLP):
- License:
### Dataset Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Out-of-Scope Use
## Dataset Structure
## Dataset Creation
### Curation Rationale
### Source Data
#### Data Collection and Processing
#### Who are the source data producers?
### Annotations [optional]
#### Annotation process
#### Who are the annotators?
#### Personal and Sensitive Information
## Bias, Risks, and Limitations
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Dataset Card Authors [optional]
## Dataset Card Contact
97faaf98d7ef21869d176115e669e2a4286513bf |
Data was derived from https://huggingface.co/datasets/facebook/flores
We normalized topics by (1) making them lowercase and (2) removing subcategories ('travel, expenses' -> 'travel'). Afterwards, we dropped every category that contained fewer than 15 sentences.
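A minimal sketch of that preprocessing, assuming the raw topic strings and sentences are available as plain Python lists (the helper names are illustrative, not the actual preprocessing script):

```python
from collections import Counter

def normalize_topic(topic: str) -> str:
    # lowercase and keep only the main category before the first comma,
    # e.g. 'Travel, expenses' -> 'travel'
    return topic.lower().split(",")[0].strip()

def build_clustering_examples(sentences, topics, min_count=15):
    labels = [normalize_topic(t) for t in topics]
    counts = Counter(labels)
    # drop every category that contains fewer than `min_count` sentences
    return [(s, l) for s, l in zip(sentences, labels) if counts[l] >= min_count]
```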
The Flores-200 dataset is hosted by Facebook and licensed under the [Creative Commons Attribution-ShareAlike 4.0 International License](https://creativecommons.org/licenses/by-sa/4.0/). | jinaai/flores_clustering | [
"license:cc-by-sa-4.0",
"region:eu"
] | 2024-01-15T21:09:17+00:00 | {"license": "cc-by-sa-4.0", "dataset_info": {"features": [{"name": "sentences", "sequence": "string"}, {"name": "labels", "sequence": "string"}], "splits": [{"name": "test", "num_bytes": 249084, "num_examples": 1}], "download_size": 154328, "dataset_size": 249084}, "configs": [{"config_name": "default", "data_files": [{"split": "test", "path": "data/test-*"}]}]} | 2024-01-18T10:39:29+00:00 | [] | [] | TAGS
#license-cc-by-sa-4.0 #region-eu
|
Data was derived from URL
We normalized topics by (1) making them lowercase and (2) removing subcategories ('travel, expenses' -> 'travel'). Afterwards, we dropped every category that contained fewer than 15 sentences.
The Flores-200 dataset is hosted by Facebook and licensed under the Creative Commons Attribution-ShareAlike 4.0 International License.
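For completeness, a loading sketch; the config and split names below are taken from this card's metadata, which lists a single `test` split with `sentences` and `labels` columns:

```python
from datasets import load_dataset

dataset = load_dataset("jinaai/flores_clustering", split="test")
# each row holds parallel lists of sentences and their topic labels
print(dataset[0]["sentences"][0], dataset[0]["labels"][0])
```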
"TAGS\n#license-cc-by-sa-4.0 #region-eu \n"
] | [
17
] | [
"passage: TAGS\n#license-cc-by-sa-4.0 #region-eu \n"
] |
3b54640b951e44969e12e141208f3bf1fb7d5bf3 |
# Dataset of futami_mami/双海真美/후타미마미 (THE iDOLM@STER)
This is the dataset of futami_mami/双海真美/후타미마미 (THE iDOLM@STER), containing 500 images and their tags.
The core tags of this character are `brown_hair, side_ponytail, brown_eyes, short_hair, hair_ornament`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:--------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 474.81 MiB | [Download](https://huggingface.co/datasets/CyberHarem/futami_mami_theidolmster/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 324.48 MiB | [Download](https://huggingface.co/datasets/CyberHarem/futami_mami_theidolmster/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1085 | 636.95 MiB | [Download](https://huggingface.co/datasets/CyberHarem/futami_mami_theidolmster/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 436.37 MiB | [Download](https://huggingface.co/datasets/CyberHarem/futami_mami_theidolmster/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1085 | 825.13 MiB | [Download](https://huggingface.co/datasets/CyberHarem/futami_mami_theidolmster/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
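Any of the packages above can be fetched the same way as the raw archive shown below; a minimal sketch for the 800px IMG+TXT variant (the target directory name is arbitrary):

```python
import os
import zipfile
from huggingface_hub import hf_hub_download

# fetch the 800px IMG+TXT package instead of the raw archive
zip_file = hf_hub_download(
    repo_id='CyberHarem/futami_mami_theidolmster',
    repo_type='dataset',
    filename='dataset-800.zip',
)
dataset_dir = 'dataset_800'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)
```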
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/futami_mami_theidolmster',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 6 |  |  |  |  |  | open_mouth, smile, 2girls, ;d, one_eye_closed, v, ribbon |
| 1 | 17 |  |  |  |  |  | open_mouth, 2girls, long_hair, :d |
| 2 | 9 |  |  |  |  |  | 2girls, twins, sisters, grin |
| 3 | 5 |  |  |  |  |  | 1girl, navel, open_mouth, shorts, smile, solo, midriff, one_eye_closed, ;d, long_hair, hair_bobbles, looking_at_viewer, striped_thighhighs |
| 4 | 8 |  |  |  |  |  | 1girl, hair_bobbles, solo, smile, open_mouth, cute_&_girly_(idolmaster) |
| 5 | 9 |  |  |  |  |  | 1girl, open_mouth, solo, :d, star_(symbol), skirt, thighhighs |
| 6 | 5 |  |  |  |  |  | navel, smile, blush, one_eye_closed, open_mouth, 1girl, breasts, sailor_bikini, 2girls, ;d, hand_on_hip, looking_at_viewer, solo_focus, white_bikini, yellow_bikini |
| 7 | 5 |  |  |  |  |  | 1girl, hair_bobbles, blush, hoodie, simple_background, white_background, grin, looking_at_viewer, denim_shorts, one_eye_closed, solo_focus, upper_body |
| 8 | 5 |  |  |  |  |  | 1girl, bikini, simple_background, solo, cleavage, looking_at_viewer, medium_breasts, white_background, collarbone, navel, open_mouth, armpits, cow_print, grin, heart, long_hair, star_(symbol) |
| 9 | 6 |  |  |  |  |  | blush, looking_at_viewer, smile, solo_focus, bangs, long_hair, nipples, nude, small_breasts, 2girls, collarbone, navel, 1girl, medium_breasts, scrunchie, sweat |
| 10 | 13 |  |  |  |  |  | looking_at_viewer, smile, 1girl, standing, white_shirt, miniskirt, shiny_hair, star_print, long_hair, open_mouth, pleated_skirt, short_sleeves, white_background, bangs, black_skirt, hair_scrunchie, one_eye_closed, collarbone, jacket, multiple_girls, solo_focus, star_hair_ornament |
| 11 | 5 |  |  |  |  |  | blue_thighhighs, mismatched_legwear, multiple_girls, solo_focus, 1girl, choker, hair_flower, knee_boots, open_mouth, zettai_ryouiki, :d, ^_^, audience, frilled_skirt, grin, idol, microphone, one_eye_closed, red_flower, ribbon, stage |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | open_mouth | smile | 2girls | ;d | one_eye_closed | v | ribbon | long_hair | :d | twins | sisters | grin | 1girl | navel | shorts | solo | midriff | hair_bobbles | looking_at_viewer | striped_thighhighs | cute_&_girly_(idolmaster) | star_(symbol) | skirt | thighhighs | blush | breasts | sailor_bikini | hand_on_hip | solo_focus | white_bikini | yellow_bikini | hoodie | simple_background | white_background | denim_shorts | upper_body | bikini | cleavage | medium_breasts | collarbone | armpits | cow_print | heart | bangs | nipples | nude | small_breasts | scrunchie | sweat | standing | white_shirt | miniskirt | shiny_hair | star_print | pleated_skirt | short_sleeves | black_skirt | hair_scrunchie | jacket | multiple_girls | star_hair_ornament | blue_thighhighs | mismatched_legwear | choker | hair_flower | knee_boots | zettai_ryouiki | ^_^ | audience | frilled_skirt | idol | microphone | red_flower | stage |
|----:|----------:|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:-------------|:--------|:---------|:-----|:-----------------|:----|:---------|:------------|:-----|:--------|:----------|:-------|:--------|:--------|:---------|:-------|:----------|:---------------|:--------------------|:---------------------|:----------------------------|:----------------|:--------|:-------------|:--------|:----------|:----------------|:--------------|:-------------|:---------------|:----------------|:---------|:--------------------|:-------------------|:---------------|:-------------|:---------|:-----------|:-----------------|:-------------|:----------|:------------|:--------|:--------|:----------|:-------|:----------------|:------------|:--------|:-----------|:--------------|:------------|:-------------|:-------------|:----------------|:----------------|:--------------|:-----------------|:---------|:-----------------|:---------------------|:------------------|:---------------------|:---------|:--------------|:-------------|:-----------------|:------|:-----------|:----------------|:-------|:-------------|:-------------|:--------|
| 0 | 6 |  |  |  |  |  | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 17 |  |  |  |  |  | X | | X | | | | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 9 |  |  |  |  |  | | | X | | | | | | | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 5 |  |  |  |  |  | X | X | | X | X | | | X | | | | | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 8 |  |  |  |  |  | X | X | | | | | | | | | | | X | | | X | | X | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 5 | 9 |  |  |  |  |  | X | | | | | | | | X | | | | X | | | X | | | | | | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 6 | 5 |  |  |  |  |  | X | X | X | X | X | | | | | | | | X | X | | | | | X | | | | | | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 7 | 5 |  |  |  |  |  | | | | | X | | | | | | | X | X | | | | | X | X | | | | | | X | | | | X | | | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 8 | 5 |  |  |  |  |  | X | | | | | | | X | | | | X | X | X | | X | | | X | | | X | | | | | | | | | | | X | X | | | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 9 | 6 |  |  |  |  |  | | X | X | | | | | X | | | | | X | X | | | | | X | | | | | | X | | | | X | | | | | | | | | | X | X | | | | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | |
| 10 | 13 |  |  |  |  |  | X | X | | | X | | | X | | | | | X | | | | | | X | | | | | | | | | | X | | | | | X | | | | | | X | | | | X | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | |
| 11 | 5 |  |  |  |  |  | X | | | | X | | X | | X | | | X | X | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | X | X | X | X | X | X | X | X | X | X | X | X | X |
| CyberHarem/futami_mami_theidolmster | [
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] | 2024-01-15T21:09:24+00:00 | {"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]} | 2024-01-15T22:58:20+00:00 | [] | [] | TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
| Dataset of futami\_mami/双海真美/후타미마미 (THE iDOLM@STER)
===================================================
This is the dataset of futami\_mami/双海真美/후타미마미 (THE iDOLM@STER), containing 500 images and their tags.
The core tags of this character are 'brown\_hair, side\_ponytail, brown\_eyes, short\_hair, hair\_ornament', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the DeepGHS Team (huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for waifuc loading. If you need it, just run the following code
List of Clusters
----------------
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
### Table Version
| [
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] | [
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] | [
44,
61,
5,
4
] | [
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
45b7df84cf307b2ac7a8f0ea4b5ac736916ac593 |
# Dataset of akizuki_ritsuko/秋月律子 (THE iDOLM@STER)
This is the dataset of akizuki_ritsuko/秋月律子 (THE iDOLM@STER), containing 500 images and their tags.
The core tags of this character are `brown_hair, glasses, brown_eyes, antenna_hair, breasts, braid, twin_braids, folded_ponytail, short_hair`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:------------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 445.94 MiB | [Download](https://huggingface.co/datasets/CyberHarem/akizuki_ritsuko_theidolmster/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 317.09 MiB | [Download](https://huggingface.co/datasets/CyberHarem/akizuki_ritsuko_theidolmster/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1064 | 610.42 MiB | [Download](https://huggingface.co/datasets/CyberHarem/akizuki_ritsuko_theidolmster/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 416.22 MiB | [Download](https://huggingface.co/datasets/CyberHarem/akizuki_ritsuko_theidolmster/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1064 | 775.34 MiB | [Download](https://huggingface.co/datasets/CyberHarem/akizuki_ritsuko_theidolmster/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/akizuki_ritsuko_theidolmster',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
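As a follow-up, the loaded items can be filtered by tag. A small sketch, assuming `item.meta['tags']` behaves like a tag-to-score mapping as printed by the loop above (adjust if your copy stores a plain list):

```python
from waifuc.source import LocalSource

wanted_tag = 'smile'
selected = [
    item
    for item in LocalSource(dataset_dir)  # dataset_dir from the snippet above
    if wanted_tag in item.meta.get('tags', {})
]
print(f'{len(selected)} images tagged with {wanted_tag!r}')
```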
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 17 |  |  |  |  |  | 1girl, solo, smile, open_mouth |
| 1 | 12 |  |  |  |  |  | 1girl, formal, solo, suit, smile, adjusting_eyewear, looking_at_viewer |
| 2 | 6 |  |  |  |  |  | 1girl, crop_top, fingerless_gloves, midriff, navel, necktie, belt, garter_straps, smile, thighhighs, wrist_cuffs, bare_shoulders, boots, fishnets, solo, adjusting_eyewear, short_shorts |
| 3 | 17 |  |  |  |  |  | 1girl, nude, solo, nipples, blush, medium_breasts, navel, pussy, large_breasts |
| 4 | 11 |  |  |  |  |  | 1girl, nipples, hetero, solo_focus, 1boy, censored, large_breasts, blush, nude, open_mouth, penis, female_pubic_hair, cum_in_pussy, cum_on_breasts, vaginal, after_sex, facial, spread_legs, sweat |
| 5 | 5 |  |  |  |  |  | 1girl, cleavage, detached_collar, looking_at_viewer, medium_breasts, playboy_bunny, solo, wrist_cuffs, fake_animal_ears, rabbit_ears, strapless_leotard, black-framed_eyewear, red_bowtie, blush, brown_pantyhose, cowboy_shot, fishnet_pantyhose, large_breasts, long_hair, rabbit_tail, semi-rimless_eyewear, smile, standing, sweat, white_background |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | solo | smile | open_mouth | formal | suit | adjusting_eyewear | looking_at_viewer | crop_top | fingerless_gloves | midriff | navel | necktie | belt | garter_straps | thighhighs | wrist_cuffs | bare_shoulders | boots | fishnets | short_shorts | nude | nipples | blush | medium_breasts | pussy | large_breasts | hetero | solo_focus | 1boy | censored | penis | female_pubic_hair | cum_in_pussy | cum_on_breasts | vaginal | after_sex | facial | spread_legs | sweat | cleavage | detached_collar | playboy_bunny | fake_animal_ears | rabbit_ears | strapless_leotard | black-framed_eyewear | red_bowtie | brown_pantyhose | cowboy_shot | fishnet_pantyhose | long_hair | rabbit_tail | semi-rimless_eyewear | standing | white_background |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------|:--------|:-------------|:---------|:-------|:--------------------|:--------------------|:-----------|:--------------------|:----------|:--------|:----------|:-------|:----------------|:-------------|:--------------|:-----------------|:--------|:-----------|:---------------|:-------|:----------|:--------|:-----------------|:--------|:----------------|:---------|:-------------|:-------|:-----------|:--------|:--------------------|:---------------|:-----------------|:----------|:------------|:---------|:--------------|:--------|:-----------|:------------------|:----------------|:-------------------|:--------------|:--------------------|:-----------------------|:-------------|:------------------|:--------------|:--------------------|:------------|:--------------|:-----------------------|:-----------|:-------------------|
| 0 | 17 |  |  |  |  |  | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 12 |  |  |  |  |  | X | X | X | | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 6 |  |  |  |  |  | X | X | X | | | | X | | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 17 |  |  |  |  |  | X | X | | | | | | | | | | X | | | | | | | | | | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 11 |  |  |  |  |  | X | | | X | | | | | | | | | | | | | | | | | | X | X | X | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | |
| 5 | 5 |  |  |  |  |  | X | X | X | | | | | X | | | | | | | | | X | | | | | | | X | X | | X | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
| CyberHarem/akizuki_ritsuko_theidolmster | [
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] | 2024-01-15T21:09:28+00:00 | {"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]} | 2024-01-15T22:59:13+00:00 | [] | [] | TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
| Dataset of akizuki\_ritsuko/秋月律子 (THE iDOLM@STER)
=================================================
This is the dataset of akizuki\_ritsuko/秋月律子 (THE iDOLM@STER), containing 500 images and their tags.
The core tags of this character are 'brown\_hair, glasses, brown\_eyes, antenna\_hair, breasts, braid, twin\_braids, folded\_ponytail, short\_hair', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the DeepGHS Team (huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for waifuc loading. If you need it, just run the following code
List of Clusters
----------------
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
### Table Version
| [
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] | [
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] | [
44,
61,
5,
4
] | [
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
8d759727e808272d80d58499c5043a51e7687cff |
# Dataset of miura_azusa/三浦あずさ (THE iDOLM@STER)
This is the dataset of miura_azusa/三浦あずさ (THE iDOLM@STER), containing 500 images and their tags.
The core tags of this character are `blue_hair, ahoge, short_hair, breasts, red_eyes, large_breasts`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:--------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 531.48 MiB | [Download](https://huggingface.co/datasets/CyberHarem/miura_azusa_theidolmster/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 336.18 MiB | [Download](https://huggingface.co/datasets/CyberHarem/miura_azusa_theidolmster/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1198 | 684.32 MiB | [Download](https://huggingface.co/datasets/CyberHarem/miura_azusa_theidolmster/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 483.79 MiB | [Download](https://huggingface.co/datasets/CyberHarem/miura_azusa_theidolmster/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1198 | 917.59 MiB | [Download](https://huggingface.co/datasets/CyberHarem/miura_azusa_theidolmster/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/miura_azusa_theidolmster',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
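For the processed IMG+TXT packages, waifuc is not required: by convention each image ships with a same-named `.txt` file of comma-separated tags. A sketch, assuming such a package has already been extracted to `img_dir` (a hypothetical path):

```python
import os
from PIL import Image

img_dir = 'dataset_800'  # assumed extraction directory of e.g. dataset-800.zip
for name in sorted(os.listdir(img_dir)):
    if not name.lower().endswith(('.png', '.jpg', '.jpeg', '.webp')):
        continue
    image = Image.open(os.path.join(img_dir, name))
    # the tag file shares the image's base name
    txt_path = os.path.splitext(os.path.join(img_dir, name))[0] + '.txt'
    with open(txt_path, 'r', encoding='utf-8') as f:
        tags = [t.strip() for t in f.read().split(',')]
    print(name, image.size, tags[:5])
```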
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 5 |  |  |  |  |  | 1girl, elbow_gloves, long_hair, solo, open_mouth, smile, bride, closed_eyes, wedding_dress |
| 1 | 6 |  |  |  |  |  | 1girl, bare_shoulders, elbow_gloves, solo, wedding_dress, blush, bridal_veil, cleavage, flower, long_hair, necklace, crown, open_mouth, :d, bouquet |
| 2 | 5 |  |  |  |  |  | 1girl, necklace, open_mouth, smile, solo, wedding_dress, petals, bare_shoulders, blush, hand_on_own_cheek, ^_^, bouquet, cleavage, elbow_gloves, hair_flower, rose, veil |
| 3 | 7 |  |  |  |  |  | 1girl, necklace, smile, solo, bracelet, dress, looking_at_viewer, purple_eyes |
| 4 | 6 |  |  |  |  |  | 1girl, blush, solo, open_mouth, :d, apron, looking_at_viewer |
| 5 | 5 |  |  |  |  |  | 1girl, long_hair, sleeveless_turtleneck, solo, skirt, smile, bare_shoulders, open_mouth |
| 6 | 6 |  |  |  |  |  | 1girl, cleavage, open_mouth, smile, solo, navel, side-tie_bikini_bottom, underboob, white_bikini |
| 7 | 7 |  |  |  |  |  | 1girl, day, smile, beach, cleavage, navel, ocean, solo, looking_at_viewer, outdoors, white_bikini, cloud, sky, side-tie_bikini_bottom |
| 8 | 6 |  |  |  |  |  | 1girl, hair_flower, hibiscus, solo, long_hair, cleavage, smile, day, side-tie_bikini_bottom, white_bikini |
| 9 | 5 |  |  |  |  |  | 1girl, bare_shoulders, looking_at_viewer, open_mouth, solo, white_dress, detached_sleeves, smile, blush, dated, character_name, happy_birthday, simple_background, white_background |
| 10 | 5 |  |  |  |  |  | 1girl, heart, looking_at_viewer, open_mouth, solo, :d, blush, parted_bangs, puffy_short_sleeves, white_gloves, wrist_cuffs, beret, detached_collar, frilled_dress, sparkle, standing, bowtie, cleavage, hands_up, hat_bow, medium_breasts, neck_ribbon, pink_dress, purple_eyes, striped_bow, white_dress |
| 11 | 6 |  |  |  |  |  | 1girl, crop_top, looking_at_viewer, midriff, navel, solo, wrist_cuffs, blush, mini_hat, open_mouth, short_sleeves, detached_collar, purple_skirt, stomach, :d, collarbone, miniskirt, necktie, puffy_sleeves, standing, strapless |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | elbow_gloves | long_hair | solo | open_mouth | smile | bride | closed_eyes | wedding_dress | bare_shoulders | blush | bridal_veil | cleavage | flower | necklace | crown | :d | bouquet | petals | hand_on_own_cheek | ^_^ | hair_flower | rose | veil | bracelet | dress | looking_at_viewer | purple_eyes | apron | sleeveless_turtleneck | skirt | navel | side-tie_bikini_bottom | underboob | white_bikini | day | beach | ocean | outdoors | cloud | sky | hibiscus | white_dress | detached_sleeves | dated | character_name | happy_birthday | simple_background | white_background | heart | parted_bangs | puffy_short_sleeves | white_gloves | wrist_cuffs | beret | detached_collar | frilled_dress | sparkle | standing | bowtie | hands_up | hat_bow | medium_breasts | neck_ribbon | pink_dress | striped_bow | crop_top | midriff | mini_hat | short_sleeves | purple_skirt | stomach | collarbone | miniskirt | necktie | puffy_sleeves | strapless |
|----:|----------:|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:--------|:---------------|:------------|:-------|:-------------|:--------|:--------|:--------------|:----------------|:-----------------|:--------|:--------------|:-----------|:---------|:-----------|:--------|:-----|:----------|:---------|:--------------------|:------|:--------------|:-------|:-------|:-----------|:--------|:--------------------|:--------------|:--------|:------------------------|:--------|:--------|:-------------------------|:------------|:---------------|:------|:--------|:--------|:-----------|:--------|:------|:-----------|:--------------|:-------------------|:--------|:-----------------|:-----------------|:--------------------|:-------------------|:--------|:---------------|:----------------------|:---------------|:--------------|:--------|:------------------|:----------------|:----------|:-----------|:---------|:-----------|:----------|:-----------------|:--------------|:-------------|:--------------|:-----------|:----------|:-----------|:----------------|:---------------|:----------|:-------------|:------------|:----------|:----------------|:------------|
| 0 | 5 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 6 |  |  |  |  |  | X | X | X | X | X | | | | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 5 |  |  |  |  |  | X | X | | X | X | X | | | X | X | X | | X | | X | | | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 7 |  |  |  |  |  | X | | | X | | X | | | | | | | | | X | | | | | | | | | | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 6 |  |  |  |  |  | X | | | X | X | | | | | | X | | | | | | X | | | | | | | | | | X | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 5 | 5 |  |  |  |  |  | X | | X | X | X | X | | | | X | | | | | | | | | | | | | | | | | | | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 6 | 6 |  |  |  |  |  | X | | | X | X | X | | | | | | | X | | | | | | | | | | | | | | | | | | | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 7 | 7 |  |  |  |  |  | X | | | X | | X | | | | | | | X | | | | | | | | | | | | | | X | | | | | X | X | | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 8 | 6 |  |  |  |  |  | X | | X | X | | X | | | | | | | X | | | | | | | | | X | | | | | | | | | | | X | | X | X | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 9 | 5 |  |  |  |  |  | X | | | X | X | X | | | | X | X | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 10 | 5 |  |  |  |  |  | X | | | X | X | | | | | | X | | X | | | | X | | | | | | | | | | X | X | | | | | | | | | | | | | | | X | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | |
| 11 | 6 |  |  |  |  |  | X | | | X | X | | | | | | X | | | | | | X | | | | | | | | | | X | | | | | X | | | | | | | | | | | | | | | | | | | | | | X | | X | | | X | | | | | | | | X | X | X | X | X | X | X | X | X | X | X |
| CyberHarem/miura_azusa_theidolmster | [
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] | 2024-01-15T21:09:30+00:00 | {"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]} | 2024-01-15T22:43:38+00:00 | [] | [] | TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
| Dataset of miura\_azusa/三浦あずさ (THE iDOLM@STER)
==============================================
This is the dataset of miura\_azusa/三浦あずさ (THE iDOLM@STER), containing 500 images and their tags.
The core tags of this character are 'blue\_hair, ahoge, short\_hair, breasts, red\_eyes, large\_breasts', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the DeepGHS Team (huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for waifuc loading. If you need it, just run the following code
List of Clusters
----------------
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
### Table Version
| [
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] | [
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] | [
44,
61,
5,
4
] | [
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
5b8e72b747ad2337e938e0a1125ef27222172471 |
# Dataset Card for Evaluation run of Swisslex/Mixtral-Orca-v0.1
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [Swisslex/Mixtral-Orca-v0.1](https://huggingface.co/Swisslex/Mixtral-Orca-v0.1) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Swisslex__Mixtral-Orca-v0.1",
"harness_winogrande_5",
split="train")
```
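The aggregated metrics mentioned above can be loaded the same way; a minimal sketch, assuming the "results" configuration follows the same loading pattern:

```python
from datasets import load_dataset

# the aggregated results live in the "results" configuration;
# "train" points at the latest run
results = load_dataset(
    "open-llm-leaderboard/details_Swisslex__Mixtral-Orca-v0.1",
    "results",
    split="train",
)
print(results[0])
```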
## Latest results
These are the [latest results from run 2024-01-15T21:13:17.458135](https://huggingface.co/datasets/open-llm-leaderboard/details_Swisslex__Mixtral-Orca-v0.1/blob/main/results_2024-01-15T21-13-17.458135.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each one in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.6595098527970858,
"acc_stderr": 0.03195828654224696,
"acc_norm": 0.6650394576540025,
"acc_norm_stderr": 0.0326025270767851,
"mc1": 0.4602203182374541,
"mc1_stderr": 0.017448017223960877,
"mc2": 0.6385028598239161,
"mc2_stderr": 0.01556463307062193
},
"harness|arc:challenge|25": {
"acc": 0.6774744027303754,
"acc_stderr": 0.013659980894277371,
"acc_norm": 0.697098976109215,
"acc_norm_stderr": 0.013428241573185349
},
"harness|hellaswag|10": {
"acc": 0.7167894841665007,
"acc_stderr": 0.0044963697421321076,
"acc_norm": 0.8887671778530173,
"acc_norm_stderr": 0.003137776444277206
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.31,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.31,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6222222222222222,
"acc_stderr": 0.04188307537595853,
"acc_norm": 0.6222222222222222,
"acc_norm_stderr": 0.04188307537595853
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.75,
"acc_stderr": 0.03523807393012047,
"acc_norm": 0.75,
"acc_norm_stderr": 0.03523807393012047
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.7,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.7,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.7283018867924528,
"acc_stderr": 0.027377706624670713,
"acc_norm": 0.7283018867924528,
"acc_norm_stderr": 0.027377706624670713
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7569444444444444,
"acc_stderr": 0.03586879280080341,
"acc_norm": 0.7569444444444444,
"acc_norm_stderr": 0.03586879280080341
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.48,
"acc_stderr": 0.050211673156867795,
"acc_norm": 0.48,
"acc_norm_stderr": 0.050211673156867795
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.56,
"acc_stderr": 0.049888765156985884,
"acc_norm": 0.56,
"acc_norm_stderr": 0.049888765156985884
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.46,
"acc_stderr": 0.05009082659620333,
"acc_norm": 0.46,
"acc_norm_stderr": 0.05009082659620333
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.653179190751445,
"acc_stderr": 0.036291466701596636,
"acc_norm": 0.653179190751445,
"acc_norm_stderr": 0.036291466701596636
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.38235294117647056,
"acc_stderr": 0.04835503696107224,
"acc_norm": 0.38235294117647056,
"acc_norm_stderr": 0.04835503696107224
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.79,
"acc_stderr": 0.040936018074033256,
"acc_norm": 0.79,
"acc_norm_stderr": 0.040936018074033256
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.6212765957446809,
"acc_stderr": 0.03170995606040655,
"acc_norm": 0.6212765957446809,
"acc_norm_stderr": 0.03170995606040655
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.5263157894736842,
"acc_stderr": 0.046970851366478626,
"acc_norm": 0.5263157894736842,
"acc_norm_stderr": 0.046970851366478626
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.6068965517241379,
"acc_stderr": 0.040703290137070705,
"acc_norm": 0.6068965517241379,
"acc_norm_stderr": 0.040703290137070705
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.42857142857142855,
"acc_stderr": 0.025487187147859375,
"acc_norm": 0.42857142857142855,
"acc_norm_stderr": 0.025487187147859375
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.47619047619047616,
"acc_stderr": 0.04467062628403273,
"acc_norm": 0.47619047619047616,
"acc_norm_stderr": 0.04467062628403273
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.39,
"acc_stderr": 0.04902071300001975,
"acc_norm": 0.39,
"acc_norm_stderr": 0.04902071300001975
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7838709677419354,
"acc_stderr": 0.02341529343356852,
"acc_norm": 0.7838709677419354,
"acc_norm_stderr": 0.02341529343356852
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.541871921182266,
"acc_stderr": 0.03505630140785741,
"acc_norm": 0.541871921182266,
"acc_norm_stderr": 0.03505630140785741
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.68,
"acc_stderr": 0.04688261722621505,
"acc_norm": 0.68,
"acc_norm_stderr": 0.04688261722621505
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7515151515151515,
"acc_stderr": 0.033744026441394036,
"acc_norm": 0.7515151515151515,
"acc_norm_stderr": 0.033744026441394036
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.8181818181818182,
"acc_stderr": 0.0274796030105388,
"acc_norm": 0.8181818181818182,
"acc_norm_stderr": 0.0274796030105388
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8963730569948186,
"acc_stderr": 0.02199531196364424,
"acc_norm": 0.8963730569948186,
"acc_norm_stderr": 0.02199531196364424
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6538461538461539,
"acc_stderr": 0.024121125416941183,
"acc_norm": 0.6538461538461539,
"acc_norm_stderr": 0.024121125416941183
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.3592592592592593,
"acc_stderr": 0.029252905927251972,
"acc_norm": 0.3592592592592593,
"acc_norm_stderr": 0.029252905927251972
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.7142857142857143,
"acc_stderr": 0.02934457250063435,
"acc_norm": 0.7142857142857143,
"acc_norm_stderr": 0.02934457250063435
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.3973509933774834,
"acc_stderr": 0.039955240076816806,
"acc_norm": 0.3973509933774834,
"acc_norm_stderr": 0.039955240076816806
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8568807339449541,
"acc_stderr": 0.015014462497168585,
"acc_norm": 0.8568807339449541,
"acc_norm_stderr": 0.015014462497168585
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5462962962962963,
"acc_stderr": 0.03395322726375797,
"acc_norm": 0.5462962962962963,
"acc_norm_stderr": 0.03395322726375797
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.8137254901960784,
"acc_stderr": 0.02732547096671632,
"acc_norm": 0.8137254901960784,
"acc_norm_stderr": 0.02732547096671632
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.7848101265822784,
"acc_stderr": 0.026750826994676177,
"acc_norm": 0.7848101265822784,
"acc_norm_stderr": 0.026750826994676177
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.7219730941704036,
"acc_stderr": 0.030069584874494043,
"acc_norm": 0.7219730941704036,
"acc_norm_stderr": 0.030069584874494043
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.8091603053435115,
"acc_stderr": 0.03446513350752598,
"acc_norm": 0.8091603053435115,
"acc_norm_stderr": 0.03446513350752598
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.8016528925619835,
"acc_stderr": 0.03640118271990946,
"acc_norm": 0.8016528925619835,
"acc_norm_stderr": 0.03640118271990946
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7685185185185185,
"acc_stderr": 0.04077494709252627,
"acc_norm": 0.7685185185185185,
"acc_norm_stderr": 0.04077494709252627
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.6932515337423313,
"acc_stderr": 0.03623089915724147,
"acc_norm": 0.6932515337423313,
"acc_norm_stderr": 0.03623089915724147
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.45535714285714285,
"acc_stderr": 0.04726835553719099,
"acc_norm": 0.45535714285714285,
"acc_norm_stderr": 0.04726835553719099
},
"harness|hendrycksTest-management|5": {
"acc": 0.8252427184466019,
"acc_stderr": 0.03760178006026621,
"acc_norm": 0.8252427184466019,
"acc_norm_stderr": 0.03760178006026621
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8803418803418803,
"acc_stderr": 0.02126271940040698,
"acc_norm": 0.8803418803418803,
"acc_norm_stderr": 0.02126271940040698
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.68,
"acc_stderr": 0.046882617226215034,
"acc_norm": 0.68,
"acc_norm_stderr": 0.046882617226215034
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8492975734355045,
"acc_stderr": 0.012793420883120807,
"acc_norm": 0.8492975734355045,
"acc_norm_stderr": 0.012793420883120807
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7225433526011561,
"acc_stderr": 0.024105712607754307,
"acc_norm": 0.7225433526011561,
"acc_norm_stderr": 0.024105712607754307
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.4793296089385475,
"acc_stderr": 0.016708205559996137,
"acc_norm": 0.4793296089385475,
"acc_norm_stderr": 0.016708205559996137
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.761437908496732,
"acc_stderr": 0.024404394928087877,
"acc_norm": 0.761437908496732,
"acc_norm_stderr": 0.024404394928087877
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.7620578778135049,
"acc_stderr": 0.024185150647818707,
"acc_norm": 0.7620578778135049,
"acc_norm_stderr": 0.024185150647818707
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7746913580246914,
"acc_stderr": 0.02324620264781975,
"acc_norm": 0.7746913580246914,
"acc_norm_stderr": 0.02324620264781975
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.4787234042553192,
"acc_stderr": 0.029800481645628693,
"acc_norm": 0.4787234042553192,
"acc_norm_stderr": 0.029800481645628693
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.4954367666232073,
"acc_stderr": 0.012769704263117526,
"acc_norm": 0.4954367666232073,
"acc_norm_stderr": 0.012769704263117526
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.7610294117647058,
"acc_stderr": 0.02590528064489301,
"acc_norm": 0.7610294117647058,
"acc_norm_stderr": 0.02590528064489301
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6830065359477124,
"acc_stderr": 0.018824219512706207,
"acc_norm": 0.6830065359477124,
"acc_norm_stderr": 0.018824219512706207
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6454545454545455,
"acc_stderr": 0.045820048415054174,
"acc_norm": 0.6454545454545455,
"acc_norm_stderr": 0.045820048415054174
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7020408163265306,
"acc_stderr": 0.029279567411065677,
"acc_norm": 0.7020408163265306,
"acc_norm_stderr": 0.029279567411065677
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8507462686567164,
"acc_stderr": 0.025196929874827075,
"acc_norm": 0.8507462686567164,
"acc_norm_stderr": 0.025196929874827075
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.85,
"acc_stderr": 0.0358870281282637,
"acc_norm": 0.85,
"acc_norm_stderr": 0.0358870281282637
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5,
"acc_stderr": 0.03892494720807614,
"acc_norm": 0.5,
"acc_norm_stderr": 0.03892494720807614
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8596491228070176,
"acc_stderr": 0.026640582539133203,
"acc_norm": 0.8596491228070176,
"acc_norm_stderr": 0.026640582539133203
},
"harness|truthfulqa:mc|0": {
"mc1": 0.4602203182374541,
"mc1_stderr": 0.017448017223960877,
"mc2": 0.6385028598239161,
"mc2_stderr": 0.01556463307062193
},
"harness|winogrande|5": {
"acc": 0.8113654301499605,
"acc_stderr": 0.010995172318019806
},
"harness|gsm8k|5": {
"acc": 0.3730098559514784,
"acc_stderr": 0.013320876609777215
}
}
```
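Since the block above is plain JSON, per-category aggregates can be recomputed from it. A small sketch, where `raw` is a placeholder variable assumed to hold that JSON text:

```python
import json

results = json.loads(raw)  # `raw` holds the JSON object printed above
mmlu_acc = [
    v["acc"]
    for k, v in results.items()
    if k.startswith("harness|hendrycksTest-")  # the 57 MMLU subtasks
]
print(f"{len(mmlu_acc)} MMLU subtasks, mean acc = {sum(mmlu_acc) / len(mmlu_acc):.4f}")
```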
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | open-llm-leaderboard/details_Swisslex__Mixtral-Orca-v0.1 | [
"region:us"
] | 2024-01-15T21:15:32+00:00 | {"pretty_name": "Evaluation run of Swisslex/Mixtral-Orca-v0.1", "dataset_summary": "Dataset automatically created during the evaluation run of model [Swisslex/Mixtral-Orca-v0.1](https://huggingface.co/Swisslex/Mixtral-Orca-v0.1) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Swisslex__Mixtral-Orca-v0.1\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2024-01-15T21:13:17.458135](https://huggingface.co/datasets/open-llm-leaderboard/details_Swisslex__Mixtral-Orca-v0.1/blob/main/results_2024-01-15T21-13-17.458135.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6595098527970858,\n \"acc_stderr\": 0.03195828654224696,\n \"acc_norm\": 0.6650394576540025,\n \"acc_norm_stderr\": 0.0326025270767851,\n \"mc1\": 0.4602203182374541,\n \"mc1_stderr\": 0.017448017223960877,\n \"mc2\": 0.6385028598239161,\n \"mc2_stderr\": 0.01556463307062193\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.6774744027303754,\n \"acc_stderr\": 0.013659980894277371,\n \"acc_norm\": 0.697098976109215,\n \"acc_norm_stderr\": 0.013428241573185349\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.7167894841665007,\n \"acc_stderr\": 0.0044963697421321076,\n \"acc_norm\": 0.8887671778530173,\n \"acc_norm_stderr\": 0.003137776444277206\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.31,\n \"acc_stderr\": 0.04648231987117316,\n \"acc_norm\": 0.31,\n \"acc_norm_stderr\": 0.04648231987117316\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6222222222222222,\n \"acc_stderr\": 0.04188307537595853,\n \"acc_norm\": 0.6222222222222222,\n \"acc_norm_stderr\": 0.04188307537595853\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.75,\n \"acc_stderr\": 0.03523807393012047,\n \"acc_norm\": 0.75,\n \"acc_norm_stderr\": 0.03523807393012047\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.7,\n \"acc_stderr\": 0.046056618647183814,\n \"acc_norm\": 0.7,\n \"acc_norm_stderr\": 0.046056618647183814\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.7283018867924528,\n \"acc_stderr\": 0.027377706624670713,\n \"acc_norm\": 0.7283018867924528,\n \"acc_norm_stderr\": 0.027377706624670713\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7569444444444444,\n \"acc_stderr\": 0.03586879280080341,\n \"acc_norm\": 0.7569444444444444,\n \"acc_norm_stderr\": 0.03586879280080341\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.48,\n \"acc_stderr\": 0.050211673156867795,\n \"acc_norm\": 0.48,\n 
\"acc_norm_stderr\": 0.050211673156867795\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.56,\n \"acc_stderr\": 0.049888765156985884,\n \"acc_norm\": 0.56,\n \"acc_norm_stderr\": 0.049888765156985884\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.46,\n \"acc_stderr\": 0.05009082659620333,\n \"acc_norm\": 0.46,\n \"acc_norm_stderr\": 0.05009082659620333\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.653179190751445,\n \"acc_stderr\": 0.036291466701596636,\n \"acc_norm\": 0.653179190751445,\n \"acc_norm_stderr\": 0.036291466701596636\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.38235294117647056,\n \"acc_stderr\": 0.04835503696107224,\n \"acc_norm\": 0.38235294117647056,\n \"acc_norm_stderr\": 0.04835503696107224\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.79,\n \"acc_stderr\": 0.040936018074033256,\n \"acc_norm\": 0.79,\n \"acc_norm_stderr\": 0.040936018074033256\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.6212765957446809,\n \"acc_stderr\": 0.03170995606040655,\n \"acc_norm\": 0.6212765957446809,\n \"acc_norm_stderr\": 0.03170995606040655\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.5263157894736842,\n \"acc_stderr\": 0.046970851366478626,\n \"acc_norm\": 0.5263157894736842,\n \"acc_norm_stderr\": 0.046970851366478626\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.6068965517241379,\n \"acc_stderr\": 0.040703290137070705,\n \"acc_norm\": 0.6068965517241379,\n \"acc_norm_stderr\": 0.040703290137070705\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.42857142857142855,\n \"acc_stderr\": 0.025487187147859375,\n \"acc_norm\": 0.42857142857142855,\n \"acc_norm_stderr\": 0.025487187147859375\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.47619047619047616,\n \"acc_stderr\": 0.04467062628403273,\n \"acc_norm\": 0.47619047619047616,\n \"acc_norm_stderr\": 0.04467062628403273\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.39,\n \"acc_stderr\": 0.04902071300001975,\n \"acc_norm\": 0.39,\n \"acc_norm_stderr\": 0.04902071300001975\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7838709677419354,\n \"acc_stderr\": 0.02341529343356852,\n \"acc_norm\": 0.7838709677419354,\n \"acc_norm_stderr\": 0.02341529343356852\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.541871921182266,\n \"acc_stderr\": 0.03505630140785741,\n \"acc_norm\": 0.541871921182266,\n \"acc_norm_stderr\": 0.03505630140785741\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.68,\n \"acc_stderr\": 0.04688261722621505,\n \"acc_norm\": 0.68,\n \"acc_norm_stderr\": 0.04688261722621505\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.7515151515151515,\n \"acc_stderr\": 0.033744026441394036,\n \"acc_norm\": 0.7515151515151515,\n \"acc_norm_stderr\": 0.033744026441394036\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.8181818181818182,\n \"acc_stderr\": 0.0274796030105388,\n \"acc_norm\": 0.8181818181818182,\n \"acc_norm_stderr\": 0.0274796030105388\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.8963730569948186,\n \"acc_stderr\": 0.02199531196364424,\n \"acc_norm\": 0.8963730569948186,\n \"acc_norm_stderr\": 0.02199531196364424\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.6538461538461539,\n 
\"acc_stderr\": 0.024121125416941183,\n \"acc_norm\": 0.6538461538461539,\n \"acc_norm_stderr\": 0.024121125416941183\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.3592592592592593,\n \"acc_stderr\": 0.029252905927251972,\n \"acc_norm\": 0.3592592592592593,\n \"acc_norm_stderr\": 0.029252905927251972\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.7142857142857143,\n \"acc_stderr\": 0.02934457250063435,\n \"acc_norm\": 0.7142857142857143,\n \"acc_norm_stderr\": 0.02934457250063435\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.3973509933774834,\n \"acc_stderr\": 0.039955240076816806,\n \"acc_norm\": 0.3973509933774834,\n \"acc_norm_stderr\": 0.039955240076816806\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.8568807339449541,\n \"acc_stderr\": 0.015014462497168585,\n \"acc_norm\": 0.8568807339449541,\n \"acc_norm_stderr\": 0.015014462497168585\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.5462962962962963,\n \"acc_stderr\": 0.03395322726375797,\n \"acc_norm\": 0.5462962962962963,\n \"acc_norm_stderr\": 0.03395322726375797\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.8137254901960784,\n \"acc_stderr\": 0.02732547096671632,\n \"acc_norm\": 0.8137254901960784,\n \"acc_norm_stderr\": 0.02732547096671632\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.7848101265822784,\n \"acc_stderr\": 0.026750826994676177,\n \"acc_norm\": 0.7848101265822784,\n \"acc_norm_stderr\": 0.026750826994676177\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.7219730941704036,\n \"acc_stderr\": 0.030069584874494043,\n \"acc_norm\": 0.7219730941704036,\n \"acc_norm_stderr\": 0.030069584874494043\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.8091603053435115,\n \"acc_stderr\": 0.03446513350752598,\n \"acc_norm\": 0.8091603053435115,\n \"acc_norm_stderr\": 0.03446513350752598\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.8016528925619835,\n \"acc_stderr\": 0.03640118271990946,\n \"acc_norm\": 0.8016528925619835,\n \"acc_norm_stderr\": 0.03640118271990946\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7685185185185185,\n \"acc_stderr\": 0.04077494709252627,\n \"acc_norm\": 0.7685185185185185,\n \"acc_norm_stderr\": 0.04077494709252627\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.6932515337423313,\n \"acc_stderr\": 0.03623089915724147,\n \"acc_norm\": 0.6932515337423313,\n \"acc_norm_stderr\": 0.03623089915724147\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.45535714285714285,\n \"acc_stderr\": 0.04726835553719099,\n \"acc_norm\": 0.45535714285714285,\n \"acc_norm_stderr\": 0.04726835553719099\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.8252427184466019,\n \"acc_stderr\": 0.03760178006026621,\n \"acc_norm\": 0.8252427184466019,\n \"acc_norm_stderr\": 0.03760178006026621\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8803418803418803,\n \"acc_stderr\": 0.02126271940040698,\n \"acc_norm\": 0.8803418803418803,\n \"acc_norm_stderr\": 0.02126271940040698\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.68,\n \"acc_stderr\": 0.046882617226215034,\n \"acc_norm\": 0.68,\n \"acc_norm_stderr\": 0.046882617226215034\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8492975734355045,\n \"acc_stderr\": 0.012793420883120807,\n \"acc_norm\": 
0.8492975734355045,\n \"acc_norm_stderr\": 0.012793420883120807\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.7225433526011561,\n \"acc_stderr\": 0.024105712607754307,\n \"acc_norm\": 0.7225433526011561,\n \"acc_norm_stderr\": 0.024105712607754307\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.4793296089385475,\n \"acc_stderr\": 0.016708205559996137,\n \"acc_norm\": 0.4793296089385475,\n \"acc_norm_stderr\": 0.016708205559996137\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.761437908496732,\n \"acc_stderr\": 0.024404394928087877,\n \"acc_norm\": 0.761437908496732,\n \"acc_norm_stderr\": 0.024404394928087877\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7620578778135049,\n \"acc_stderr\": 0.024185150647818707,\n \"acc_norm\": 0.7620578778135049,\n \"acc_norm_stderr\": 0.024185150647818707\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.7746913580246914,\n \"acc_stderr\": 0.02324620264781975,\n \"acc_norm\": 0.7746913580246914,\n \"acc_norm_stderr\": 0.02324620264781975\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.4787234042553192,\n \"acc_stderr\": 0.029800481645628693,\n \"acc_norm\": 0.4787234042553192,\n \"acc_norm_stderr\": 0.029800481645628693\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4954367666232073,\n \"acc_stderr\": 0.012769704263117526,\n \"acc_norm\": 0.4954367666232073,\n \"acc_norm_stderr\": 0.012769704263117526\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.7610294117647058,\n \"acc_stderr\": 0.02590528064489301,\n \"acc_norm\": 0.7610294117647058,\n \"acc_norm_stderr\": 0.02590528064489301\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.6830065359477124,\n \"acc_stderr\": 0.018824219512706207,\n \"acc_norm\": 0.6830065359477124,\n \"acc_norm_stderr\": 0.018824219512706207\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6454545454545455,\n \"acc_stderr\": 0.045820048415054174,\n \"acc_norm\": 0.6454545454545455,\n \"acc_norm_stderr\": 0.045820048415054174\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.7020408163265306,\n \"acc_stderr\": 0.029279567411065677,\n \"acc_norm\": 0.7020408163265306,\n \"acc_norm_stderr\": 0.029279567411065677\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8507462686567164,\n \"acc_stderr\": 0.025196929874827075,\n \"acc_norm\": 0.8507462686567164,\n \"acc_norm_stderr\": 0.025196929874827075\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.85,\n \"acc_stderr\": 0.0358870281282637,\n \"acc_norm\": 0.85,\n \"acc_norm_stderr\": 0.0358870281282637\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5,\n \"acc_stderr\": 0.03892494720807614,\n \"acc_norm\": 0.5,\n \"acc_norm_stderr\": 0.03892494720807614\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.8596491228070176,\n \"acc_stderr\": 0.026640582539133203,\n \"acc_norm\": 0.8596491228070176,\n \"acc_norm_stderr\": 0.026640582539133203\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.4602203182374541,\n \"mc1_stderr\": 0.017448017223960877,\n \"mc2\": 0.6385028598239161,\n \"mc2_stderr\": 0.01556463307062193\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.8113654301499605,\n \"acc_stderr\": 0.010995172318019806\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.3730098559514784,\n \"acc_stderr\": 0.013320876609777215\n }\n}\n```", "repo_url": "https://huggingface.co/Swisslex/Mixtral-Orca-v0.1", "leaderboard_url": 
"https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2024_01_15T21_13_17.458135", "path": ["**/details_harness|arc:challenge|25_2024-01-15T21-13-17.458135.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2024-01-15T21-13-17.458135.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2024_01_15T21_13_17.458135", "path": ["**/details_harness|gsm8k|5_2024-01-15T21-13-17.458135.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2024-01-15T21-13-17.458135.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2024_01_15T21_13_17.458135", "path": ["**/details_harness|hellaswag|10_2024-01-15T21-13-17.458135.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2024-01-15T21-13-17.458135.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2024_01_15T21_13_17.458135", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-15T21-13-17.458135.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-15T21-13-17.458135.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-15T21-13-17.458135.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-15T21-13-17.458135.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-15T21-13-17.458135.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-15T21-13-17.458135.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-15T21-13-17.458135.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-15T21-13-17.458135.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-01-15T21-13-17.458135.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-15T21-13-17.458135.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-15T21-13-17.458135.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-15T21-13-17.458135.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-15T21-13-17.458135.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-15T21-13-17.458135.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-15T21-13-17.458135.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-15T21-13-17.458135.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-15T21-13-17.458135.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-15T21-13-17.458135.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-15T21-13-17.458135.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-15T21-13-17.458135.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-15T21-13-17.458135.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-15T21-13-17.458135.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-15T21-13-17.458135.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-15T21-13-17.458135.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-15T21-13-17.458135.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-15T21-13-17.458135.parquet", 
"**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-15T21-13-17.458135.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-15T21-13-17.458135.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-15T21-13-17.458135.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-15T21-13-17.458135.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-15T21-13-17.458135.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-15T21-13-17.458135.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-15T21-13-17.458135.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-15T21-13-17.458135.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-15T21-13-17.458135.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-15T21-13-17.458135.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-15T21-13-17.458135.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-15T21-13-17.458135.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-15T21-13-17.458135.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-15T21-13-17.458135.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-15T21-13-17.458135.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-15T21-13-17.458135.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-15T21-13-17.458135.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-15T21-13-17.458135.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-15T21-13-17.458135.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-15T21-13-17.458135.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-15T21-13-17.458135.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-15T21-13-17.458135.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-15T21-13-17.458135.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-15T21-13-17.458135.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-15T21-13-17.458135.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-15T21-13-17.458135.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-15T21-13-17.458135.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-15T21-13-17.458135.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-15T21-13-17.458135.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-15T21-13-17.458135.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-15T21-13-17.458135.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-15T21-13-17.458135.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-15T21-13-17.458135.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-15T21-13-17.458135.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-15T21-13-17.458135.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-15T21-13-17.458135.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-15T21-13-17.458135.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-15T21-13-17.458135.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-15T21-13-17.458135.parquet", 
"**/details_harness|hendrycksTest-college_mathematics|5_2024-01-15T21-13-17.458135.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-15T21-13-17.458135.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-15T21-13-17.458135.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-15T21-13-17.458135.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-15T21-13-17.458135.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-15T21-13-17.458135.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-15T21-13-17.458135.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-15T21-13-17.458135.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-15T21-13-17.458135.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-15T21-13-17.458135.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-15T21-13-17.458135.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-15T21-13-17.458135.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-15T21-13-17.458135.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-15T21-13-17.458135.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-15T21-13-17.458135.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-15T21-13-17.458135.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-15T21-13-17.458135.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-15T21-13-17.458135.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-15T21-13-17.458135.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-15T21-13-17.458135.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-15T21-13-17.458135.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-15T21-13-17.458135.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-15T21-13-17.458135.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-15T21-13-17.458135.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-15T21-13-17.458135.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-15T21-13-17.458135.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-15T21-13-17.458135.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-15T21-13-17.458135.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-15T21-13-17.458135.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-15T21-13-17.458135.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-15T21-13-17.458135.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-15T21-13-17.458135.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-15T21-13-17.458135.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-15T21-13-17.458135.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-15T21-13-17.458135.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-15T21-13-17.458135.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-15T21-13-17.458135.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-15T21-13-17.458135.parquet", 
"**/details_harness|hendrycksTest-prehistory|5_2024-01-15T21-13-17.458135.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-15T21-13-17.458135.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-15T21-13-17.458135.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-15T21-13-17.458135.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-15T21-13-17.458135.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-15T21-13-17.458135.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-15T21-13-17.458135.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-15T21-13-17.458135.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-15T21-13-17.458135.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-15T21-13-17.458135.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-15T21-13-17.458135.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2024_01_15T21_13_17.458135", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-15T21-13-17.458135.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-15T21-13-17.458135.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2024_01_15T21_13_17.458135", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-15T21-13-17.458135.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-15T21-13-17.458135.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2024_01_15T21_13_17.458135", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-15T21-13-17.458135.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-15T21-13-17.458135.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2024_01_15T21_13_17.458135", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-15T21-13-17.458135.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-15T21-13-17.458135.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2024_01_15T21_13_17.458135", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-15T21-13-17.458135.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-15T21-13-17.458135.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2024_01_15T21_13_17.458135", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-15T21-13-17.458135.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-15T21-13-17.458135.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2024_01_15T21_13_17.458135", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-15T21-13-17.458135.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-15T21-13-17.458135.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2024_01_15T21_13_17.458135", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-15T21-13-17.458135.parquet"]}, 
{"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-15T21-13-17.458135.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2024_01_15T21_13_17.458135", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-15T21-13-17.458135.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-15T21-13-17.458135.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2024_01_15T21_13_17.458135", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-15T21-13-17.458135.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-15T21-13-17.458135.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2024_01_15T21_13_17.458135", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-15T21-13-17.458135.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-15T21-13-17.458135.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2024_01_15T21_13_17.458135", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-15T21-13-17.458135.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-15T21-13-17.458135.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2024_01_15T21_13_17.458135", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-15T21-13-17.458135.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-15T21-13-17.458135.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2024_01_15T21_13_17.458135", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-15T21-13-17.458135.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-15T21-13-17.458135.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2024_01_15T21_13_17.458135", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-15T21-13-17.458135.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-15T21-13-17.458135.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2024_01_15T21_13_17.458135", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-15T21-13-17.458135.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-15T21-13-17.458135.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2024_01_15T21_13_17.458135", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-15T21-13-17.458135.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-15T21-13-17.458135.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2024_01_15T21_13_17.458135", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-15T21-13-17.458135.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-15T21-13-17.458135.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2024_01_15T21_13_17.458135", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-15T21-13-17.458135.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-15T21-13-17.458135.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2024_01_15T21_13_17.458135", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-15T21-13-17.458135.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-15T21-13-17.458135.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2024_01_15T21_13_17.458135", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-15T21-13-17.458135.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-15T21-13-17.458135.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2024_01_15T21_13_17.458135", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-15T21-13-17.458135.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-15T21-13-17.458135.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2024_01_15T21_13_17.458135", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-15T21-13-17.458135.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-15T21-13-17.458135.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2024_01_15T21_13_17.458135", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-15T21-13-17.458135.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-15T21-13-17.458135.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2024_01_15T21_13_17.458135", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-15T21-13-17.458135.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-15T21-13-17.458135.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2024_01_15T21_13_17.458135", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-15T21-13-17.458135.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-15T21-13-17.458135.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2024_01_15T21_13_17.458135", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-15T21-13-17.458135.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-15T21-13-17.458135.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2024_01_15T21_13_17.458135", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-15T21-13-17.458135.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-15T21-13-17.458135.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2024_01_15T21_13_17.458135", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-15T21-13-17.458135.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-15T21-13-17.458135.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2024_01_15T21_13_17.458135", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-15T21-13-17.458135.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-15T21-13-17.458135.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2024_01_15T21_13_17.458135", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-15T21-13-17.458135.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-15T21-13-17.458135.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2024_01_15T21_13_17.458135", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-15T21-13-17.458135.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-15T21-13-17.458135.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2024_01_15T21_13_17.458135", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-15T21-13-17.458135.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-15T21-13-17.458135.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2024_01_15T21_13_17.458135", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-15T21-13-17.458135.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-15T21-13-17.458135.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2024_01_15T21_13_17.458135", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-15T21-13-17.458135.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-15T21-13-17.458135.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2024_01_15T21_13_17.458135", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-15T21-13-17.458135.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-15T21-13-17.458135.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2024_01_15T21_13_17.458135", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-15T21-13-17.458135.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-15T21-13-17.458135.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2024_01_15T21_13_17.458135", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-15T21-13-17.458135.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-15T21-13-17.458135.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2024_01_15T21_13_17.458135", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-15T21-13-17.458135.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-15T21-13-17.458135.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2024_01_15T21_13_17.458135", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-15T21-13-17.458135.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-15T21-13-17.458135.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2024_01_15T21_13_17.458135", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-15T21-13-17.458135.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-15T21-13-17.458135.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2024_01_15T21_13_17.458135", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-15T21-13-17.458135.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-15T21-13-17.458135.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2024_01_15T21_13_17.458135", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-15T21-13-17.458135.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-15T21-13-17.458135.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2024_01_15T21_13_17.458135", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-15T21-13-17.458135.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-15T21-13-17.458135.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2024_01_15T21_13_17.458135", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-15T21-13-17.458135.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-15T21-13-17.458135.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2024_01_15T21_13_17.458135", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-15T21-13-17.458135.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-15T21-13-17.458135.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2024_01_15T21_13_17.458135", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-15T21-13-17.458135.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-15T21-13-17.458135.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2024_01_15T21_13_17.458135", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-15T21-13-17.458135.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-15T21-13-17.458135.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2024_01_15T21_13_17.458135", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-01-15T21-13-17.458135.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-professional_law|5_2024-01-15T21-13-17.458135.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2024_01_15T21_13_17.458135", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-15T21-13-17.458135.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-15T21-13-17.458135.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2024_01_15T21_13_17.458135", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-15T21-13-17.458135.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-15T21-13-17.458135.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2024_01_15T21_13_17.458135", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-15T21-13-17.458135.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-15T21-13-17.458135.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2024_01_15T21_13_17.458135", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-15T21-13-17.458135.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-15T21-13-17.458135.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2024_01_15T21_13_17.458135", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-15T21-13-17.458135.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-15T21-13-17.458135.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2024_01_15T21_13_17.458135", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-15T21-13-17.458135.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-15T21-13-17.458135.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2024_01_15T21_13_17.458135", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-15T21-13-17.458135.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-15T21-13-17.458135.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2024_01_15T21_13_17.458135", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-15T21-13-17.458135.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-15T21-13-17.458135.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2024_01_15T21_13_17.458135", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-15T21-13-17.458135.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-15T21-13-17.458135.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2024_01_15T21_13_17.458135", "path": ["**/details_harness|winogrande|5_2024-01-15T21-13-17.458135.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2024-01-15T21-13-17.458135.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2024_01_15T21_13_17.458135", "path": ["results_2024-01-15T21-13-17.458135.parquet"]}, {"split": "latest", "path": 
["results_2024-01-15T21-13-17.458135.parquet"]}]}]} | 2024-01-15T21:15:54+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of Swisslex/Mixtral-Orca-v0.1
Dataset automatically created during the evaluation run of model Swisslex/Mixtral-Orca-v0.1 on the Open LLM Leaderboard.
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
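```python
from datasets import load_dataset

data = load_dataset("open-llm-leaderboard/details_Swisslex__Mixtral-Orca-v0.1",
	"harness_winogrande_5",
	split="train")
```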
## Latest results
These are the latest results from run 2024-01-15T21:13:17.458135 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
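A minimal sketch of loading those aggregated numbers, assuming the `results` configuration and its `latest` split listed in this repo's configs:

```python
from datasets import load_dataset

# Aggregated metrics of the run; "latest" always points to the most recent eval.
results = load_dataset(
    "open-llm-leaderboard/details_Swisslex__Mixtral-Orca-v0.1",
    "results",
    split="latest",
)
```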
## Dataset Details
### Dataset Description
- Curated by:
- Funded by [optional]:
- Shared by [optional]:
- Language(s) (NLP):
- License:
### Dataset Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Out-of-Scope Use
## Dataset Structure
## Dataset Creation
### Curation Rationale
### Source Data
#### Data Collection and Processing
#### Who are the source data producers?
### Annotations [optional]
#### Annotation process
#### Who are the annotators?
#### Personal and Sensitive Information
## Bias, Risks, and Limitations
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Dataset Card Authors [optional]
## Dataset Card Contact
| [
"# Dataset Card for Evaluation run of Swisslex/Mixtral-Orca-v0.1\n\n\n\nDataset automatically created during the evaluation run of model Swisslex/Mixtral-Orca-v0.1 on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2024-01-15T21:13:17.458135(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of Swisslex/Mixtral-Orca-v0.1\n\n\n\nDataset automatically created during the evaluation run of model Swisslex/Mixtral-Orca-v0.1 on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2024-01-15T21:13:17.458135(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] | [
6,
185,
68,
4,
40,
29,
3,
4,
9,
6,
5,
7,
4,
7,
10,
9,
5,
9,
8,
10,
46,
8,
7,
10,
5
] | [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of Swisslex/Mixtral-Orca-v0.1\n\n\n\nDataset automatically created during the evaluation run of model Swisslex/Mixtral-Orca-v0.1 on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2024-01-15T21:13:17.458135(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):## Dataset Details### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Out-of-Scope Use## Dataset Structure## Dataset Creation### Curation Rationale### Source Data#### Data Collection and Processing#### Who are the source data producers?### Annotations [optional]#### Annotation process#### Who are the annotators?#### Personal and Sensitive Information## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Dataset Card Authors [optional]## Dataset Card Contact"
] |
f98b77c80537a6a5681d42da2b45e64a9320b700 |
# Anim-400K: A dataset designed from the ground up for automated dubbing of video

# What is Anim-400K?
Anim-400K is a large-scale dataset of aligned audio-video clips in both the English and Japanese languages. It comprises over 425K aligned clips (763 hours) of both video and audio, drawn from over 190 properties covering hundreds of themes and genres. Anim-400K is further augmented with metadata including genres, themes, show ratings, character profiles, and animation styles at the property level; episode synopses, ratings, and subtitles at the episode level; and pre-computed ASR at the aligned-clip level, to enable in-depth research into several audio-visual tasks.
Read the [[ArXiv Preprint](https://arxiv.org/abs/2401.05314)]
Check us out on [[Github](https://github.com/DavidMChan/Anim400K/)]
# News
**[January 2024]** Anim-400K (v1) available on [Huggingface Datasets](https://huggingface.co/datasets/davidchan/anim400k/). <br/>
**[January 2024]** Anim-400K (v1) release. <br/>
**[January 2024]** Anim-400K (v1) accepted at ICASSP2024. <br/>
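Access to the files is gated by the terms forms linked from the dataset page; once access is granted, a minimal sketch for pulling the data locally with `huggingface_hub` (repo id taken from the link above, local file layout not assumed):

```python
from huggingface_hub import snapshot_download

# Requires a logged-in account (`huggingface-cli login`) that has accepted
# the dataset's Terms of Use / NDA gating form.
local_dir = snapshot_download(
    repo_id="davidchan/anim400k",
    repo_type="dataset",
)
print(local_dir)  # path to the downloaded dataset files
```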
# Citation
If any part of our paper is helpful to your work, please cite with:
```
@inproceedings{cai2024anim400k,
title={ANIM-400K: A Large-Scale Dataset for Automated End to End Dubbing of Video},
author={Cai, Kevin and Liu, Chonghua and Chan, David M.},
booktitle={ICASSP 2024-2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={1--5},
year={2024},
organization={IEEE}
}
```
# Acknowledgements
This repository and its data release model are modeled on those used by the [MAD](https://github.com/Soldelli/MAD) dataset.
| davidchan/anim400k | [
"task_categories:text-to-speech",
"task_categories:automatic-speech-recognition",
"task_categories:audio-to-audio",
"task_categories:audio-classification",
"task_categories:text-classification",
"task_categories:text2text-generation",
"task_categories:video-classification",
"task_categories:summarization",
"size_categories:100K<n<1M",
"language:en",
"language:ja",
"arxiv:2401.05314",
"region:us"
] | 2024-01-15T21:19:20+00:00 | {"language": ["en", "ja"], "size_categories": ["100K<n<1M"], "task_categories": ["text-to-speech", "automatic-speech-recognition", "audio-to-audio", "audio-classification", "text-classification", "text2text-generation", "video-classification", "summarization"], "pretty_name": "Anim-400K", "extra_gated_prompt": "Due to copyright limitations, all prospect users of the Anim400K Datasets must sign both a Terms of Use Agreement (TOU) and a Non Disclosure Agreement (NDA). The form MUST be filled out for all users of the dataset. The answers to this form will auto-complete and sign the template TOU (https://docs.google.com/document/d/1MNAU12i4XIXj8O8ThUuep8jJw5WOGArCvZdcR2UmJGM/edit?usp=sharing) and NDA (https://docs.google.com/document/d/1cLtFX2GarMEzZn5RwL-gEuAthS1qaKOfb3r5m1icOX0/edit?usp=sharing)", "extra_gated_fields": {"Full Name": "text", "Email": "text", "Researcher Google Scholar Page": "text", "Affiliation": "text", "Name of Principal Investigator or Supervisor": "text", "Principle Investigator/Supervisior Google Scholar Page": "text", "Purpose of Intended Use": "text", "I understand that the Anim400K team and the Regents of the University of California make no warranties, express or implied, regarding the Dataset, including but not limited to being up-to-\u00addate, correct or complete\u2024 \u200bNeither the Anim400K team nor the Regents of the University of California can be held liable for providing access to the Dataset \u200bor usage of the \u200bDataset": "checkbox", "I understand and agree that the use of this Dataset is for scientific or research purposes only\u2024 Any other use is explicitly prohibited": "checkbox", "I understand and agree that this Dataset and the videos are protected by copyrights": "checkbox", "I understand and agree not to share the dataset with any third party": "checkbox", "I understand and agree that I (as the researcher) takes full responsibility for usage of the Dataset and processing the Dataset": "checkbox", "I have read, and I agree and sign the Non Disclosure Agreement": "checkbox", "I have read, and I agree and sign the Terms of Use Agreement": "checkbox"}} | 2024-01-30T03:46:15+00:00 | [
"2401.05314"
] | [
"en",
"ja"
] | TAGS
#task_categories-text-to-speech #task_categories-automatic-speech-recognition #task_categories-audio-to-audio #task_categories-audio-classification #task_categories-text-classification #task_categories-text2text-generation #task_categories-video-classification #task_categories-summarization #size_categories-100K<n<1M #language-English #language-Japanese #arxiv-2401.05314 #region-us
|
# Anim-400K: A dataset designed from the ground up for automated dubbing of video
!image
# What is Anim-400K?
Anim-400K is a large-scale dataset of aligned audio-video clips in both the English and Japanese languages. It comprises over 425K aligned clips (763 hours) of both video and audio, drawn from over 190 properties covering hundreds of themes and genres. Anim-400K is further augmented with metadata including genres, themes, show ratings, character profiles, and animation styles at the property level; episode synopses, ratings, and subtitles at the episode level; and pre-computed ASR at the aligned-clip level, to enable in-depth research into several audio-visual tasks.
Read the [ArXiv Preprint]
Check us out on [Github]
# News
[January 2024] Anim-400K (v1) available on Huggingface Datasets. <br/>
[January 2024] Anim-400K (v1) release. <br/>
[January 2024] Anim-400K (v1) accepted at ICASSP2024. <br/>
If any part of our paper is helpful to your work, please cite with:
# Acknowledgements
This repository and its data release model are modeled on those used by the MAD dataset.
| [
"# Anim-400K: A dataset designed from the ground up for automated dubbing of video\n\n\n!image",
"# What is Anim-400K?\n\nAnim-400K is a large-scale dataset of aligned audio-video clips in both the English and Japanese languages. It is comprised of over 425K aligned clips (763 hours) consisting of both video and audio drawn from over 190 properties covering hundreds of themes and genres. Anim400K is further augmented with metadata including genres, themes, show-ratings, character profiles, and animation styles at a property level, episode synopses, ratings, and subtitles at an episode level, and pre-computed ASR at an aligned clip level to enable in-depth research into several audio-visual tasks.\n\nRead the [ArXiv Preprint]\n\nCheck us out on [Github]",
"# News\n[January 2024] Anim-400K (v1) available on Huggingface Datasets. </br>\n[January 2024] Anim-400K (v1) release. </br>\n[January 2024] Anim-400K (v1) accepted at ICASSP2024. </br>\n\n\nIf any part of our paper is helpful to your work, please cite with:",
"# Acknowledgements\n\nThis repository, and data release model is modeled on that used by the MAD dataset."
] | [
"TAGS\n#task_categories-text-to-speech #task_categories-automatic-speech-recognition #task_categories-audio-to-audio #task_categories-audio-classification #task_categories-text-classification #task_categories-text2text-generation #task_categories-video-classification #task_categories-summarization #size_categories-100K<n<1M #language-English #language-Japanese #arxiv-2401.05314 #region-us \n",
"# Anim-400K: A dataset designed from the ground up for automated dubbing of video\n\n\n!image",
"# What is Anim-400K?\n\nAnim-400K is a large-scale dataset of aligned audio-video clips in both the English and Japanese languages. It is comprised of over 425K aligned clips (763 hours) consisting of both video and audio drawn from over 190 properties covering hundreds of themes and genres. Anim400K is further augmented with metadata including genres, themes, show-ratings, character profiles, and animation styles at a property level, episode synopses, ratings, and subtitles at an episode level, and pre-computed ASR at an aligned clip level to enable in-depth research into several audio-visual tasks.\n\nRead the [ArXiv Preprint]\n\nCheck us out on [Github]",
"# News\n[January 2024] Anim-400K (v1) available on Huggingface Datasets. </br>\n[January 2024] Anim-400K (v1) release. </br>\n[January 2024] Anim-400K (v1) accepted at ICASSP2024. </br>\n\n\nIf any part of our paper is helpful to your work, please cite with:",
"# Acknowledgements\n\nThis repository, and data release model is modeled on that used by the MAD dataset."
] | [
137,
23,
181,
91,
26
] | [
"passage: TAGS\n#task_categories-text-to-speech #task_categories-automatic-speech-recognition #task_categories-audio-to-audio #task_categories-audio-classification #task_categories-text-classification #task_categories-text2text-generation #task_categories-video-classification #task_categories-summarization #size_categories-100K<n<1M #language-English #language-Japanese #arxiv-2401.05314 #region-us \n# Anim-400K: A dataset designed from the ground up for automated dubbing of video\n\n\n!image# What is Anim-400K?\n\nAnim-400K is a large-scale dataset of aligned audio-video clips in both the English and Japanese languages. It is comprised of over 425K aligned clips (763 hours) consisting of both video and audio drawn from over 190 properties covering hundreds of themes and genres. Anim400K is further augmented with metadata including genres, themes, show-ratings, character profiles, and animation styles at a property level, episode synopses, ratings, and subtitles at an episode level, and pre-computed ASR at an aligned clip level to enable in-depth research into several audio-visual tasks.\n\nRead the [ArXiv Preprint]\n\nCheck us out on [Github]# News\n[January 2024] Anim-400K (v1) available on Huggingface Datasets. </br>\n[January 2024] Anim-400K (v1) release. </br>\n[January 2024] Anim-400K (v1) accepted at ICASSP2024. </br>\n\n\nIf any part of our paper is helpful to your work, please cite with:# Acknowledgements\n\nThis repository, and data release model is modeled on that used by the MAD dataset."
] |
4c3ea5f9a77243f72d901ef9d201170d897774d2 |
# Dataset of futami_ami/双海亜美/후타미아미 (THE iDOLM@STER)
This is the dataset of futami_ami/双海亜美/후타미아미 (THE iDOLM@STER), containing 500 images and their tags.
The core tags of this character are `brown_hair, side_ponytail, brown_eyes, short_hair, hair_ornament`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:-------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 386.91 MiB | [Download](https://huggingface.co/datasets/CyberHarem/futami_ami_theidolmster/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 282.87 MiB | [Download](https://huggingface.co/datasets/CyberHarem/futami_ami_theidolmster/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 964 | 525.64 MiB | [Download](https://huggingface.co/datasets/CyberHarem/futami_ami_theidolmster/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 362.20 MiB | [Download](https://huggingface.co/datasets/CyberHarem/futami_ami_theidolmster/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 964 | 660.22 MiB | [Download](https://huggingface.co/datasets/CyberHarem/futami_ami_theidolmster/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
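The IMG+TXT packages can also be consumed without waifuc. Below is a minimal sketch of downloading the 800px package and pairing images with their tags; it assumes (as is conventional for these packages, though not stated above) that each image ships with a same-stem `.txt` file holding its comma-separated tags.

```python
import os
import zipfile

from huggingface_hub import hf_hub_download

# download and extract the 800px IMG+TXT package
zip_file = hf_hub_download(
    repo_id='CyberHarem/futami_ami_theidolmster',
    repo_type='dataset',
    filename='dataset-800.zip',
)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall('dataset_800')

# pair every image with its same-stem tag file
for name in sorted(os.listdir('dataset_800')):
    stem, ext = os.path.splitext(name)
    if ext.lower() in ('.png', '.jpg', '.jpeg', '.webp'):
        tag_path = os.path.join('dataset_800', stem + '.txt')
        if os.path.exists(tag_path):
            with open(tag_path, encoding='utf-8') as f:
                print(name, '->', f.read().strip())
```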
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile

from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource

# download raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/futami_ami_theidolmster',
    repo_type='dataset',
    filename='dataset-raw.zip',
)

# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some recurring outfits may be mined from these clusters.
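One way to mine such clusters yourself is to count tag frequencies over the raw package. The sketch below reuses the `LocalSource` API from the snippet above and assumes `item.meta['tags']` is a mapping from tag name to confidence score.

```python
from collections import Counter

from waifuc.source import LocalSource

# count how often each tag occurs across the extracted raw package
# (`dataset_dir` as prepared by the loading snippet above)
counter = Counter()
for item in LocalSource('dataset_dir'):
    counter.update(item.meta['tags'].keys())

# frequent non-core tags hint at recurring outfits
for tag, count in counter.most_common(20):
    print(f'{tag}: {count}')
```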
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 7 |  |  |  |  |  | 1boy, 1girl, hetero, penis, pussy, solo_focus, vaginal, blush, mosaic_censoring, navel, open_mouth, girl_on_top, happy_sex, nipples, petite, small_breasts, smile, spread_legs, cowgirl_position, long_legs, looking_at_viewer, m_legs, nude, sweat, cum, pov |
| 1 | 7 |  |  |  |  |  | 2girls, :d, open_mouth, solo_focus |
| 2 | 6 |  |  |  |  |  | 1girl, solo, grin, hair_bobbles, looking_at_viewer |
| 3 | 12 |  |  |  |  |  | 2girls, smile, twins, sisters, open_mouth, long_hair |
| 4 | 10 |  |  |  |  |  | looking_at_viewer, pleated_skirt, white_shirt, 1girl, miniskirt, open_mouth, solo, :d, hair_scrunchie, school_uniform, standing, white_background, bangs, collarbone, long_sleeves, short_sleeves, simple_background, black_skirt, kneehighs, long_hair, multiple_girls, sailor_collar, shiny_hair, shoes |
| 5 | 5 |  |  |  |  |  | 1girl, blush, 1boy, hetero, open_mouth, saliva, tongue_out, fellatio, hair_bobbles, penis, sweat, bulge, mosaic_censoring, solo_focus, white_panties |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1boy | 1girl | hetero | penis | pussy | solo_focus | vaginal | blush | mosaic_censoring | navel | open_mouth | girl_on_top | happy_sex | nipples | petite | small_breasts | smile | spread_legs | cowgirl_position | long_legs | looking_at_viewer | m_legs | nude | sweat | cum | pov | 2girls | :d | solo | grin | hair_bobbles | twins | sisters | long_hair | pleated_skirt | white_shirt | miniskirt | hair_scrunchie | school_uniform | standing | white_background | bangs | collarbone | long_sleeves | short_sleeves | simple_background | black_skirt | kneehighs | multiple_girls | sailor_collar | shiny_hair | shoes | saliva | tongue_out | fellatio | bulge | white_panties |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-------|:--------|:---------|:--------|:--------|:-------------|:----------|:--------|:-------------------|:--------|:-------------|:--------------|:------------|:----------|:---------|:----------------|:--------|:--------------|:-------------------|:------------|:--------------------|:---------|:-------|:--------|:------|:------|:---------|:-----|:-------|:-------|:---------------|:--------|:----------|:------------|:----------------|:--------------|:------------|:-----------------|:-----------------|:-----------|:-------------------|:--------|:-------------|:---------------|:----------------|:--------------------|:--------------|:------------|:-----------------|:----------------|:-------------|:--------|:---------|:-------------|:-----------|:--------|:----------------|
| 0 | 7 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 7 |  |  |  |  |  | | | | | | X | | | | | X | | | | | | | | | | | | | | | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 6 |  |  |  |  |  | | X | | | | | | | | | | | | | | | | | | | X | | | | | | | | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 12 |  |  |  |  |  | | | | | | | | | | | X | | | | | | X | | | | | | | | | | X | | | | | X | X | X | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 10 |  |  |  |  |  | | X | | | | | | | | | X | | | | | | | | | | X | | | | | | | X | X | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | |
| 5 | 5 |  |  |  |  |  | X | X | X | X | | X | | X | X | | X | | | | | | | | | | | | | X | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X |
| CyberHarem/futami_ami_theidolmster | [
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] | 2024-01-15T21:28:57+00:00 | {"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]} | 2024-01-15T23:14:00+00:00 | [] | [] | TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
| Dataset of futami\_ami/双海亜美/후타미아미 (THE iDOLM@STER)
==================================================
This is the dataset of futami\_ami/双海亜美/후타미아미 (THE iDOLM@STER), containing 500 images and their tags.
The core tags of this character are 'brown\_hair, side\_ponytail, brown\_eyes, short\_hair, hair\_ornament', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
List of Clusters
----------------
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
### Table Version
| [
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] | [
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] | [
44,
61,
5,
4
] | [
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
a8191f45ec0a44a083545b39771b9fabd9870bd9 |
# Dataset of otonashi_kotori/音無小鳥/오토나시코토리 (THE iDOLM@STER)
This is the dataset of otonashi_kotori/音無小鳥/오토나시코토리 (THE iDOLM@STER), containing 500 images and their tags.
The core tags of this character are `green_hair, short_hair, mole_under_mouth, mole, hairband, brown_eyes, breasts, red_eyes, yellow_hairband`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:------------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 423.16 MiB | [Download](https://huggingface.co/datasets/CyberHarem/otonashi_kotori_theidolmster/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 299.13 MiB | [Download](https://huggingface.co/datasets/CyberHarem/otonashi_kotori_theidolmster/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1095 | 591.81 MiB | [Download](https://huggingface.co/datasets/CyberHarem/otonashi_kotori_theidolmster/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 394.96 MiB | [Download](https://huggingface.co/datasets/CyberHarem/otonashi_kotori_theidolmster/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1095 | 740.10 MiB | [Download](https://huggingface.co/datasets/CyberHarem/otonashi_kotori_theidolmster/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile

from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource

# download raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/otonashi_kotori_theidolmster',
    repo_type='dataset',
    filename='dataset-raw.zip',
)

# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some recurring outfits may be mined from these clusters.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 10 |  |  |  |  |  | 1girl, blush, solo, zettai_ryouiki, pencil_skirt, black_thighhighs, open_mouth, headset, smile, one_eye_closed |
| 1 | 13 |  |  |  |  |  | 1girl, black_thighhighs, solo, smile, bow, blush, looking_at_viewer, pencil_skirt, open_mouth, zettai_ryouiki, vest |
| 2 | 11 |  |  |  |  |  | 1girl, long_sleeves, looking_at_viewer, open_mouth, pencil_skirt, solo, white_shirt, black_skirt, black_thighhighs, green_vest, :d, bangs, yellow_bowtie, zettai_ryouiki, blush, collared_shirt, standing, dress_shirt, miniskirt, simple_background, full_body, white_background, sandals |
| 3 | 6 |  |  |  |  |  | 1girl, green_vest, white_shirt, yellow_bowtie, bangs, blush, simple_background, solo, upper_body, white_background, long_sleeves, open_mouth, :d, looking_at_viewer |
| 4 | 7 |  |  |  |  |  | 1girl, solo, hair_flower, open_mouth, dress, necklace, :d, looking_at_viewer, medium_breasts |
| 5 | 15 |  |  |  |  |  | 1girl, looking_at_viewer, solo, bracelet, navel, open_mouth, blush, cleavage, hair_flower, yellow_bikini, :d, frilled_bikini, medium_breasts, necklace, bangs, collarbone, front-tie_top, side-tie_bikini_bottom, simple_background, white_background, cowboy_shot, large_breasts |
| 6 | 16 |  |  |  |  |  | 1boy, 1girl, hetero, solo_focus, blush, penis, sweat, thighhighs, nipples, sex, vaginal, large_breasts, open_mouth, female_pubic_hair, nude, cowgirl_position, girl_on_top, navel, spread_legs, cum_in_pussy, mosaic_censoring, smile |
| 7 | 8 |  |  |  |  |  | 1girl, playboy_bunny, rabbit_ears, solo, detached_collar, bowtie, wrist_cuffs, blush, leotard, smile, fishnet_pantyhose, large_breasts, open_mouth, rabbit_tail |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | blush | solo | zettai_ryouiki | pencil_skirt | black_thighhighs | open_mouth | headset | smile | one_eye_closed | bow | looking_at_viewer | vest | long_sleeves | white_shirt | black_skirt | green_vest | :d | bangs | yellow_bowtie | collared_shirt | standing | dress_shirt | miniskirt | simple_background | full_body | white_background | sandals | upper_body | hair_flower | dress | necklace | medium_breasts | bracelet | navel | cleavage | yellow_bikini | frilled_bikini | collarbone | front-tie_top | side-tie_bikini_bottom | cowboy_shot | large_breasts | 1boy | hetero | solo_focus | penis | sweat | thighhighs | nipples | sex | vaginal | female_pubic_hair | nude | cowgirl_position | girl_on_top | spread_legs | cum_in_pussy | mosaic_censoring | playboy_bunny | rabbit_ears | detached_collar | bowtie | wrist_cuffs | leotard | fishnet_pantyhose | rabbit_tail |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------|:-------|:-----------------|:---------------|:-------------------|:-------------|:----------|:--------|:-----------------|:------|:--------------------|:-------|:---------------|:--------------|:--------------|:-------------|:-----|:--------|:----------------|:-----------------|:-----------|:--------------|:------------|:--------------------|:------------|:-------------------|:----------|:-------------|:--------------|:--------|:-----------|:-----------------|:-----------|:--------|:-----------|:----------------|:-----------------|:-------------|:----------------|:-------------------------|:--------------|:----------------|:-------|:---------|:-------------|:--------|:--------|:-------------|:----------|:------|:----------|:--------------------|:-------|:-------------------|:--------------|:--------------|:---------------|:-------------------|:----------------|:--------------|:------------------|:---------|:--------------|:----------|:--------------------|:--------------|
| 0 | 10 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 13 |  |  |  |  |  | X | X | X | X | X | X | X | | X | | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 11 |  |  |  |  |  | X | X | X | X | X | X | X | | | | | X | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 6 |  |  |  |  |  | X | X | X | | | | X | | | | | X | | X | X | | X | X | X | X | | | | | X | | X | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 7 |  |  |  |  |  | X | | X | | | | X | | | | | X | | | | | | X | | | | | | | | | | | | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 5 | 15 |  |  |  |  |  | X | X | X | | | | X | | | | | X | | | | | | X | X | | | | | | X | | X | | | X | | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | |
| 6 | 16 |  |  |  |  |  | X | X | | | | | X | | X | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | |
| 7 | 8 |  |  |  |  |  | X | X | X | | | | X | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X |
| CyberHarem/otonashi_kotori_theidolmster | [
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] | 2024-01-15T21:29:28+00:00 | {"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]} | 2024-01-15T23:06:23+00:00 | [] | [] | TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
| Dataset of otonashi\_kotori/音無小鳥/오토나시코토리 (THE iDOLM@STER)
=========================================================
This is the dataset of otonashi\_kotori/音無小鳥/오토나시코토리 (THE iDOLM@STER), containing 500 images and their tags.
The core tags of this character are 'green\_hair, short\_hair, mole\_under\_mouth, mole, hairband, brown\_eyes, breasts, red\_eyes, yellow\_hairband', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
List of Clusters
----------------
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
### Table Version
| [
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] | [
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] | [
44,
61,
5,
4
] | [
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
afd584eb0c9082578ccc864a939cb2b34046676b |
# Dataset of mizutani_eri/水谷絵理 (THE iDOLM@STER)
This is the dataset of mizutani_eri/水谷絵理 (THE iDOLM@STER), containing 255 images and their tags.
The core tags of this character are `blue_eyes, short_hair, hair_ornament, blue_hair, hairclip`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:---------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 255 | 155.78 MiB | [Download](https://huggingface.co/datasets/CyberHarem/mizutani_eri_theidolmster/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 255 | 127.55 MiB | [Download](https://huggingface.co/datasets/CyberHarem/mizutani_eri_theidolmster/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 415 | 210.79 MiB | [Download](https://huggingface.co/datasets/CyberHarem/mizutani_eri_theidolmster/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 255 | 151.94 MiB | [Download](https://huggingface.co/datasets/CyberHarem/mizutani_eri_theidolmster/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 415 | 243.54 MiB | [Download](https://huggingface.co/datasets/CyberHarem/mizutani_eri_theidolmster/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile

from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource

# download raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/mizutani_eri_theidolmster',
    repo_type='dataset',
    filename='dataset-raw.zip',
)

# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some recurring outfits may be mined from these clusters.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:----------------------------------------------------------------------------------------------------------------------|
| 0 | 20 |  |  |  |  |  | cute_&_girly_(idolmaster), 1girl, detached_sleeves, smile, bare_shoulders, blush, solo, brown_hair, open_mouth, skirt |
| 1 | 7 |  |  |  |  |  | 1girl, smile, solo |
| 2 | 6 |  |  |  |  |  | 1girl, 2girls, smile, black_hair |
| 3 | 10 |  |  |  |  |  | 1girl, skirt, solo, school_uniform, jacket, open_mouth |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | cute_&_girly_(idolmaster) | 1girl | detached_sleeves | smile | bare_shoulders | blush | solo | brown_hair | open_mouth | skirt | 2girls | black_hair | school_uniform | jacket |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:----------------------------|:--------|:-------------------|:--------|:-----------------|:--------|:-------|:-------------|:-------------|:--------|:---------|:-------------|:-----------------|:---------|
| 0 | 20 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | | | | |
| 1 | 7 |  |  |  |  |  | | X | | X | | | X | | | | | | | |
| 2 | 6 |  |  |  |  |  | | X | | X | | | | | | | X | X | | |
| 3 | 10 |  |  |  |  |  | | X | | | | | X | | X | X | | | X | X |
| CyberHarem/mizutani_eri_theidolmster | [
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] | 2024-01-15T21:42:33+00:00 | {"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]} | 2024-01-15T22:31:08+00:00 | [] | [] | TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
| Dataset of mizutani\_eri/水谷絵理 (THE iDOLM@STER)
==============================================
This is the dataset of mizutani\_eri/水谷絵理 (THE iDOLM@STER), containing 255 images and their tags.
The core tags of this character are 'blue\_eyes, short\_hair, hair\_ornament, blue\_hair, hairclip', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
List of Clusters
----------------
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
### Table Version
| [
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] | [
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] | [
44,
61,
5,
4
] | [
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
1689993391902f8d1474fb0e5f34469a45c496da |
# ARRAU Version 2.1
- Project: https://sites.google.com/view/arrau/corpus
- Data source: https://catalog.ldc.upenn.edu/LDC2013T22 (Private distribution)
## Details
Sub-corpora (original split):
1. Gnome (no split)
1. Pear Stories (no split)
1. RST DTreeBank (train, dev, test)
1. Trains 91 (no split)
1. Trains 93 (no split)
1. VPC (train, test) <- VPC is a subset of RST
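A minimal loading sketch follows. It is hypothetical in that the Hub configuration, split names, and the exact string values of the `corpus` and `split` columns (see Features below) are assumptions, but those two columns are what identify the sub-corpus and its original split.

```python
from datasets import load_dataset

ds = load_dataset("coref-data/arrau_raw")

# recover an original sub-corpus split by filtering on the per-example
# `corpus` and `split` columns; "rst"/"train" are assumed values
rst_train = ds["train"].filter(
    lambda ex: ex["corpus"] == "rst" and ex["split"] == "train"
)
print(len(rst_train))
```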
## Citation
```
@article{uryupina_artstein_bristot_cavicchio_delogu_rodriguez_poesio_2020,
title={Annotating a broad range of anaphoric phenomena, in a variety of genres: the ARRAU Corpus},
volume={26}, DOI={10.1017/S1351324919000056},
number={1},
journal={Natural Language Engineering},
publisher={Cambridge University Press},
author={Uryupina, Olga and Artstein, Ron and Bristot, Antonella and Cavicchio, Federica and Delogu, Francesca and Rodriguez, Kepa J. and Poesio, Massimo},
year={2020},
pages={95–128}
}
```
## Features
```python
{
    'chunk': [{
        'id': Value(dtype='string', id=None),
        'mmax_level': Value(dtype='string', id=None),
        'span': Value(dtype='string', id=None),
        'tag': Value(dtype='string', id=None),
    }],
    'coref': [{
        'ambiguity': Value(dtype='string', id=None),
        'category': Value(dtype='string', id=None),
        'category_2': Value(dtype='string', id=None),
        'comment': Value(dtype='string', id=None),
        'coref_set': Value(dtype='string', id=None),
        'gender': Value(dtype='string', id=None),
        'generic': Value(dtype='string', id=None),
        'generic_2': Value(dtype='string', id=None),
        'gram_fnc': Value(dtype='string', id=None),
        'id': Value(dtype='string', id=None),
        'min_ids': Value(dtype='string', id=None),
        'min_words': Value(dtype='string', id=None),
        'mmax_level': Value(dtype='string', id=None),
        'multiple_phrase_antecedents': Value(dtype='string', id=None),
        'multiple_phrase_antecedents_2': Value(dtype='string', id=None),
        'non_ref_type': Value(dtype='string', id=None),
        'non_ref_type_2': Value(dtype='string', id=None),
        'number': Value(dtype='string', id=None),
        'object': Value(dtype='string', id=None),
        'object_2': Value(dtype='string', id=None),
        'on_map': Value(dtype='string', id=None),
        'on_map_2': Value(dtype='string', id=None),
        'person': Value(dtype='string', id=None),
        'phrase_antecedent': Value(dtype='string', id=None),
        'phrase_antecedent_2': Value(dtype='string', id=None),
        'ref_type': Value(dtype='string', id=None),
        'ref_type_2': Value(dtype='string', id=None),
        'reference': Value(dtype='string', id=None),
        'related_object': Value(dtype='string', id=None),
        'related_object_2': Value(dtype='string', id=None),
        'related_phrase': Value(dtype='string', id=None),
        'related_phrase_2': Value(dtype='string', id=None),
        'related_rel': Value(dtype='string', id=None),
        'related_rel_2': Value(dtype='string', id=None),
        'segment_antecedent': Value(dtype='string', id=None),
        'segment_antecedent_2': Value(dtype='string', id=None),
        'single_phrase_antecedent': Value(dtype='string', id=None),
        'single_phrase_antecedent_2': Value(dtype='string', id=None),
        'span': Value(dtype='string', id=None),
        'type': Value(dtype='string', id=None),
    }],
    'corpus': Value(dtype='string', id=None),
    'document_name': Value(dtype='string', id=None),
    'enamex': [{
        'id': Value(dtype='string', id=None),
        'mmax_level': Value(dtype='string', id=None),
        'span': Value(dtype='string', id=None),
        'tag': Value(dtype='string', id=None),
    }],
    'markable': [{
        'id': Value(dtype='string', id=None),
        'isprenominal': Value(dtype='string', id=None),
        'label': Value(dtype='string', id=None),
        'lemmata': Value(dtype='string', id=None),
        'min_ids': Value(dtype='string', id=None),
        'mmax_level': Value(dtype='string', id=None),
        'pos': Value(dtype='string', id=None),
        'sentenceid': Value(dtype='string', id=None),
        'span': Value(dtype='string', id=None),
        'type': Value(dtype='string', id=None),
    }],
    'morph': [{
        'id': Value(dtype='string', id=None),
        'lemma': Value(dtype='string', id=None),
        'mmax_level': Value(dtype='string', id=None),
        'span': Value(dtype='string', id=None),
    }],
    'parse': [{
        'id': Value(dtype='string', id=None),
        'mmax_level': Value(dtype='string', id=None),
        'span': Value(dtype='string', id=None),
        'tag': Value(dtype='string', id=None),
    }],
    'phrase': [{
        'ambiguity': Value(dtype='string', id=None),
        'category': Value(dtype='string', id=None),
        'category_2': Value(dtype='string', id=None),
        'comment': Value(dtype='string', id=None),
        'gender': Value(dtype='string', id=None),
        'generic': Value(dtype='string', id=None),
        'generic_2': Value(dtype='string', id=None),
        'gram_fnc': Value(dtype='string', id=None),
        'id': Value(dtype='string', id=None),
        'min_ids': Value(dtype='string', id=None),
        'min_words': Value(dtype='string', id=None),
        'mmax_level': Value(dtype='string', id=None),
        'multiple_phrase_antecedents': Value(dtype='string', id=None),
        'multiple_phrase_antecedents_2': Value(dtype='string', id=None),
        'non_ref_type': Value(dtype='string', id=None),
        'non_ref_type_2': Value(dtype='string', id=None),
        'number': Value(dtype='string', id=None),
        'object': Value(dtype='string', id=None),
        'object_2': Value(dtype='string', id=None),
        'on_map': Value(dtype='string', id=None),
        'on_map_2': Value(dtype='string', id=None),
        'person': Value(dtype='string', id=None),
        'phrase_antecedent': Value(dtype='string', id=None),
        'phrase_antecedent_2': Value(dtype='string', id=None),
        'ref_type': Value(dtype='string', id=None),
        'ref_type_2': Value(dtype='string', id=None),
        'reference': Value(dtype='string', id=None),
        'related_object': Value(dtype='string', id=None),
        'related_object_2': Value(dtype='string', id=None),
        'related_phrase': Value(dtype='string', id=None),
        'related_phrase_2': Value(dtype='string', id=None),
        'related_rel': Value(dtype='string', id=None),
        'related_rel_2': Value(dtype='string', id=None),
        'segment_antecedent': Value(dtype='string', id=None),
        'segment_antecedent_2': Value(dtype='string', id=None),
        'single_phrase_antecedent': Value(dtype='string', id=None),
        'single_phrase_antecedent_2': Value(dtype='string', id=None),
        'span': Value(dtype='string', id=None),
        'type': Value(dtype='string', id=None),
    }],
    'pos': [{
        'id': Value(dtype='string', id=None),
        'mmax_level': Value(dtype='string', id=None),
        'span': Value(dtype='string', id=None),
        'tag': Value(dtype='string', id=None),
    }],
    'sentence': [{
        'id': Value(dtype='string', id=None),
        'mmax_level': Value(dtype='string', id=None),
        'orderid': Value(dtype='string', id=None),
        'span': Value(dtype='string', id=None),
    }],
    'split': Value(dtype='string', id=None),
    'unit': [{
        'finite': Value(dtype='string', id=None),
        'id': Value(dtype='string', id=None),
        'mmax_level': Value(dtype='string', id=None),
        'span': Value(dtype='string', id=None),
        'subject': Value(dtype='string', id=None),
        'utype': Value(dtype='string', id=None),
        'verbed': Value(dtype='string', id=None),
    }],
    'utterance': [{
        'id': Value(dtype='string', id=None),
        'mmax_level': Value(dtype='string', id=None),
        'span': Value(dtype='string', id=None),
        'type': Value(dtype='string', id=None),
    }],
    'words': [{
        'id': Value(dtype='string', id=None),
        'text': Value(dtype='string', id=None),
    }],
}
```
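For orientation, here is a quick inspection sketch reusing `ds` from the loading snippet above; since `words` is a flat token list, whitespace in the reconstructed text is only approximate.

```python
ex = ds["train"][0]

# document identity
print(ex["document_name"], ex["corpus"], ex["split"])

# rebuild a rough surface text from the token list
text = " ".join(w["text"] for w in ex["words"])
print(text[:200])
```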
| coref-data/arrau_raw | [
"license:other",
"region:us"
] | 2024-01-15T21:42:41+00:00 | {"license": "other"} | 2024-01-21T05:46:07+00:00 | [] | [] | TAGS
#license-other #region-us
|
# ARRAU Version 2.1
- Project: URL
- Data source: URL (Private distribution)
## Details
Sub-corpora (original split):
1. Gnome (no split)
1. Pear Stories (no split)
1. RST DTreeBank (train, dev, test)
1. Trains 91 (no split)
1. Trains 93 (no split)
1. VPC (train, test) <- VPC is a subset of RST
## Features
| [
"# ARRAU Version 2.1\n\n- Project: URL\n- Data source: URL (Private distribution)",
"## Details\n\nSub-corpora (original split):\n1. Gnome (no split)\n1. Pear Stories (no split)\n1. RST DTreeBank (train, dev, test)\n1. Trains 91 (no split)\n1. Trains 93 (no split)\n1. VPC (train, test) <- VPC is a subset of RST",
"## Features"
] | [
"TAGS\n#license-other #region-us \n",
"# ARRAU Version 2.1\n\n- Project: URL\n- Data source: URL (Private distribution)",
"## Details\n\nSub-corpora (original split):\n1. Gnome (no split)\n1. Pear Stories (no split)\n1. RST DTreeBank (train, dev, test)\n1. Trains 91 (no split)\n1. Trains 93 (no split)\n1. VPC (train, test) <- VPC is a subset of RST",
"## Features"
] | [
11,
20,
75,
3
] | [
"passage: TAGS\n#license-other #region-us \n# ARRAU Version 2.1\n\n- Project: URL\n- Data source: URL (Private distribution)## Details\n\nSub-corpora (original split):\n1. Gnome (no split)\n1. Pear Stories (no split)\n1. RST DTreeBank (train, dev, test)\n1. Trains 91 (no split)\n1. Trains 93 (no split)\n1. VPC (train, test) <- VPC is a subset of RST## Features"
] |
33742de72d6859ce5c97018f9138335abbfab40b | # Dataset Card for "oasst2_dpo_pairs"
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Usage](#usage)
- [Languages](#languages)
- [Dataset Creation](#dataset-creation)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
The dataset was transformed into the structure required for DPO training and can be used with the [Alignment Handbook](https://github.com/huggingface/alignment-handbook/tree/main).
The structure mostly follows the same scheme as [HuggingFaceH4/ultrafeedback_binarized](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized).
### Usage
To load the dataset, run:
```python
from datasets import load_dataset
ds = load_dataset("alexredna/oasst2_dpo_pairs")
```
### Languages
The base dataset was filtered to contain only German, English, Spanish, and French conversations.
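Since every record also carries a `lang` column, a single language can be selected after loading, e.g.:

```python
from datasets import load_dataset

ds = load_dataset("alexredna/oasst2_dpo_pairs")

# keep only the German conversations
ds_de = ds["train"].filter(lambda ex: ex["lang"] == "de")
print(len(ds_de))
```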
## Dataset Creation
I used the following script to convert the oasst2 dataset:
```python
from datasets import Dataset, load_dataset
import pandas as pd


def build_tree(df):
    tree = {}
    message_dict = df.set_index('message_id').to_dict(orient='index')
    for message_id, message in message_dict.items():
        parent_id = message['parent_id']
        if parent_id is None or pd.isna(parent_id):
            # a message without a parent is the root of a conversation tree
            tree[message_id] = message
            tree[message_id]['replies'] = []
        else:
            if parent_id in message_dict:
                if 'replies' not in message_dict[parent_id]:
                    message_dict[parent_id]['replies'] = []
                message_dict[parent_id]['replies'].append(message)
    return tree


def convert_for_dpo(entry):
    prompt_id = entry["message_tree_id"]
    prompt = entry["text"]
    chosen = []
    rejected = []

    chosen_reply = entry["replies"][0]
    rejected_reply = entry["replies"][1]

    # higher rank value means a worse reply, so invert it into a score
    score_chosen = len(entry["replies"]) - chosen_reply["rank"]
    score_rejected = len(entry["replies"]) - rejected_reply["rank"]

    chosen.append({"role": "user", "content": prompt})
    chosen.append({"role": "assistant", "content": chosen_reply["text"]})

    rejected.append({"role": "user", "content": prompt})
    rejected.append({"role": "assistant", "content": rejected_reply["text"]})

    return {"prompt_id": prompt_id, "prompt": prompt, "messages": chosen,
            "chosen": chosen, "rejected": rejected,
            "score_chosen": score_chosen, "score_rejected": score_rejected,
            "lang": entry["lang"]}


oasst2 = load_dataset("OpenAssistant/oasst2")
df = oasst2["train"].to_pandas()
df_multi = df.loc[df['lang'].isin(['en', 'de', 'es', 'fr'])]
tree = build_tree(df_multi)

transformed_for_dpo = []
for row in tree.values():
    try:
        transformed_for_dpo.append(convert_for_dpo(row))
    except (KeyError, IndexError, TypeError):
        # skip roots without at least two ranked replies
        print("row does not contain chosen or rejected values")

df = pd.DataFrame.from_records(transformed_for_dpo)
ds = Dataset.from_pandas(df)
ds.push_to_hub("oasst2_dpo_pairs", token="<token>")
```
### Licensing Information
[Apache-2.0](https://huggingface.co/datasets?license=license%3Aapache-2.0)
### Citation Information
This dataset was converted from [OpenAssistant/oasst2](https://huggingface.co/datasets/OpenAssistant/oasst2)
| alexredna/oasst2_dpo_pairs | [
"language:en",
"language:de",
"language:es",
"language:fr",
"license:apache-2.0",
"region:us"
] | 2024-01-15T21:44:28+00:00 | {"language": ["en", "de", "es", "fr"], "license": "apache-2.0", "dataset_info": {"features": [{"name": "prompt_id", "dtype": "string"}, {"name": "prompt", "dtype": "string"}, {"name": "messages", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}, {"name": "chosen", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}, {"name": "rejected", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}, {"name": "score_chosen", "dtype": "float64"}, {"name": "score_rejected", "dtype": "float64"}, {"name": "lang", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 38577779, "num_examples": 10046}], "download_size": 23169558, "dataset_size": 38577779}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2024-01-18T20:44:44+00:00 | [] | [
"en",
"de",
"es",
"fr"
] | TAGS
#language-English #language-German #language-Spanish #language-French #license-apache-2.0 #region-us
| # Dataset Card for "oasst2_dpo_pairs"
## Table of Contents
- Table of Contents
- Dataset Description
- Usage
- Languages
- Dataset Creation
- Additional Information
- Licensing Information
- Citation Information
## Dataset Description
Dataset transferred into the structure for trainig with DPO and can be used with the Alignment Handbook
The structure follows mostly the same scheme as HuggingFaceH4/ultrafeedback_binarized
### Usage
To load the dataset, run:
### Languages
Base dataset filtered to only contain: German, English, Spanish and Frensh conversations.
## Dataset Creation
I used the following script for converting the oaast2 dataset:
### Licensing Information
Apache-2.0
This dataset was converted from OpenAssistant/oasst2
| [
"# Dataset Card for \"oasst2_dpo_pairs\"",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Usage\n - Languages\n- Dataset Creation\n- Additional Information\n - Licensing Information\n - Citation Information",
"## Dataset Description\nDataset transferred into the structure for trainig with DPO and can be used with the Alignment Handbook\nThe structure follows mostly the same scheme as HuggingFaceH4/ultrafeedback_binarized",
"### Usage\nTo load the dataset, run:",
"### Languages\nBase dataset filtered to only contain: German, English, Spanish and Frensh conversations.",
"## Dataset Creation\nI used the following script for converting the oaast2 dataset:",
"### Licensing Information\n\nApache-2.0\n\n\n\nThis dataset was converted from OpenAssistant/oasst2"
] | [
"TAGS\n#language-English #language-German #language-Spanish #language-French #license-apache-2.0 #region-us \n",
"# Dataset Card for \"oasst2_dpo_pairs\"",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Usage\n - Languages\n- Dataset Creation\n- Additional Information\n - Licensing Information\n - Citation Information",
"## Dataset Description\nDataset transferred into the structure for trainig with DPO and can be used with the Alignment Handbook\nThe structure follows mostly the same scheme as HuggingFaceH4/ultrafeedback_binarized",
"### Usage\nTo load the dataset, run:",
"### Languages\nBase dataset filtered to only contain: German, English, Spanish and Frensh conversations.",
"## Dataset Creation\nI used the following script for converting the oaast2 dataset:",
"### Licensing Information\n\nApache-2.0\n\n\n\nThis dataset was converted from OpenAssistant/oasst2"
] | [
33,
17,
39,
52,
12,
24,
21,
26
] | [
"passage: TAGS\n#language-English #language-German #language-Spanish #language-French #license-apache-2.0 #region-us \n# Dataset Card for \"oasst2_dpo_pairs\"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Usage\n - Languages\n- Dataset Creation\n- Additional Information\n - Licensing Information\n - Citation Information## Dataset Description\nDataset transferred into the structure for trainig with DPO and can be used with the Alignment Handbook\nThe structure follows mostly the same scheme as HuggingFaceH4/ultrafeedback_binarized### Usage\nTo load the dataset, run:### Languages\nBase dataset filtered to only contain: German, English, Spanish and Frensh conversations.## Dataset Creation\nI used the following script for converting the oaast2 dataset:### Licensing Information\n\nApache-2.0\n\n\n\nThis dataset was converted from OpenAssistant/oasst2"
] |
25a0c5ab86e796db79598c2045f654f9453065a2 | # Dataset Card for "3CLPro"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | jarod0411/3CLPro | [
"region:us"
] | 2024-01-15T21:57:19+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "DockingScore", "dtype": "float64"}, {"name": "smiles", "dtype": "string"}, {"name": "scaffold_smiles", "dtype": "string"}, {"name": "selfies", "dtype": "string"}, {"name": "scaffold_selfies", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 370493345.0, "num_examples": 798405}, {"name": "validation", "num_bytes": 92645720.0, "num_examples": 199607}], "download_size": 157846477, "dataset_size": 463139065.0}} | 2024-01-15T21:58:02+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "3CLPro"
More Information needed | [
"# Dataset Card for \"3CLPro\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"3CLPro\"\n\nMore Information needed"
] | [
6,
13
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"3CLPro\"\n\nMore Information needed"
] |
67bdc29ce108c4ad43548c548827288c4d5395fe | # Dataset Card for "processed_control_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tuanmanh28/processed_control_dataset | [
"region:us"
] | 2024-01-15T21:59:18+00:00 | {"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "text", "dtype": "string"}, {"name": "input_values", "sequence": "float32"}, {"name": "input_length", "dtype": "int64"}, {"name": "labels", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 980400296.0, "num_examples": 3893}, {"name": "test", "num_bytes": 246218884.0, "num_examples": 974}], "download_size": 1029622913, "dataset_size": 1226619180.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}]} | 2024-01-15T22:01:46+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "processed_control_dataset"
More Information needed | [
"# Dataset Card for \"processed_control_dataset\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"processed_control_dataset\"\n\nMore Information needed"
] | [
6,
17
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"processed_control_dataset\"\n\nMore Information needed"
] |
68ec74126d657e6eafd104cfdedba41eb8dfdefb |
# Dataset of i_14/伊14 (Kantai Collection)
This is the dataset of i_14/伊14 (Kantai Collection), containing 17 images and their tags.
The core tags of this character are `brown_eyes, hair_between_eyes, short_hair, asymmetrical_hair, black_hair, breasts, hat, small_breasts, framed_breasts, headphones`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:----------|:-----------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 17 | 29.02 MiB | [Download](https://huggingface.co/datasets/CyberHarem/i_14_kantaicollection/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 17 | 15.85 MiB | [Download](https://huggingface.co/datasets/CyberHarem/i_14_kantaicollection/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 49 | 38.88 MiB | [Download](https://huggingface.co/datasets/CyberHarem/i_14_kantaicollection/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 17 | 26.29 MiB | [Download](https://huggingface.co/datasets/CyberHarem/i_14_kantaicollection/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 49 | 57.66 MiB | [Download](https://huggingface.co/datasets/CyberHarem/i_14_kantaicollection/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile

from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource

# download raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/i_14_kantaicollection',
    repo_type='dataset',
    filename='dataset-raw.zip',
)

# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some recurring outfits may be mined from these clusters.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 6 |  |  |  |  |  | 1girl, sailor_collar, solo, open_mouth, school_swimsuit, blush, looking_at_viewer, medium_breasts, one-piece_swimsuit, sitting, navel, nipples, partially_fingerless_gloves |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | sailor_collar | solo | open_mouth | school_swimsuit | blush | looking_at_viewer | medium_breasts | one-piece_swimsuit | sitting | navel | nipples | partially_fingerless_gloves |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:----------------|:-------|:-------------|:------------------|:--------|:--------------------|:-----------------|:---------------------|:----------|:--------|:----------|:------------------------------|
| 0 | 6 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X |
| CyberHarem/i_14_kantaicollection | [
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] | 2024-01-15T22:04:18+00:00 | {"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]} | 2024-01-15T22:08:26+00:00 | [] | [] | TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
| Dataset of i\_14/伊14 (Kantai Collection)
========================================
This is the dataset of i\_14/伊14 (Kantai Collection), containing 17 images and their tags.
The core tags of this character are 'brown\_eyes, hair\_between\_eyes, short\_hair, asymmetrical\_hair, black\_hair, breasts, hat, small\_breasts, framed\_breasts, headphones', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
List of Clusters
----------------
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
### Table Version
| [
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] | [
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] | [
44,
61,
5,
4
] | [
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
5714182b8258641c214961271a4829fc4f041149 | # Dataset Card for "transfer_matrix_v3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | zhan1993/transfer_matrix_v3 | [
"region:us"
] | 2024-01-15T22:06:35+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "expert_name", "dtype": "string"}, {"name": "task_eval_on", "dtype": "string"}, {"name": "score", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 5191944, "num_examples": 68989}], "download_size": 1047085, "dataset_size": 5191944}} | 2024-01-15T22:06:38+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "transfer_matrix_v3"
More Information needed | [
"# Dataset Card for \"transfer_matrix_v3\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"transfer_matrix_v3\"\n\nMore Information needed"
] | [
6,
17
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"transfer_matrix_v3\"\n\nMore Information needed"
] |
4442b853a5daa683f9c709b00fe1c4eeed83b137 |
# Dataset of nishijima_kai/西島櫂 (THE iDOLM@STER)
This is the dataset of nishijima_kai/西島櫂 (THE iDOLM@STER), containing 43 images and their tags.
The core tags of this character are `short_hair, ahoge, brown_eyes, brown_hair, breasts, medium_breasts`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:----------|:----------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 43 | 47.11 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nishijima_kai_theidolmster/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 43 | 33.12 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nishijima_kai_theidolmster/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 100 | 65.34 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nishijima_kai_theidolmster/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 43 | 45.07 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nishijima_kai_theidolmster/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 100 | 84.03 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nishijima_kai_theidolmster/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/nishijima_kai_theidolmster',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
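If you only need items carrying a particular tag, you can filter while iterating the source. A minimal sketch, assuming `item.meta['tags']` supports membership tests by tag name (as the `print` above suggests; adjust for your waifuc version):

```python
# minimal sketch: keep only items carrying a tag of interest
# (assumes membership tests by tag name work on item.meta['tags'])
from waifuc.source import LocalSource

source = LocalSource('dataset_dir')
smiling = [item for item in source if 'smile' in item.meta['tags']]
print(f'{len(smiling)} images tagged smile')
```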
## List of Clusters
List of tag clustering results; maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 11 |  |  |  |  |  | 1girl, solo, smile, midriff, character_name, eyelashes, flipped_hair, card_(medium), navel, open_mouth, sun_symbol, visor_cap, cleavage, bike_shorts, orange_background, sandals, sparkle |
| 1 | 7 |  |  |  |  |  | 1girl, looking_at_viewer, blush, large_breasts, simple_background, solo, white_background, smile, cleavage, bangs, collarbone, competition_swimsuit, navel, shirt |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | solo | smile | midriff | character_name | eyelashes | flipped_hair | card_(medium) | navel | open_mouth | sun_symbol | visor_cap | cleavage | bike_shorts | orange_background | sandals | sparkle | looking_at_viewer | blush | large_breasts | simple_background | white_background | bangs | collarbone | competition_swimsuit | shirt |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------|:--------|:----------|:-----------------|:------------|:---------------|:----------------|:--------|:-------------|:-------------|:------------|:-----------|:--------------|:--------------------|:----------|:----------|:--------------------|:--------|:----------------|:--------------------|:-------------------|:--------|:-------------|:-----------------------|:--------|
| 0 | 11 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | |
| 1 | 7 |  |  |  |  |  | X | X | X | | | | | | X | | | | X | | | | | X | X | X | X | X | X | X | X | X |
| CyberHarem/nishijima_kai_theidolmster | [
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] | 2024-01-15T22:10:06+00:00 | {"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]} | 2024-01-15T22:20:12+00:00 | [] | [] | TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
| Dataset of nishijima\_kai/西島櫂 (THE iDOLM@STER)
==============================================
This is the dataset of nishijima\_kai/西島櫂 (THE iDOLM@STER), containing 43 images and their tags.
The core tags of this character are 'short\_hair, ahoge, brown\_eyes, brown\_hair, breasts, medium\_breasts', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
List of Clusters
----------------
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
### Table Version
| [
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] | [
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] | [
44,
61,
5,
4
] | [
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
2cdf4681bcaf8c2cd4165b07282771429e2eb03b |
# Dataset of hidaka_ai (THE iDOLM@STER)
This is the dataset of hidaka_ai (THE iDOLM@STER), containing 248 images and their tags.
The core tags of this character are `brown_hair, short_hair, brown_eyes, antenna_hair`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 248 | 154.64 MiB | [Download](https://huggingface.co/datasets/CyberHarem/hidaka_ai_theidolmster/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 248 | 120.35 MiB | [Download](https://huggingface.co/datasets/CyberHarem/hidaka_ai_theidolmster/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 399 | 200.56 MiB | [Download](https://huggingface.co/datasets/CyberHarem/hidaka_ai_theidolmster/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 248 | 146.09 MiB | [Download](https://huggingface.co/datasets/CyberHarem/hidaka_ai_theidolmster/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 399 | 237.78 MiB | [Download](https://huggingface.co/datasets/CyberHarem/hidaka_ai_theidolmster/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/hidaka_ai_theidolmster',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 42 |  |  |  |  |  | 1girl, smile, open_mouth, cute_&_girly_(idolmaster), solo, blush, gloves |
| 1 | 7 |  |  |  |  |  | 1girl, hoodie, solo, open_mouth, smile |
| 2 | 7 |  |  |  |  |  | 1girl, cleavage, solo, medium_breasts, navel, smile, pink_bikini, side-tie_bikini_bottom |
| 3 | 7 |  |  |  |  |  | 1girl, hetero, nipples, penis, solo_focus, 1boy, blush, medium_breasts, open_mouth, sex, vaginal, cum_in_pussy, bar_censor, girl_on_top, mosaic_censoring, nude, straddling, tears |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | smile | open_mouth | cute_&_girly_(idolmaster) | solo | blush | gloves | hoodie | cleavage | medium_breasts | navel | pink_bikini | side-tie_bikini_bottom | hetero | nipples | penis | solo_focus | 1boy | sex | vaginal | cum_in_pussy | bar_censor | girl_on_top | mosaic_censoring | nude | straddling | tears |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------|:-------------|:----------------------------|:-------|:--------|:---------|:---------|:-----------|:-----------------|:--------|:--------------|:-------------------------|:---------|:----------|:--------|:-------------|:-------|:------|:----------|:---------------|:-------------|:--------------|:-------------------|:-------|:-------------|:--------|
| 0 | 42 |  |  |  |  |  | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | |
| 1 | 7 |  |  |  |  |  | X | X | X | | X | | | X | | | | | | | | | | | | | | | | | | | |
| 2 | 7 |  |  |  |  |  | X | X | | | X | | | | X | X | X | X | X | | | | | | | | | | | | | | |
| 3 | 7 |  |  |  |  |  | X | | X | | | X | | | | X | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
| CyberHarem/hidaka_ai_theidolmster | [
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] | 2024-01-15T22:33:07+00:00 | {"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]} | 2024-01-15T23:39:14+00:00 | [] | [] | TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
| Dataset of hidaka\_ai (THE iDOLM@STER)
======================================
This is the dataset of hidaka\_ai (THE iDOLM@STER), containing 248 images and their tags.
The core tags of this character are 'brown\_hair, short\_hair, brown\_eyes, antenna\_hair', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
List of Clusters
----------------
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
### Table Version
| [
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] | [
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] | [
44,
61,
5,
4
] | [
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
381134bb8b0326bf8a5d1cf900a5578d9cc4a149 |
# Dataset of sakurai_yumeko (THE iDOLM@STER)
This is the dataset of sakurai_yumeko (THE iDOLM@STER), containing 69 images and their tags.
The core tags of this character are `long_hair, green_eyes, brown_hair, side_ponytail, breasts`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:----------|:-----------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 69 | 35.23 MiB | [Download](https://huggingface.co/datasets/CyberHarem/sakurai_yumeko_theidolmster/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 69 | 30.48 MiB | [Download](https://huggingface.co/datasets/CyberHarem/sakurai_yumeko_theidolmster/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 114 | 50.37 MiB | [Download](https://huggingface.co/datasets/CyberHarem/sakurai_yumeko_theidolmster/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 69 | 34.78 MiB | [Download](https://huggingface.co/datasets/CyberHarem/sakurai_yumeko_theidolmster/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 114 | 55.82 MiB | [Download](https://huggingface.co/datasets/CyberHarem/sakurai_yumeko_theidolmster/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/sakurai_yumeko_theidolmster',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 24 |  |  |  |  |  | 1girl, detached_sleeves, bare_shoulders, blush, solo, midriff, star_(symbol), necklace, navel, smile, skirt, striped, clothes_around_waist, open_mouth |
| 1 | 8 |  |  |  |  |  | boots, midriff, thighhighs, navel, skirt, 3girls, clothes_around_waist, crop_top, smile, 1girl, 2girls, detached_sleeves, open_mouth, star_(symbol), striped |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | detached_sleeves | bare_shoulders | blush | solo | midriff | star_(symbol) | necklace | navel | smile | skirt | striped | clothes_around_waist | open_mouth | boots | thighhighs | 3girls | crop_top | 2girls |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------------------|:-----------------|:--------|:-------|:----------|:----------------|:-----------|:--------|:--------|:--------|:----------|:-----------------------|:-------------|:--------|:-------------|:---------|:-----------|:---------|
| 0 | 24 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | |
| 1 | 8 |  |  |  |  |  | X | X | | | | X | X | | X | X | X | X | X | X | X | X | X | X | X |
| CyberHarem/sakurai_yumeko_theidolmster | [
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] | 2024-01-15T22:33:18+00:00 | {"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]} | 2024-01-15T22:45:35+00:00 | [] | [] | TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
| Dataset of sakurai\_yumeko (THE iDOLM@STER)
===========================================
This is the dataset of sakurai\_yumeko (THE iDOLM@STER), containing 69 images and their tags.
The core tags of this character are 'long\_hair, green\_eyes, brown\_hair, side\_ponytail, breasts', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
List of Clusters
----------------
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
### Table Version
| [
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] | [
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] | [
44,
61,
5,
4
] | [
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
ba31959540aaf9237582dff6d04001d141eb5c1b | # Dataset Card for "running-science"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | florentgbelidji/running-science | [
"region:us"
] | 2024-01-15T22:34:46+00:00 | {"dataset_info": {"features": [{"name": "filename", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "metadata", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2411893, "num_examples": 196}], "download_size": 1288348, "dataset_size": 2411893}} | 2024-01-15T22:54:27+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "running-science"
More Information needed | [
"# Dataset Card for \"running-science\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"running-science\"\n\nMore Information needed"
] | [
6,
14
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"running-science\"\n\nMore Information needed"
] |
de17280d0cbf597a7d15be25b88f3b8957ee67b8 |
# Dataset of okamoto_manami (THE iDOLM@STER)
This is the dataset of okamoto_manami (THE iDOLM@STER), containing 12 images and their tags.
The core tags of this character are `ahoge, glasses, green_eyes, green_hair, breasts`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:---------|:-----------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 12 | 5.16 MiB | [Download](https://huggingface.co/datasets/CyberHarem/okamoto_manami_theidolmster/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 12 | 4.23 MiB | [Download](https://huggingface.co/datasets/CyberHarem/okamoto_manami_theidolmster/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 19 | 6.69 MiB | [Download](https://huggingface.co/datasets/CyberHarem/okamoto_manami_theidolmster/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 12 | 5.01 MiB | [Download](https://huggingface.co/datasets/CyberHarem/okamoto_manami_theidolmster/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 19 | 8.03 MiB | [Download](https://huggingface.co/datasets/CyberHarem/okamoto_manami_theidolmster/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/okamoto_manami_theidolmster',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------------|
| 0 | 12 |  |  |  |  |  | 1girl, solo, open_mouth, smile, blush |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | solo | open_mouth | smile | blush |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------|:-------------|:--------|:--------|
| 0 | 12 |  |  |  |  |  | X | X | X | X | X |
| CyberHarem/okamoto_manami_theidolmster | [
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] | 2024-01-15T22:40:41+00:00 | {"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]} | 2024-01-15T22:42:52+00:00 | [] | [] | TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
| Dataset of okamoto\_manami (THE iDOLM@STER)
===========================================
This is the dataset of okamoto\_manami (THE iDOLM@STER), containing 12 images and their tags.
The core tags of this character are 'ahoge, glasses, green\_eyes, green\_hair, breasts', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
List of Clusters
----------------
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
### Table Version
| [
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] | [
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] | [
44,
61,
5,
4
] | [
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
0021b1425c3d4606dc8e3c45b7ccd69574973e26 |
# Dataset of hidaka_mai (THE iDOLM@STER)
This is the dataset of hidaka_mai (THE iDOLM@STER), containing 21 images and their tags.
The core tags of this character are `brown_hair, long_hair, ponytail, brown_eyes`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:----------|:-------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 21 | 12.05 MiB | [Download](https://huggingface.co/datasets/CyberHarem/hidaka_mai_theidolmster/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 21 | 10.18 MiB | [Download](https://huggingface.co/datasets/CyberHarem/hidaka_mai_theidolmster/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 38 | 17.11 MiB | [Download](https://huggingface.co/datasets/CyberHarem/hidaka_mai_theidolmster/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 21 | 11.58 MiB | [Download](https://huggingface.co/datasets/CyberHarem/hidaka_mai_theidolmster/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 38 | 19.01 MiB | [Download](https://huggingface.co/datasets/CyberHarem/hidaka_mai_theidolmster/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/hidaka_mai_theidolmster',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-------------------|
| 0 | 21 |  |  |  |  |  | 1girl, smile, solo |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | smile | solo |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------|:-------|
| 0 | 21 |  |  |  |  |  | X | X | X |
| CyberHarem/hidaka_mai_theidolmster | [
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] | 2024-01-15T22:40:47+00:00 | {"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]} | 2024-01-15T22:59:12+00:00 | [] | [] | TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
| Dataset of hidaka\_mai (THE iDOLM@STER)
=======================================
This is the dataset of hidaka\_mai (THE iDOLM@STER), containing 21 images and their tags.
The core tags of this character are 'brown\_hair, long\_hair, ponytail, brown\_eyes', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
List of Clusters
----------------
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
### Table Version
| [
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] | [
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] | [
44,
61,
5,
4
] | [
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
76adb23a46cd1bde1e7028503ad4e4431ca94cbb |
# Dataset of ozaki_reiko (THE iDOLM@STER)
This is the dataset of ozaki_reiko (THE iDOLM@STER), containing 26 images and their tags.
The core tags of this character are `long_hair, brown_eyes, breasts, brown_hair, medium_breasts`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:----------|:--------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 26 | 9.32 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ozaki_reiko_theidolmster/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 26 | 8.56 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ozaki_reiko_theidolmster/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 38 | 13.14 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ozaki_reiko_theidolmster/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 26 | 9.19 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ozaki_reiko_theidolmster/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 38 | 13.89 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ozaki_reiko_theidolmster/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/ozaki_reiko_theidolmster',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-----------------------------------------------------------|
| 0 | 18 |  |  |  |  |  | 1girl, blush, smile, solo, jacket, 2girls, skirt, cleavage |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | blush | smile | solo | jacket | 2girls | skirt | cleavage |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------|:--------|:-------|:---------|:---------|:--------|:-----------|
| 0 | 18 |  |  |  |  |  | X | X | X | X | X | X | X | X |
| CyberHarem/ozaki_reiko_theidolmster | [
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] | 2024-01-15T22:41:51+00:00 | {"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]} | 2024-01-15T22:46:27+00:00 | [] | [] | TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
| Dataset of ozaki\_reiko (THE iDOLM@STER)
========================================
This is the dataset of ozaki\_reiko (THE iDOLM@STER), containing 26 images and their tags.
The core tags of this character are 'long\_hair, brown\_eyes, breasts, brown\_hair, medium\_breasts', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
List of Clusters
----------------
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
### Table Version
| [
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] | [
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] | [
44,
61,
5,
4
] | [
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
f19142c84059e241d61c5e5074aadcac3596c532 | Reformatted version of this dataset: https://huggingface.co/datasets/jihyoung/ConversationChronicles | PocketDoc/ConversationChronicles-sharegpt | [
"region:us"
] | 2024-01-15T22:51:47+00:00 | {} | 2024-01-15T23:13:24+00:00 | [] | [] | TAGS
#region-us
| Reformatted version of this dataset: URL | [] | [
"TAGS\n#region-us \n"
] | [
6
] | [
"passage: TAGS\n#region-us \n"
] |
d236177ba38102bda8aebae98fac458690b47359 |
# Dataset Card for Evaluation run of brucethemoose/Yi-34B-200K-DARE-megamerge-v8
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [brucethemoose/Yi-34B-200K-DARE-megamerge-v8](https://huggingface.co/brucethemoose/Yi-34B-200K-DARE-megamerge-v8) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_brucethemoose__Yi-34B-200K-DARE-megamerge-v8",
"harness_winogrande_5",
split="train")
```
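To work with the aggregated metrics rather than a single task, the same pattern should apply to the "results" configuration mentioned above — a sketch under that assumption, with "train" pointing to the latest run as described:

```python
from datasets import load_dataset

# sketch: load the aggregated "results" configuration; "train" is assumed
# to point at the latest run, as described above
results = load_dataset(
    "open-llm-leaderboard/details_brucethemoose__Yi-34B-200K-DARE-megamerge-v8",
    "results",
    split="train",
)
```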
## Latest results
These are the [latest results from run 2024-01-15T22:55:33.545655](https://huggingface.co/datasets/open-llm-leaderboard/details_brucethemoose__Yi-34B-200K-DARE-megamerge-v8/blob/main/results_2024-01-15T22-55-33.545655.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.7655125295847778,
"acc_stderr": 0.02809215125212353,
"acc_norm": 0.7702186096680189,
"acc_norm_stderr": 0.02861631002408439,
"mc1": 0.40636474908200737,
"mc1_stderr": 0.0171938358120939,
"mc2": 0.5630835911486544,
"mc2_stderr": 0.01535913374944441
},
"harness|arc:challenge|25": {
"acc": 0.6501706484641638,
"acc_stderr": 0.013936809212158296,
"acc_norm": 0.6774744027303754,
"acc_norm_stderr": 0.013659980894277371
},
"harness|hellaswag|10": {
"acc": 0.6590320653256323,
"acc_stderr": 0.004730658073041563,
"acc_norm": 0.8605855407289384,
"acc_norm_stderr": 0.0034567060380547555
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.47,
"acc_stderr": 0.05016135580465919,
"acc_norm": 0.47,
"acc_norm_stderr": 0.05016135580465919
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.7185185185185186,
"acc_stderr": 0.038850042458002526,
"acc_norm": 0.7185185185185186,
"acc_norm_stderr": 0.038850042458002526
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.8881578947368421,
"acc_stderr": 0.02564834125169361,
"acc_norm": 0.8881578947368421,
"acc_norm_stderr": 0.02564834125169361
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.77,
"acc_stderr": 0.04229525846816505,
"acc_norm": 0.77,
"acc_norm_stderr": 0.04229525846816505
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.8188679245283019,
"acc_stderr": 0.023702963526757798,
"acc_norm": 0.8188679245283019,
"acc_norm_stderr": 0.023702963526757798
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.8958333333333334,
"acc_stderr": 0.025545239210256917,
"acc_norm": 0.8958333333333334,
"acc_norm_stderr": 0.025545239210256917
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.53,
"acc_stderr": 0.05016135580465919,
"acc_norm": 0.53,
"acc_norm_stderr": 0.05016135580465919
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.62,
"acc_stderr": 0.04878317312145633,
"acc_norm": 0.62,
"acc_norm_stderr": 0.04878317312145633
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.43,
"acc_stderr": 0.04975698519562429,
"acc_norm": 0.43,
"acc_norm_stderr": 0.04975698519562429
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.7398843930635838,
"acc_stderr": 0.03345036916788991,
"acc_norm": 0.7398843930635838,
"acc_norm_stderr": 0.03345036916788991
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.5882352941176471,
"acc_stderr": 0.04897104952726366,
"acc_norm": 0.5882352941176471,
"acc_norm_stderr": 0.04897104952726366
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.83,
"acc_stderr": 0.03775251680686371,
"acc_norm": 0.83,
"acc_norm_stderr": 0.03775251680686371
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.7914893617021277,
"acc_stderr": 0.026556982117838746,
"acc_norm": 0.7914893617021277,
"acc_norm_stderr": 0.026556982117838746
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.6052631578947368,
"acc_stderr": 0.045981880578165414,
"acc_norm": 0.6052631578947368,
"acc_norm_stderr": 0.045981880578165414
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.7724137931034483,
"acc_stderr": 0.03493950380131183,
"acc_norm": 0.7724137931034483,
"acc_norm_stderr": 0.03493950380131183
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.716931216931217,
"acc_stderr": 0.023201392938194978,
"acc_norm": 0.716931216931217,
"acc_norm_stderr": 0.023201392938194978
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.5317460317460317,
"acc_stderr": 0.04463112720677173,
"acc_norm": 0.5317460317460317,
"acc_norm_stderr": 0.04463112720677173
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.61,
"acc_stderr": 0.04902071300001975,
"acc_norm": 0.61,
"acc_norm_stderr": 0.04902071300001975
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.9032258064516129,
"acc_stderr": 0.016818943416345197,
"acc_norm": 0.9032258064516129,
"acc_norm_stderr": 0.016818943416345197
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.6600985221674877,
"acc_stderr": 0.033327690684107895,
"acc_norm": 0.6600985221674877,
"acc_norm_stderr": 0.033327690684107895
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.8,
"acc_stderr": 0.04020151261036845,
"acc_norm": 0.8,
"acc_norm_stderr": 0.04020151261036845
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.8606060606060606,
"acc_stderr": 0.0270459488258654,
"acc_norm": 0.8606060606060606,
"acc_norm_stderr": 0.0270459488258654
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.9343434343434344,
"acc_stderr": 0.017646526677233335,
"acc_norm": 0.9343434343434344,
"acc_norm_stderr": 0.017646526677233335
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.9792746113989638,
"acc_stderr": 0.010281417011909039,
"acc_norm": 0.9792746113989638,
"acc_norm_stderr": 0.010281417011909039
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.8179487179487179,
"acc_stderr": 0.019565236782930887,
"acc_norm": 0.8179487179487179,
"acc_norm_stderr": 0.019565236782930887
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.4444444444444444,
"acc_stderr": 0.030296771286067323,
"acc_norm": 0.4444444444444444,
"acc_norm_stderr": 0.030296771286067323
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.8487394957983193,
"acc_stderr": 0.023274255898707952,
"acc_norm": 0.8487394957983193,
"acc_norm_stderr": 0.023274255898707952
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.5099337748344371,
"acc_stderr": 0.04081677107248436,
"acc_norm": 0.5099337748344371,
"acc_norm_stderr": 0.04081677107248436
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.9302752293577982,
"acc_stderr": 0.010919426411848617,
"acc_norm": 0.9302752293577982,
"acc_norm_stderr": 0.010919426411848617
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.6481481481481481,
"acc_stderr": 0.032568505702936464,
"acc_norm": 0.6481481481481481,
"acc_norm_stderr": 0.032568505702936464
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.9313725490196079,
"acc_stderr": 0.017744453647073315,
"acc_norm": 0.9313725490196079,
"acc_norm_stderr": 0.017744453647073315
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.9113924050632911,
"acc_stderr": 0.018498315206865384,
"acc_norm": 0.9113924050632911,
"acc_norm_stderr": 0.018498315206865384
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.8116591928251121,
"acc_stderr": 0.026241132996407252,
"acc_norm": 0.8116591928251121,
"acc_norm_stderr": 0.026241132996407252
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.8625954198473282,
"acc_stderr": 0.030194823996804475,
"acc_norm": 0.8625954198473282,
"acc_norm_stderr": 0.030194823996804475
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.8925619834710744,
"acc_stderr": 0.028268812192540627,
"acc_norm": 0.8925619834710744,
"acc_norm_stderr": 0.028268812192540627
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.8888888888888888,
"acc_stderr": 0.03038159675665168,
"acc_norm": 0.8888888888888888,
"acc_norm_stderr": 0.03038159675665168
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.8588957055214724,
"acc_stderr": 0.027351605518389752,
"acc_norm": 0.8588957055214724,
"acc_norm_stderr": 0.027351605518389752
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.6071428571428571,
"acc_stderr": 0.04635550135609976,
"acc_norm": 0.6071428571428571,
"acc_norm_stderr": 0.04635550135609976
},
"harness|hendrycksTest-management|5": {
"acc": 0.8737864077669902,
"acc_stderr": 0.03288180278808628,
"acc_norm": 0.8737864077669902,
"acc_norm_stderr": 0.03288180278808628
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.9316239316239316,
"acc_stderr": 0.016534627684311357,
"acc_norm": 0.9316239316239316,
"acc_norm_stderr": 0.016534627684311357
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.88,
"acc_stderr": 0.032659863237109066,
"acc_norm": 0.88,
"acc_norm_stderr": 0.032659863237109066
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.9080459770114943,
"acc_stderr": 0.010333225570778516,
"acc_norm": 0.9080459770114943,
"acc_norm_stderr": 0.010333225570778516
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.8294797687861272,
"acc_stderr": 0.020247961569303728,
"acc_norm": 0.8294797687861272,
"acc_norm_stderr": 0.020247961569303728
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.7441340782122905,
"acc_stderr": 0.014593620923210754,
"acc_norm": 0.7441340782122905,
"acc_norm_stderr": 0.014593620923210754
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.8333333333333334,
"acc_stderr": 0.021339479988816024,
"acc_norm": 0.8333333333333334,
"acc_norm_stderr": 0.021339479988816024
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.8392282958199357,
"acc_stderr": 0.020862388082391888,
"acc_norm": 0.8392282958199357,
"acc_norm_stderr": 0.020862388082391888
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.8765432098765432,
"acc_stderr": 0.01830386880689179,
"acc_norm": 0.8765432098765432,
"acc_norm_stderr": 0.01830386880689179
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.6418439716312057,
"acc_stderr": 0.028602085862759426,
"acc_norm": 0.6418439716312057,
"acc_norm_stderr": 0.028602085862759426
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.605606258148631,
"acc_stderr": 0.012482141665631177,
"acc_norm": 0.605606258148631,
"acc_norm_stderr": 0.012482141665631177
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.8382352941176471,
"acc_stderr": 0.022368672562886747,
"acc_norm": 0.8382352941176471,
"acc_norm_stderr": 0.022368672562886747
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.8251633986928104,
"acc_stderr": 0.015366167064780648,
"acc_norm": 0.8251633986928104,
"acc_norm_stderr": 0.015366167064780648
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.7363636363636363,
"acc_stderr": 0.04220224692971987,
"acc_norm": 0.7363636363636363,
"acc_norm_stderr": 0.04220224692971987
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.8367346938775511,
"acc_stderr": 0.02366169917709861,
"acc_norm": 0.8367346938775511,
"acc_norm_stderr": 0.02366169917709861
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.900497512437811,
"acc_stderr": 0.021166216304659393,
"acc_norm": 0.900497512437811,
"acc_norm_stderr": 0.021166216304659393
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.92,
"acc_stderr": 0.0272659924344291,
"acc_norm": 0.92,
"acc_norm_stderr": 0.0272659924344291
},
"harness|hendrycksTest-virology|5": {
"acc": 0.572289156626506,
"acc_stderr": 0.038515976837185335,
"acc_norm": 0.572289156626506,
"acc_norm_stderr": 0.038515976837185335
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8830409356725146,
"acc_stderr": 0.024648068961366152,
"acc_norm": 0.8830409356725146,
"acc_norm_stderr": 0.024648068961366152
},
"harness|truthfulqa:mc|0": {
"mc1": 0.40636474908200737,
"mc1_stderr": 0.0171938358120939,
"mc2": 0.5630835911486544,
"mc2_stderr": 0.01535913374944441
},
"harness|winogrande|5": {
"acc": 0.8279400157853196,
"acc_stderr": 0.010607731615247022
},
"harness|gsm8k|5": {
"acc": 0.6542835481425322,
"acc_stderr": 0.013100422990441573
}
}
```
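If you prefer to inspect the raw results file directly, it can be fetched from the repository by the filename linked above. A minimal sketch — the unweighted mean below is a naive aggregate and may differ from the leaderboard's own averaging, and the per-task entries are assumed to sit either at the top level, as shown above, or under a "results" key:

```python
import json
from huggingface_hub import hf_hub_download

# fetch the raw results file named in the link above
path = hf_hub_download(
    repo_id="open-llm-leaderboard/details_brucethemoose__Yi-34B-200K-DARE-megamerge-v8",
    repo_type="dataset",
    filename="results_2024-01-15T22-55-33.545655.json",
)
with open(path) as f:
    data = json.load(f)
results = data.get("results", data)  # tolerate either nesting

# naive unweighted mean over the MMLU (hendrycksTest) subtasks
mmlu = [v["acc"] for k, v in results.items() if k.startswith("harness|hendrycksTest")]
print(f"unweighted MMLU mean acc: {sum(mmlu) / len(mmlu):.4f}")
```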
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | open-llm-leaderboard/details_brucethemoose__Yi-34B-200K-DARE-megamerge-v8 | [
"region:us"
] | 2024-01-15T22:57:45+00:00 | {"pretty_name": "Evaluation run of brucethemoose/Yi-34B-200K-DARE-megamerge-v8", "dataset_summary": "Dataset automatically created during the evaluation run of model [brucethemoose/Yi-34B-200K-DARE-megamerge-v8](https://huggingface.co/brucethemoose/Yi-34B-200K-DARE-megamerge-v8) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_brucethemoose__Yi-34B-200K-DARE-megamerge-v8\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2024-01-15T22:55:33.545655](https://huggingface.co/datasets/open-llm-leaderboard/details_brucethemoose__Yi-34B-200K-DARE-megamerge-v8/blob/main/results_2024-01-15T22-55-33.545655.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.7655125295847778,\n \"acc_stderr\": 0.02809215125212353,\n \"acc_norm\": 0.7702186096680189,\n \"acc_norm_stderr\": 0.02861631002408439,\n \"mc1\": 0.40636474908200737,\n \"mc1_stderr\": 0.0171938358120939,\n \"mc2\": 0.5630835911486544,\n \"mc2_stderr\": 0.01535913374944441\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.6501706484641638,\n \"acc_stderr\": 0.013936809212158296,\n \"acc_norm\": 0.6774744027303754,\n \"acc_norm_stderr\": 0.013659980894277371\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6590320653256323,\n \"acc_stderr\": 0.004730658073041563,\n \"acc_norm\": 0.8605855407289384,\n \"acc_norm_stderr\": 0.0034567060380547555\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.47,\n \"acc_stderr\": 0.05016135580465919,\n \"acc_norm\": 0.47,\n \"acc_norm_stderr\": 0.05016135580465919\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.7185185185185186,\n \"acc_stderr\": 0.038850042458002526,\n \"acc_norm\": 0.7185185185185186,\n \"acc_norm_stderr\": 0.038850042458002526\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.8881578947368421,\n \"acc_stderr\": 0.02564834125169361,\n \"acc_norm\": 0.8881578947368421,\n \"acc_norm_stderr\": 0.02564834125169361\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.77,\n \"acc_stderr\": 0.04229525846816505,\n \"acc_norm\": 0.77,\n \"acc_norm_stderr\": 0.04229525846816505\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.8188679245283019,\n \"acc_stderr\": 0.023702963526757798,\n \"acc_norm\": 0.8188679245283019,\n \"acc_norm_stderr\": 0.023702963526757798\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.8958333333333334,\n \"acc_stderr\": 0.025545239210256917,\n \"acc_norm\": 0.8958333333333334,\n \"acc_norm_stderr\": 0.025545239210256917\n },\n 
\"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.53,\n \"acc_stderr\": 0.05016135580465919,\n \"acc_norm\": 0.53,\n \"acc_norm_stderr\": 0.05016135580465919\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.62,\n \"acc_stderr\": 0.04878317312145633,\n \"acc_norm\": 0.62,\n \"acc_norm_stderr\": 0.04878317312145633\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.43,\n \"acc_stderr\": 0.04975698519562429,\n \"acc_norm\": 0.43,\n \"acc_norm_stderr\": 0.04975698519562429\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.7398843930635838,\n \"acc_stderr\": 0.03345036916788991,\n \"acc_norm\": 0.7398843930635838,\n \"acc_norm_stderr\": 0.03345036916788991\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.5882352941176471,\n \"acc_stderr\": 0.04897104952726366,\n \"acc_norm\": 0.5882352941176471,\n \"acc_norm_stderr\": 0.04897104952726366\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.83,\n \"acc_stderr\": 0.03775251680686371,\n \"acc_norm\": 0.83,\n \"acc_norm_stderr\": 0.03775251680686371\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.7914893617021277,\n \"acc_stderr\": 0.026556982117838746,\n \"acc_norm\": 0.7914893617021277,\n \"acc_norm_stderr\": 0.026556982117838746\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.6052631578947368,\n \"acc_stderr\": 0.045981880578165414,\n \"acc_norm\": 0.6052631578947368,\n \"acc_norm_stderr\": 0.045981880578165414\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.7724137931034483,\n \"acc_stderr\": 0.03493950380131183,\n \"acc_norm\": 0.7724137931034483,\n \"acc_norm_stderr\": 0.03493950380131183\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.716931216931217,\n \"acc_stderr\": 0.023201392938194978,\n \"acc_norm\": 0.716931216931217,\n \"acc_norm_stderr\": 0.023201392938194978\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.5317460317460317,\n \"acc_stderr\": 0.04463112720677173,\n \"acc_norm\": 0.5317460317460317,\n \"acc_norm_stderr\": 0.04463112720677173\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.61,\n \"acc_stderr\": 0.04902071300001975,\n \"acc_norm\": 0.61,\n \"acc_norm_stderr\": 0.04902071300001975\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.9032258064516129,\n \"acc_stderr\": 0.016818943416345197,\n \"acc_norm\": 0.9032258064516129,\n \"acc_norm_stderr\": 0.016818943416345197\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.6600985221674877,\n \"acc_stderr\": 0.033327690684107895,\n \"acc_norm\": 0.6600985221674877,\n \"acc_norm_stderr\": 0.033327690684107895\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.8,\n \"acc_stderr\": 0.04020151261036845,\n \"acc_norm\": 0.8,\n \"acc_norm_stderr\": 0.04020151261036845\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.8606060606060606,\n \"acc_stderr\": 0.0270459488258654,\n \"acc_norm\": 0.8606060606060606,\n \"acc_norm_stderr\": 0.0270459488258654\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.9343434343434344,\n \"acc_stderr\": 0.017646526677233335,\n \"acc_norm\": 0.9343434343434344,\n \"acc_norm_stderr\": 0.017646526677233335\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.9792746113989638,\n \"acc_stderr\": 0.010281417011909039,\n \"acc_norm\": 0.9792746113989638,\n \"acc_norm_stderr\": 
0.010281417011909039\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.8179487179487179,\n \"acc_stderr\": 0.019565236782930887,\n \"acc_norm\": 0.8179487179487179,\n \"acc_norm_stderr\": 0.019565236782930887\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.4444444444444444,\n \"acc_stderr\": 0.030296771286067323,\n \"acc_norm\": 0.4444444444444444,\n \"acc_norm_stderr\": 0.030296771286067323\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.8487394957983193,\n \"acc_stderr\": 0.023274255898707952,\n \"acc_norm\": 0.8487394957983193,\n \"acc_norm_stderr\": 0.023274255898707952\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.5099337748344371,\n \"acc_stderr\": 0.04081677107248436,\n \"acc_norm\": 0.5099337748344371,\n \"acc_norm_stderr\": 0.04081677107248436\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.9302752293577982,\n \"acc_stderr\": 0.010919426411848617,\n \"acc_norm\": 0.9302752293577982,\n \"acc_norm_stderr\": 0.010919426411848617\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.6481481481481481,\n \"acc_stderr\": 0.032568505702936464,\n \"acc_norm\": 0.6481481481481481,\n \"acc_norm_stderr\": 0.032568505702936464\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.9313725490196079,\n \"acc_stderr\": 0.017744453647073315,\n \"acc_norm\": 0.9313725490196079,\n \"acc_norm_stderr\": 0.017744453647073315\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.9113924050632911,\n \"acc_stderr\": 0.018498315206865384,\n \"acc_norm\": 0.9113924050632911,\n \"acc_norm_stderr\": 0.018498315206865384\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.8116591928251121,\n \"acc_stderr\": 0.026241132996407252,\n \"acc_norm\": 0.8116591928251121,\n \"acc_norm_stderr\": 0.026241132996407252\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.8625954198473282,\n \"acc_stderr\": 0.030194823996804475,\n \"acc_norm\": 0.8625954198473282,\n \"acc_norm_stderr\": 0.030194823996804475\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.8925619834710744,\n \"acc_stderr\": 0.028268812192540627,\n \"acc_norm\": 0.8925619834710744,\n \"acc_norm_stderr\": 0.028268812192540627\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.8888888888888888,\n \"acc_stderr\": 0.03038159675665168,\n \"acc_norm\": 0.8888888888888888,\n \"acc_norm_stderr\": 0.03038159675665168\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.8588957055214724,\n \"acc_stderr\": 0.027351605518389752,\n \"acc_norm\": 0.8588957055214724,\n \"acc_norm_stderr\": 0.027351605518389752\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.6071428571428571,\n \"acc_stderr\": 0.04635550135609976,\n \"acc_norm\": 0.6071428571428571,\n \"acc_norm_stderr\": 0.04635550135609976\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.8737864077669902,\n \"acc_stderr\": 0.03288180278808628,\n \"acc_norm\": 0.8737864077669902,\n \"acc_norm_stderr\": 0.03288180278808628\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.9316239316239316,\n \"acc_stderr\": 0.016534627684311357,\n \"acc_norm\": 0.9316239316239316,\n \"acc_norm_stderr\": 0.016534627684311357\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.88,\n \"acc_stderr\": 0.032659863237109066,\n \"acc_norm\": 0.88,\n \"acc_norm_stderr\": 0.032659863237109066\n },\n 
\"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.9080459770114943,\n \"acc_stderr\": 0.010333225570778516,\n \"acc_norm\": 0.9080459770114943,\n \"acc_norm_stderr\": 0.010333225570778516\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.8294797687861272,\n \"acc_stderr\": 0.020247961569303728,\n \"acc_norm\": 0.8294797687861272,\n \"acc_norm_stderr\": 0.020247961569303728\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.7441340782122905,\n \"acc_stderr\": 0.014593620923210754,\n \"acc_norm\": 0.7441340782122905,\n \"acc_norm_stderr\": 0.014593620923210754\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.8333333333333334,\n \"acc_stderr\": 0.021339479988816024,\n \"acc_norm\": 0.8333333333333334,\n \"acc_norm_stderr\": 0.021339479988816024\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.8392282958199357,\n \"acc_stderr\": 0.020862388082391888,\n \"acc_norm\": 0.8392282958199357,\n \"acc_norm_stderr\": 0.020862388082391888\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.8765432098765432,\n \"acc_stderr\": 0.01830386880689179,\n \"acc_norm\": 0.8765432098765432,\n \"acc_norm_stderr\": 0.01830386880689179\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.6418439716312057,\n \"acc_stderr\": 0.028602085862759426,\n \"acc_norm\": 0.6418439716312057,\n \"acc_norm_stderr\": 0.028602085862759426\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.605606258148631,\n \"acc_stderr\": 0.012482141665631177,\n \"acc_norm\": 0.605606258148631,\n \"acc_norm_stderr\": 0.012482141665631177\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.8382352941176471,\n \"acc_stderr\": 0.022368672562886747,\n \"acc_norm\": 0.8382352941176471,\n \"acc_norm_stderr\": 0.022368672562886747\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.8251633986928104,\n \"acc_stderr\": 0.015366167064780648,\n \"acc_norm\": 0.8251633986928104,\n \"acc_norm_stderr\": 0.015366167064780648\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.7363636363636363,\n \"acc_stderr\": 0.04220224692971987,\n \"acc_norm\": 0.7363636363636363,\n \"acc_norm_stderr\": 0.04220224692971987\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.8367346938775511,\n \"acc_stderr\": 0.02366169917709861,\n \"acc_norm\": 0.8367346938775511,\n \"acc_norm_stderr\": 0.02366169917709861\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.900497512437811,\n \"acc_stderr\": 0.021166216304659393,\n \"acc_norm\": 0.900497512437811,\n \"acc_norm_stderr\": 0.021166216304659393\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.92,\n \"acc_stderr\": 0.0272659924344291,\n \"acc_norm\": 0.92,\n \"acc_norm_stderr\": 0.0272659924344291\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.572289156626506,\n \"acc_stderr\": 0.038515976837185335,\n \"acc_norm\": 0.572289156626506,\n \"acc_norm_stderr\": 0.038515976837185335\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.8830409356725146,\n \"acc_stderr\": 0.024648068961366152,\n \"acc_norm\": 0.8830409356725146,\n \"acc_norm_stderr\": 0.024648068961366152\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.40636474908200737,\n \"mc1_stderr\": 0.0171938358120939,\n \"mc2\": 0.5630835911486544,\n \"mc2_stderr\": 0.01535913374944441\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.8279400157853196,\n \"acc_stderr\": 0.010607731615247022\n },\n \"harness|gsm8k|5\": {\n \"acc\": 
0.6542835481425322,\n \"acc_stderr\": 0.013100422990441573\n }\n}\n```", "repo_url": "https://huggingface.co/brucethemoose/Yi-34B-200K-DARE-megamerge-v8", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2024_01_15T22_55_33.545655", "path": ["**/details_harness|arc:challenge|25_2024-01-15T22-55-33.545655.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2024-01-15T22-55-33.545655.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2024_01_15T22_55_33.545655", "path": ["**/details_harness|gsm8k|5_2024-01-15T22-55-33.545655.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2024-01-15T22-55-33.545655.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2024_01_15T22_55_33.545655", "path": ["**/details_harness|hellaswag|10_2024-01-15T22-55-33.545655.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2024-01-15T22-55-33.545655.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2024_01_15T22_55_33.545655", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-15T22-55-33.545655.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-15T22-55-33.545655.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-15T22-55-33.545655.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-15T22-55-33.545655.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-15T22-55-33.545655.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-15T22-55-33.545655.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-15T22-55-33.545655.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-15T22-55-33.545655.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-01-15T22-55-33.545655.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-15T22-55-33.545655.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-15T22-55-33.545655.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-15T22-55-33.545655.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-15T22-55-33.545655.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-15T22-55-33.545655.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-15T22-55-33.545655.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-15T22-55-33.545655.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-15T22-55-33.545655.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-15T22-55-33.545655.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-15T22-55-33.545655.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-15T22-55-33.545655.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-15T22-55-33.545655.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-15T22-55-33.545655.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-15T22-55-33.545655.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-15T22-55-33.545655.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-15T22-55-33.545655.parquet", 
"**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-15T22-55-33.545655.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-15T22-55-33.545655.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-15T22-55-33.545655.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-15T22-55-33.545655.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-15T22-55-33.545655.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-15T22-55-33.545655.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-15T22-55-33.545655.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-15T22-55-33.545655.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-15T22-55-33.545655.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-15T22-55-33.545655.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-15T22-55-33.545655.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-15T22-55-33.545655.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-15T22-55-33.545655.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-15T22-55-33.545655.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-15T22-55-33.545655.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-15T22-55-33.545655.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-15T22-55-33.545655.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-15T22-55-33.545655.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-15T22-55-33.545655.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-15T22-55-33.545655.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-15T22-55-33.545655.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-15T22-55-33.545655.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-15T22-55-33.545655.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-15T22-55-33.545655.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-15T22-55-33.545655.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-15T22-55-33.545655.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-15T22-55-33.545655.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-15T22-55-33.545655.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-15T22-55-33.545655.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-15T22-55-33.545655.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-15T22-55-33.545655.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-15T22-55-33.545655.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-15T22-55-33.545655.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-15T22-55-33.545655.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-15T22-55-33.545655.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-15T22-55-33.545655.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-15T22-55-33.545655.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-15T22-55-33.545655.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-15T22-55-33.545655.parquet", 
"**/details_harness|hendrycksTest-college_computer_science|5_2024-01-15T22-55-33.545655.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-01-15T22-55-33.545655.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-15T22-55-33.545655.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-15T22-55-33.545655.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-15T22-55-33.545655.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-15T22-55-33.545655.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-15T22-55-33.545655.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-15T22-55-33.545655.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-15T22-55-33.545655.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-15T22-55-33.545655.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-15T22-55-33.545655.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-15T22-55-33.545655.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-15T22-55-33.545655.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-15T22-55-33.545655.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-15T22-55-33.545655.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-15T22-55-33.545655.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-15T22-55-33.545655.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-15T22-55-33.545655.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-15T22-55-33.545655.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-15T22-55-33.545655.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-15T22-55-33.545655.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-15T22-55-33.545655.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-15T22-55-33.545655.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-15T22-55-33.545655.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-15T22-55-33.545655.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-15T22-55-33.545655.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-15T22-55-33.545655.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-15T22-55-33.545655.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-15T22-55-33.545655.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-15T22-55-33.545655.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-15T22-55-33.545655.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-15T22-55-33.545655.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-15T22-55-33.545655.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-15T22-55-33.545655.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-15T22-55-33.545655.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-15T22-55-33.545655.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-15T22-55-33.545655.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-15T22-55-33.545655.parquet", 
"**/details_harness|hendrycksTest-philosophy|5_2024-01-15T22-55-33.545655.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-15T22-55-33.545655.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-15T22-55-33.545655.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-15T22-55-33.545655.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-15T22-55-33.545655.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-15T22-55-33.545655.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-15T22-55-33.545655.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-15T22-55-33.545655.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-15T22-55-33.545655.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-15T22-55-33.545655.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-15T22-55-33.545655.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-15T22-55-33.545655.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2024_01_15T22_55_33.545655", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-15T22-55-33.545655.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-15T22-55-33.545655.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2024_01_15T22_55_33.545655", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-15T22-55-33.545655.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-15T22-55-33.545655.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2024_01_15T22_55_33.545655", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-15T22-55-33.545655.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-15T22-55-33.545655.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2024_01_15T22_55_33.545655", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-15T22-55-33.545655.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-15T22-55-33.545655.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2024_01_15T22_55_33.545655", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-15T22-55-33.545655.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-15T22-55-33.545655.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2024_01_15T22_55_33.545655", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-15T22-55-33.545655.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-15T22-55-33.545655.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2024_01_15T22_55_33.545655", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-15T22-55-33.545655.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-15T22-55-33.545655.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2024_01_15T22_55_33.545655", "path": 
["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-15T22-55-33.545655.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-15T22-55-33.545655.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2024_01_15T22_55_33.545655", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-15T22-55-33.545655.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-15T22-55-33.545655.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2024_01_15T22_55_33.545655", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-15T22-55-33.545655.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-15T22-55-33.545655.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2024_01_15T22_55_33.545655", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-15T22-55-33.545655.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-15T22-55-33.545655.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2024_01_15T22_55_33.545655", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-15T22-55-33.545655.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-15T22-55-33.545655.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2024_01_15T22_55_33.545655", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-15T22-55-33.545655.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-15T22-55-33.545655.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2024_01_15T22_55_33.545655", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-15T22-55-33.545655.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-15T22-55-33.545655.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2024_01_15T22_55_33.545655", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-15T22-55-33.545655.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-15T22-55-33.545655.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2024_01_15T22_55_33.545655", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-15T22-55-33.545655.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-15T22-55-33.545655.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2024_01_15T22_55_33.545655", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-15T22-55-33.545655.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-15T22-55-33.545655.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2024_01_15T22_55_33.545655", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-15T22-55-33.545655.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-global_facts|5_2024-01-15T22-55-33.545655.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2024_01_15T22_55_33.545655", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-15T22-55-33.545655.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-15T22-55-33.545655.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2024_01_15T22_55_33.545655", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-15T22-55-33.545655.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-15T22-55-33.545655.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2024_01_15T22_55_33.545655", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-15T22-55-33.545655.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-15T22-55-33.545655.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2024_01_15T22_55_33.545655", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-15T22-55-33.545655.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-15T22-55-33.545655.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2024_01_15T22_55_33.545655", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-15T22-55-33.545655.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-15T22-55-33.545655.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2024_01_15T22_55_33.545655", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-15T22-55-33.545655.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-15T22-55-33.545655.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2024_01_15T22_55_33.545655", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-15T22-55-33.545655.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-15T22-55-33.545655.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2024_01_15T22_55_33.545655", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-15T22-55-33.545655.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-15T22-55-33.545655.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2024_01_15T22_55_33.545655", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-15T22-55-33.545655.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-15T22-55-33.545655.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2024_01_15T22_55_33.545655", "path": 
["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-15T22-55-33.545655.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-15T22-55-33.545655.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2024_01_15T22_55_33.545655", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-15T22-55-33.545655.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-15T22-55-33.545655.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2024_01_15T22_55_33.545655", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-15T22-55-33.545655.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-15T22-55-33.545655.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2024_01_15T22_55_33.545655", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-15T22-55-33.545655.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-15T22-55-33.545655.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2024_01_15T22_55_33.545655", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-15T22-55-33.545655.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-15T22-55-33.545655.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2024_01_15T22_55_33.545655", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-15T22-55-33.545655.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-15T22-55-33.545655.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2024_01_15T22_55_33.545655", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-15T22-55-33.545655.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-15T22-55-33.545655.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2024_01_15T22_55_33.545655", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-15T22-55-33.545655.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-15T22-55-33.545655.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2024_01_15T22_55_33.545655", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-15T22-55-33.545655.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-15T22-55-33.545655.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2024_01_15T22_55_33.545655", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-15T22-55-33.545655.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-15T22-55-33.545655.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2024_01_15T22_55_33.545655", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-15T22-55-33.545655.parquet"]}, 
{"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-15T22-55-33.545655.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2024_01_15T22_55_33.545655", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-15T22-55-33.545655.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-15T22-55-33.545655.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2024_01_15T22_55_33.545655", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-15T22-55-33.545655.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-15T22-55-33.545655.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2024_01_15T22_55_33.545655", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-15T22-55-33.545655.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-15T22-55-33.545655.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2024_01_15T22_55_33.545655", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-15T22-55-33.545655.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-15T22-55-33.545655.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2024_01_15T22_55_33.545655", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-15T22-55-33.545655.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-15T22-55-33.545655.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2024_01_15T22_55_33.545655", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-15T22-55-33.545655.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-15T22-55-33.545655.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2024_01_15T22_55_33.545655", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-15T22-55-33.545655.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-15T22-55-33.545655.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2024_01_15T22_55_33.545655", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-15T22-55-33.545655.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-15T22-55-33.545655.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2024_01_15T22_55_33.545655", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-15T22-55-33.545655.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-15T22-55-33.545655.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2024_01_15T22_55_33.545655", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-15T22-55-33.545655.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-15T22-55-33.545655.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2024_01_15T22_55_33.545655", "path": 
["**/details_harness|hendrycksTest-professional_law|5_2024-01-15T22-55-33.545655.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-01-15T22-55-33.545655.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2024_01_15T22_55_33.545655", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-15T22-55-33.545655.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-15T22-55-33.545655.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2024_01_15T22_55_33.545655", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-15T22-55-33.545655.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-15T22-55-33.545655.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2024_01_15T22_55_33.545655", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-15T22-55-33.545655.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-15T22-55-33.545655.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2024_01_15T22_55_33.545655", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-15T22-55-33.545655.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-15T22-55-33.545655.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2024_01_15T22_55_33.545655", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-15T22-55-33.545655.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-15T22-55-33.545655.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2024_01_15T22_55_33.545655", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-15T22-55-33.545655.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-15T22-55-33.545655.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2024_01_15T22_55_33.545655", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-15T22-55-33.545655.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-15T22-55-33.545655.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2024_01_15T22_55_33.545655", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-15T22-55-33.545655.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-15T22-55-33.545655.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2024_01_15T22_55_33.545655", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-15T22-55-33.545655.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-15T22-55-33.545655.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2024_01_15T22_55_33.545655", "path": ["**/details_harness|winogrande|5_2024-01-15T22-55-33.545655.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2024-01-15T22-55-33.545655.parquet"]}]}, {"config_name": "results", "data_files": [{"split": 
"2024_01_15T22_55_33.545655", "path": ["results_2024-01-15T22-55-33.545655.parquet"]}, {"split": "latest", "path": ["results_2024-01-15T22-55-33.545655.parquet"]}]}]} | 2024-01-15T22:58:06+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of brucethemoose/Yi-34B-200K-DARE-megamerge-v8
Dataset automatically created during the evaluation run of model brucethemoose/Yi-34B-200K-DARE-megamerge-v8 on the Open LLM Leaderboard.
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.

The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
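```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_brucethemoose__Yi-34B-200K-DARE-megamerge-v8",
	"harness_winogrande_5",
	split="train")
```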
## Latest results
These are the latest results from run 2024-01-15T22:55:33.545655 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
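```python
{
    # aggregate "all" block from the run; the per-task scores are in the full results file
    "all": {
        "acc": 0.7655125295847778,
        "acc_stderr": 0.02809215125212353,
        "acc_norm": 0.7702186096680189,
        "acc_norm_stderr": 0.02861631002408439,
        "mc1": 0.40636474908200737,
        "mc1_stderr": 0.0171938358120939,
        "mc2": 0.5630835911486544,
        "mc2_stderr": 0.01535913374944441
    }
}
```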
## Dataset Details
### Dataset Description
- Curated by:
- Funded by [optional]:
- Shared by [optional]:
- Language(s) (NLP):
- License:
### Dataset Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Out-of-Scope Use
## Dataset Structure
## Dataset Creation
### Curation Rationale
### Source Data
#### Data Collection and Processing
#### Who are the source data producers?
### Annotations [optional]
#### Annotation process
#### Who are the annotators?
#### Personal and Sensitive Information
## Bias, Risks, and Limitations
### Recommendations
Users should be made aware of the risks, biases, and limitations of the dataset. More information is needed for further recommendations.
## Citation [optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Dataset Card Authors [optional]
## Dataset Card Contact
f626c3ee5dd8b224e3bf6d6de1e5cff0f38baf69 |
# Dataset of light_cruiser_hime/軽巡棲姫 (Kantai Collection)
This is the dataset of light_cruiser_hime/軽巡棲姫 (Kantai Collection), containing 11 images and their tags.
The core tags of this character are `black_hair, horns, long_hair, pale_skin, parted_bangs, covered_eyes, breasts`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:----------|:-------------------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 11 | 10.68 MiB | [Download](https://huggingface.co/datasets/CyberHarem/light_cruiser_hime_kantaicollection/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 11 | 8.46 MiB | [Download](https://huggingface.co/datasets/CyberHarem/light_cruiser_hime_kantaicollection/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 26 | 15.49 MiB | [Download](https://huggingface.co/datasets/CyberHarem/light_cruiser_hime_kantaicollection/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 11 | 10.12 MiB | [Download](https://huggingface.co/datasets/CyberHarem/light_cruiser_hime_kantaicollection/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 26 | 17.29 MiB | [Download](https://huggingface.co/datasets/CyberHarem/light_cruiser_hime_kantaicollection/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/light_cruiser_hime_kantaicollection',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
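The `IMG+TXT` packages (e.g. `dataset-800.zip`) need no waifuc at all. Below is a minimal sketch, assuming each image in the archive is paired with a same-named `.txt` file holding its tags (the layout implied by the `IMG+TXT` type above):

```python
import os
import zipfile

from huggingface_hub import hf_hub_download

# download and extract the 800px IMG+TXT package
zip_file = hf_hub_download(
    repo_id='CyberHarem/light_cruiser_hime_kantaicollection',
    repo_type='dataset',
    filename='dataset-800.zip',
)
dataset_dir = 'dataset_800'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# pair each image with its same-named .txt tag file (assumed layout)
for name in sorted(os.listdir(dataset_dir)):
    stem, ext = os.path.splitext(name)
    if ext.lower() in ('.png', '.jpg', '.jpeg', '.webp'):
        tag_file = os.path.join(dataset_dir, stem + '.txt')
        if os.path.exists(tag_file):
            with open(tag_file, 'r', encoding='utf-8') as f:
                print(name, '->', f.read().strip())
```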
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 11 |  |  |  |  |  | 1girl, abyssal_ship, mask, solo, blindfold, elbow_gloves, bare_shoulders, black_dress, black_gloves, blush, short_dress, machinery, serafuku, smile |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | abyssal_ship | mask | solo | blindfold | elbow_gloves | bare_shoulders | black_dress | black_gloves | blush | short_dress | machinery | serafuku | smile |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:---------------|:-------|:-------|:------------|:---------------|:-----------------|:--------------|:---------------|:--------|:--------------|:------------|:-----------|:--------|
| 0 | 11 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
| CyberHarem/light_cruiser_hime_kantaicollection | [
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] | 2024-01-15T23:04:10+00:00 | {"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]} | 2024-01-15T23:06:36+00:00 | [] | [] | TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
1bbb008dab697a3c8a5fd0916f7a76bcaa21e78d |
# Dataset Card for Evaluation run of senseable/moe-x33
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [senseable/moe-x33](https://huggingface.co/senseable/moe-x33) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can, for instance, do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_senseable__moe-x33",
"harness_winogrande_5",
split="train")
```
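To fetch the aggregated metrics instead of per-task details, you can load the "results" configuration mentioned above; a minimal sketch, following the split-naming convention described earlier, where the "latest" split resolves to the most recent timestamped run:

```python
from datasets import load_dataset

# The "results" configuration stores the aggregated metrics of the run;
# "latest" points at the most recent timestamped split.
results = load_dataset(
    "open-llm-leaderboard/details_senseable__moe-x33",
    "results",
    split="latest",
)
print(results[0])
```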
## Latest results
These are the [latest results from run 2024-01-15T23:19:04.817000](https://huggingface.co/datasets/open-llm-leaderboard/details_senseable__moe-x33/blob/main/results_2024-01-15T23-19-04.817000.json) (note that there might be results for other tasks in the repo if successive evaluations didn't cover the same tasks; you can find each one in the "results" configuration and under the "latest" split of each evaluation):
```python
{
"all": {
"acc": 0.24904589888958967,
"acc_stderr": 0.030588499686851015,
"acc_norm": 0.24981435362475682,
"acc_norm_stderr": 0.03140336687195182,
"mc1": 0.24357405140758873,
"mc1_stderr": 0.015026354824910782,
"mc2": 0.5114318988938285,
"mc2_stderr": 0.016424037479575066
},
"harness|arc:challenge|25": {
"acc": 0.21160409556313994,
"acc_stderr": 0.01193591635863285,
"acc_norm": 0.2619453924914676,
"acc_norm_stderr": 0.012849054826858117
},
"harness|hellaswag|10": {
"acc": 0.25761800438159727,
"acc_stderr": 0.004364287353415448,
"acc_norm": 0.264389563831906,
"acc_norm_stderr": 0.004401063265803209
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.22,
"acc_stderr": 0.04163331998932268,
"acc_norm": 0.22,
"acc_norm_stderr": 0.04163331998932268
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.2222222222222222,
"acc_stderr": 0.035914440841969694,
"acc_norm": 0.2222222222222222,
"acc_norm_stderr": 0.035914440841969694
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.2236842105263158,
"acc_stderr": 0.03391160934343602,
"acc_norm": 0.2236842105263158,
"acc_norm_stderr": 0.03391160934343602
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.3,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.3,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.2528301886792453,
"acc_stderr": 0.026749899771241235,
"acc_norm": 0.2528301886792453,
"acc_norm_stderr": 0.026749899771241235
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.2361111111111111,
"acc_stderr": 0.03551446610810826,
"acc_norm": 0.2361111111111111,
"acc_norm_stderr": 0.03551446610810826
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.21,
"acc_stderr": 0.04093601807403326,
"acc_norm": 0.21,
"acc_norm_stderr": 0.04093601807403326
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.26,
"acc_stderr": 0.0440844002276808,
"acc_norm": 0.26,
"acc_norm_stderr": 0.0440844002276808
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.2,
"acc_stderr": 0.04020151261036845,
"acc_norm": 0.2,
"acc_norm_stderr": 0.04020151261036845
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.2543352601156069,
"acc_stderr": 0.0332055644308557,
"acc_norm": 0.2543352601156069,
"acc_norm_stderr": 0.0332055644308557
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.21568627450980393,
"acc_stderr": 0.04092563958237654,
"acc_norm": 0.21568627450980393,
"acc_norm_stderr": 0.04092563958237654
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.27,
"acc_stderr": 0.044619604333847394,
"acc_norm": 0.27,
"acc_norm_stderr": 0.044619604333847394
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.2170212765957447,
"acc_stderr": 0.02694748312149622,
"acc_norm": 0.2170212765957447,
"acc_norm_stderr": 0.02694748312149622
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.2719298245614035,
"acc_stderr": 0.04185774424022056,
"acc_norm": 0.2719298245614035,
"acc_norm_stderr": 0.04185774424022056
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.23448275862068965,
"acc_stderr": 0.035306258743465914,
"acc_norm": 0.23448275862068965,
"acc_norm_stderr": 0.035306258743465914
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.1984126984126984,
"acc_stderr": 0.02053948126188688,
"acc_norm": 0.1984126984126984,
"acc_norm_stderr": 0.02053948126188688
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.2777777777777778,
"acc_stderr": 0.040061680838488746,
"acc_norm": 0.2777777777777778,
"acc_norm_stderr": 0.040061680838488746
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.18,
"acc_stderr": 0.038612291966536934,
"acc_norm": 0.18,
"acc_norm_stderr": 0.038612291966536934
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.1870967741935484,
"acc_stderr": 0.02218571009225226,
"acc_norm": 0.1870967741935484,
"acc_norm_stderr": 0.02218571009225226
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.1724137931034483,
"acc_stderr": 0.02657767218303658,
"acc_norm": 0.1724137931034483,
"acc_norm_stderr": 0.02657767218303658
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.2,
"acc_stderr": 0.04020151261036843,
"acc_norm": 0.2,
"acc_norm_stderr": 0.04020151261036843
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.21818181818181817,
"acc_stderr": 0.03225078108306289,
"acc_norm": 0.21818181818181817,
"acc_norm_stderr": 0.03225078108306289
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.3484848484848485,
"acc_stderr": 0.033948539651564025,
"acc_norm": 0.3484848484848485,
"acc_norm_stderr": 0.033948539651564025
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.35233160621761656,
"acc_stderr": 0.034474782864143586,
"acc_norm": 0.35233160621761656,
"acc_norm_stderr": 0.034474782864143586
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.32564102564102565,
"acc_stderr": 0.02375966576741229,
"acc_norm": 0.32564102564102565,
"acc_norm_stderr": 0.02375966576741229
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.2111111111111111,
"acc_stderr": 0.024882116857655075,
"acc_norm": 0.2111111111111111,
"acc_norm_stderr": 0.024882116857655075
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.3319327731092437,
"acc_stderr": 0.030588697013783663,
"acc_norm": 0.3319327731092437,
"acc_norm_stderr": 0.030588697013783663
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.2119205298013245,
"acc_stderr": 0.033367670865679766,
"acc_norm": 0.2119205298013245,
"acc_norm_stderr": 0.033367670865679766
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.29724770642201837,
"acc_stderr": 0.01959570722464355,
"acc_norm": 0.29724770642201837,
"acc_norm_stderr": 0.01959570722464355
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.3888888888888889,
"acc_stderr": 0.03324708911809117,
"acc_norm": 0.3888888888888889,
"acc_norm_stderr": 0.03324708911809117
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.25,
"acc_stderr": 0.03039153369274154,
"acc_norm": 0.25,
"acc_norm_stderr": 0.03039153369274154
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.270042194092827,
"acc_stderr": 0.028900721906293426,
"acc_norm": 0.270042194092827,
"acc_norm_stderr": 0.028900721906293426
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.26905829596412556,
"acc_stderr": 0.02976377940687497,
"acc_norm": 0.26905829596412556,
"acc_norm_stderr": 0.02976377940687497
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.31297709923664124,
"acc_stderr": 0.04066962905677697,
"acc_norm": 0.31297709923664124,
"acc_norm_stderr": 0.04066962905677697
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.21487603305785125,
"acc_stderr": 0.037494924487096966,
"acc_norm": 0.21487603305785125,
"acc_norm_stderr": 0.037494924487096966
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.21296296296296297,
"acc_stderr": 0.03957835471980979,
"acc_norm": 0.21296296296296297,
"acc_norm_stderr": 0.03957835471980979
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.26380368098159507,
"acc_stderr": 0.03462419931615623,
"acc_norm": 0.26380368098159507,
"acc_norm_stderr": 0.03462419931615623
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.3125,
"acc_stderr": 0.043994650575715215,
"acc_norm": 0.3125,
"acc_norm_stderr": 0.043994650575715215
},
"harness|hendrycksTest-management|5": {
"acc": 0.2524271844660194,
"acc_stderr": 0.04301250399690878,
"acc_norm": 0.2524271844660194,
"acc_norm_stderr": 0.04301250399690878
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.2094017094017094,
"acc_stderr": 0.026655699653922744,
"acc_norm": 0.2094017094017094,
"acc_norm_stderr": 0.026655699653922744
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.29,
"acc_stderr": 0.045604802157206845,
"acc_norm": 0.29,
"acc_norm_stderr": 0.045604802157206845
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.23754789272030652,
"acc_stderr": 0.015218733046150193,
"acc_norm": 0.23754789272030652,
"acc_norm_stderr": 0.015218733046150193
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.1936416184971098,
"acc_stderr": 0.021274230317515543,
"acc_norm": 0.1936416184971098,
"acc_norm_stderr": 0.021274230317515543
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.23798882681564246,
"acc_stderr": 0.014242630070574915,
"acc_norm": 0.23798882681564246,
"acc_norm_stderr": 0.014242630070574915
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.26143790849673204,
"acc_stderr": 0.025160998214292456,
"acc_norm": 0.26143790849673204,
"acc_norm_stderr": 0.025160998214292456
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.21543408360128619,
"acc_stderr": 0.02335022547547142,
"acc_norm": 0.21543408360128619,
"acc_norm_stderr": 0.02335022547547142
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.2191358024691358,
"acc_stderr": 0.023016705640262203,
"acc_norm": 0.2191358024691358,
"acc_norm_stderr": 0.023016705640262203
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.23404255319148937,
"acc_stderr": 0.025257861359432417,
"acc_norm": 0.23404255319148937,
"acc_norm_stderr": 0.025257861359432417
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.2457627118644068,
"acc_stderr": 0.010996156635142692,
"acc_norm": 0.2457627118644068,
"acc_norm_stderr": 0.010996156635142692
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.18382352941176472,
"acc_stderr": 0.023529242185193106,
"acc_norm": 0.18382352941176472,
"acc_norm_stderr": 0.023529242185193106
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.24509803921568626,
"acc_stderr": 0.01740181671142766,
"acc_norm": 0.24509803921568626,
"acc_norm_stderr": 0.01740181671142766
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.20909090909090908,
"acc_stderr": 0.03895091015724137,
"acc_norm": 0.20909090909090908,
"acc_norm_stderr": 0.03895091015724137
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.2653061224489796,
"acc_stderr": 0.028263889943784613,
"acc_norm": 0.2653061224489796,
"acc_norm_stderr": 0.028263889943784613
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.23383084577114427,
"acc_stderr": 0.02992941540834838,
"acc_norm": 0.23383084577114427,
"acc_norm_stderr": 0.02992941540834838
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.32,
"acc_stderr": 0.04688261722621504,
"acc_norm": 0.32,
"acc_norm_stderr": 0.04688261722621504
},
"harness|hendrycksTest-virology|5": {
"acc": 0.2469879518072289,
"acc_stderr": 0.03357351982064536,
"acc_norm": 0.2469879518072289,
"acc_norm_stderr": 0.03357351982064536
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.3157894736842105,
"acc_stderr": 0.035650796707083106,
"acc_norm": 0.3157894736842105,
"acc_norm_stderr": 0.035650796707083106
},
"harness|truthfulqa:mc|0": {
"mc1": 0.24357405140758873,
"mc1_stderr": 0.015026354824910782,
"mc2": 0.5114318988938285,
"mc2_stderr": 0.016424037479575066
},
"harness|winogrande|5": {
"acc": 0.5098658247829518,
"acc_stderr": 0.014049749833367592
},
"harness|gsm8k|5": {
"acc": 0.0,
"acc_stderr": 0.0
}
}
```
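If you prefer working with the raw results file programmatically, a hedged sketch is shown below: it downloads the JSON linked above and flattens the per-task metrics into a table. The assumption that the per-task entries sit under a top-level `"results"` key follows the usual layout of these files and is not confirmed here:

```python
import json

import pandas as pd
from huggingface_hub import hf_hub_download

# Download the results file referenced by the link above.
path = hf_hub_download(
    repo_id="open-llm-leaderboard/details_senseable__moe-x33",
    filename="results_2024-01-15T23-19-04.817000.json",
    repo_type="dataset",
)
with open(path) as f:
    data = json.load(f)

# Per-task metrics may sit under a "results" key; fall back to the whole
# document otherwise (an assumption about the file layout).
metrics = data.get("results", data)
rows = [{"task": task, **values} for task, values in metrics.items() if task != "all"]
df = pd.DataFrame(rows).set_index("task")
print(df[["acc", "acc_stderr"]].sort_values("acc", ascending=False).head())
```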
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed]
#region-us
|
# Dataset Card for Evaluation run of senseable/moe-x33
Dataset automatically created during the evaluation run of model senseable/moe-x33 on the Open LLM Leaderboard.
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
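A minimal sketch, assuming the `details_<org>__<model>` repo naming used by sibling evaluation-run cards and picking `harness_winogrande_5` as one of the 63 configs:
```python
from datasets import load_dataset

# load one of the 63 per-task detail configurations;
# the "train" split always points to the latest results
data = load_dataset("open-llm-leaderboard/details_senseable__moe-x33",
	"harness_winogrande_5",
	split="train")
```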
## Latest results
These are the latest results from run 2024-01-15T23:19:04.817000 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
## Dataset Details
### Dataset Description
- Curated by:
- Funded by [optional]:
- Shared by [optional]:
- Language(s) (NLP):
- License:
### Dataset Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Out-of-Scope Use
## Dataset Structure
## Dataset Creation
### Curation Rationale
### Source Data
#### Data Collection and Processing
#### Who are the source data producers?
### Annotations [optional]
#### Annotation process
#### Who are the annotators?
#### Personal and Sensitive Information
## Bias, Risks, and Limitations
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Dataset Card Authors [optional]
## Dataset Card Contact
| [
"# Dataset Card for Evaluation run of senseable/moe-x33\n\n\n\nDataset automatically created during the evaluation run of model senseable/moe-x33 on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2024-01-15T23:19:04.817000(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of senseable/moe-x33\n\n\n\nDataset automatically created during the evaluation run of model senseable/moe-x33 on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2024-01-15T23:19:04.817000(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] | [
6,
177,
67,
4,
40,
29,
3,
4,
9,
6,
5,
7,
4,
7,
10,
9,
5,
9,
8,
10,
46,
8,
7,
10,
5
] | [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of senseable/moe-x33\n\n\n\nDataset automatically created during the evaluation run of model senseable/moe-x33 on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2024-01-15T23:19:04.817000(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):## Dataset Details### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Out-of-Scope Use## Dataset Structure## Dataset Creation### Curation Rationale### Source Data#### Data Collection and Processing#### Who are the source data producers?### Annotations [optional]#### Annotation process#### Who are the annotators?#### Personal and Sensitive Information## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Dataset Card Authors [optional]## Dataset Card Contact"
] |
55b27419583905ae19147b5c9dab8a7d287fa4f8 |
# Dataset of air_defense_hime/防空棲姫 (Kantai Collection)
This is the dataset of air_defense_hime/防空棲姫 (Kantai Collection), containing 11 images and their tags.
The core tags of this character are `breasts, long_hair, red_eyes, white_hair, hairband, large_breasts, hair_between_eyes, horns, medium_breasts, white_skin, very_long_hair`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:----------|:-----------------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 11 | 19.29 MiB | [Download](https://huggingface.co/datasets/CyberHarem/air_defense_hime_kantaicollection/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800              | 11 | 10.99 MiB | [Download](https://huggingface.co/datasets/CyberHarem/air_defense_hime_kantaicollection/resolve/main/dataset-800.zip)                | IMG+TXT    | Dataset with the shorter side not exceeding 800 pixels.               |
| stage3-p480-800 | 31 | 24.64 MiB | [Download](https://huggingface.co/datasets/CyberHarem/air_defense_hime_kantaicollection/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200             | 11 | 17.47 MiB | [Download](https://huggingface.co/datasets/CyberHarem/air_defense_hime_kantaicollection/resolve/main/dataset-1200.zip)               | IMG+TXT    | Dataset with the shorter side not exceeding 1200 pixels.              |
| stage3-p480-1200 | 31 | 33.05 MiB | [Download](https://huggingface.co/datasets/CyberHarem/air_defense_hime_kantaicollection/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
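Any packaged archive above can also be fetched directly; a minimal sketch with `huggingface_hub`, where the filename follows the `dataset-<name>.zip` pattern from the download links in the table:
```python
from huggingface_hub import hf_hub_download

# fetch the 800px IMG+TXT package listed in the table above
zip_file = hf_hub_download(
    repo_id='CyberHarem/air_defense_hime_kantaicollection',
    repo_type='dataset',
    filename='dataset-800.zip',
)
print(zip_file)  # local path of the downloaded archive
```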
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/air_defense_hime_kantaicollection',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:---------------------------------------------------------------------------------------------------------------------|
| 0 | 11 |  |  |  |  |  | looking_at_viewer, 1girl, abyssal_ship, solo, blush, navel, smile, boots, black_skirt, chain, thigh_strap, underboob |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | looking_at_viewer | 1girl | abyssal_ship | solo | blush | navel | smile | boots | black_skirt | chain | thigh_strap | underboob |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------|:--------|:---------------|:-------|:--------|:--------|:--------|:--------|:--------------|:--------|:--------------|:------------|
| 0 | 11 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X |
| CyberHarem/air_defense_hime_kantaicollection | [
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] | 2024-01-15T23:30:06+00:00 | {"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]} | 2024-01-15T23:32:35+00:00 | [] | [] | TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
| Dataset of air\_defense\_hime/防空棲姫 (Kantai Collection)
======================================================
This is the dataset of air\_defense\_hime/防空棲姫 (Kantai Collection), containing 11 images and their tags.
The core tags of this character are 'breasts, long\_hair, red\_eyes, white\_hair, hairband, large\_breasts, hair\_between\_eyes, horns, medium\_breasts, white\_skin, very\_long\_hair', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
List of Clusters
----------------
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
### Table Version
| [
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] | [
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] | [
44,
61,
5,
4
] | [
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
5331ecdc6117b352659ffd63b1f8b2c92436acc6 | # Dataset Card for "cai-conversation-dev1705363150"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | vwxyzjn/cai-conversation-dev1705363150 | [
"region:us"
] | 2024-01-16T00:02:31+00:00 | {"dataset_info": {"features": [{"name": "index", "dtype": "int64"}, {"name": "prompt", "dtype": "string"}, {"name": "init_prompt", "struct": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}, {"name": "init_response", "struct": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}, {"name": "critic_prompt", "struct": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}, {"name": "critic_response", "struct": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}, {"name": "revision_prompt", "struct": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}, {"name": "revision_response", "struct": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}, {"name": "messages", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}, {"name": "chosen", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}, {"name": "rejected", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}], "splits": [{"name": "train_sft", "num_bytes": 197105, "num_examples": 64}, {"name": "train_prefs", "num_bytes": 195767, "num_examples": 64}], "download_size": 219102, "dataset_size": 392872}, "configs": [{"config_name": "default", "data_files": [{"split": "train_sft", "path": "data/train_sft-*"}, {"split": "train_prefs", "path": "data/train_prefs-*"}]}]} | 2024-01-16T00:02:33+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "cai-conversation-dev1705363150"
More Information needed | [
"# Dataset Card for \"cai-conversation-dev1705363150\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"cai-conversation-dev1705363150\"\n\nMore Information needed"
] | [
6,
21
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"cai-conversation-dev1705363150\"\n\nMore Information needed"
] |
b29a9c4a6c3da9e8c5737bfa2e27636525e3652d |
# Dataset Card for Evaluation run of Kquant03/FrankenDPO-4x7B-bf16
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [Kquant03/FrankenDPO-4x7B-bf16](https://huggingface.co/Kquant03/FrankenDPO-4x7B-bf16) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Kquant03__FrankenDPO-4x7B-bf16",
"harness_winogrande_5",
split="train")
```
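The aggregated metrics mentioned above are also loadable on their own; a minimal sketch, assuming the "results" configuration and "latest" split listed in this card's metadata:
```python
from datasets import load_dataset

# the "results" config stores the aggregated metrics of each run;
# the "latest" split always resolves to the most recent evaluation
results = load_dataset("open-llm-leaderboard/details_Kquant03__FrankenDPO-4x7B-bf16",
	"results",
	split="latest")
```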
## Latest results
These are the [latest results from run 2024-01-16T00:01:09.381947](https://huggingface.co/datasets/open-llm-leaderboard/details_Kquant03__FrankenDPO-4x7B-bf16/blob/main/results_2024-01-16T00-01-09.381947.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.6534860321315697,
"acc_stderr": 0.032050646157542155,
"acc_norm": 0.6534840897708614,
"acc_norm_stderr": 0.03271653944446639,
"mc1": 0.4773561811505508,
"mc1_stderr": 0.01748554225848965,
"mc2": 0.63138491036624,
"mc2_stderr": 0.015420304563984515
},
"harness|arc:challenge|25": {
"acc": 0.6561433447098977,
"acc_stderr": 0.013880644570156215,
"acc_norm": 0.6868600682593856,
"acc_norm_stderr": 0.013552671543623492
},
"harness|hellaswag|10": {
"acc": 0.6864170483967337,
"acc_stderr": 0.00463000829392563,
"acc_norm": 0.8606851224855606,
"acc_norm_stderr": 0.003455671196993115
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.37,
"acc_stderr": 0.04852365870939099,
"acc_norm": 0.37,
"acc_norm_stderr": 0.04852365870939099
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6444444444444445,
"acc_stderr": 0.04135176749720385,
"acc_norm": 0.6444444444444445,
"acc_norm_stderr": 0.04135176749720385
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.6907894736842105,
"acc_stderr": 0.037610708698674805,
"acc_norm": 0.6907894736842105,
"acc_norm_stderr": 0.037610708698674805
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.62,
"acc_stderr": 0.048783173121456316,
"acc_norm": 0.62,
"acc_norm_stderr": 0.048783173121456316
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.7018867924528301,
"acc_stderr": 0.028152837942493864,
"acc_norm": 0.7018867924528301,
"acc_norm_stderr": 0.028152837942493864
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7708333333333334,
"acc_stderr": 0.03514697467862388,
"acc_norm": 0.7708333333333334,
"acc_norm_stderr": 0.03514697467862388
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.5,
"acc_stderr": 0.050251890762960605,
"acc_norm": 0.5,
"acc_norm_stderr": 0.050251890762960605
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.51,
"acc_stderr": 0.05024183937956911,
"acc_norm": 0.51,
"acc_norm_stderr": 0.05024183937956911
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.28,
"acc_stderr": 0.04512608598542126,
"acc_norm": 0.28,
"acc_norm_stderr": 0.04512608598542126
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6647398843930635,
"acc_stderr": 0.03599586301247077,
"acc_norm": 0.6647398843930635,
"acc_norm_stderr": 0.03599586301247077
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.4117647058823529,
"acc_stderr": 0.048971049527263666,
"acc_norm": 0.4117647058823529,
"acc_norm_stderr": 0.048971049527263666
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.77,
"acc_stderr": 0.04229525846816506,
"acc_norm": 0.77,
"acc_norm_stderr": 0.04229525846816506
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5957446808510638,
"acc_stderr": 0.03208115750788684,
"acc_norm": 0.5957446808510638,
"acc_norm_stderr": 0.03208115750788684
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.5087719298245614,
"acc_stderr": 0.04702880432049615,
"acc_norm": 0.5087719298245614,
"acc_norm_stderr": 0.04702880432049615
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5862068965517241,
"acc_stderr": 0.04104269211806232,
"acc_norm": 0.5862068965517241,
"acc_norm_stderr": 0.04104269211806232
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.4126984126984127,
"acc_stderr": 0.02535574126305527,
"acc_norm": 0.4126984126984127,
"acc_norm_stderr": 0.02535574126305527
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.4365079365079365,
"acc_stderr": 0.04435932892851466,
"acc_norm": 0.4365079365079365,
"acc_norm_stderr": 0.04435932892851466
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.32,
"acc_stderr": 0.046882617226215034,
"acc_norm": 0.32,
"acc_norm_stderr": 0.046882617226215034
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7806451612903226,
"acc_stderr": 0.023540799358723302,
"acc_norm": 0.7806451612903226,
"acc_norm_stderr": 0.023540799358723302
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.5024630541871922,
"acc_stderr": 0.035179450386910616,
"acc_norm": 0.5024630541871922,
"acc_norm_stderr": 0.035179450386910616
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.68,
"acc_stderr": 0.04688261722621505,
"acc_norm": 0.68,
"acc_norm_stderr": 0.04688261722621505
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7696969696969697,
"acc_stderr": 0.0328766675860349,
"acc_norm": 0.7696969696969697,
"acc_norm_stderr": 0.0328766675860349
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7929292929292929,
"acc_stderr": 0.028869778460267045,
"acc_norm": 0.7929292929292929,
"acc_norm_stderr": 0.028869778460267045
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8963730569948186,
"acc_stderr": 0.02199531196364424,
"acc_norm": 0.8963730569948186,
"acc_norm_stderr": 0.02199531196364424
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6692307692307692,
"acc_stderr": 0.023854795680971125,
"acc_norm": 0.6692307692307692,
"acc_norm_stderr": 0.023854795680971125
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.34814814814814815,
"acc_stderr": 0.029045600290616248,
"acc_norm": 0.34814814814814815,
"acc_norm_stderr": 0.029045600290616248
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6932773109243697,
"acc_stderr": 0.029953823891887037,
"acc_norm": 0.6932773109243697,
"acc_norm_stderr": 0.029953823891887037
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.3576158940397351,
"acc_stderr": 0.03913453431177258,
"acc_norm": 0.3576158940397351,
"acc_norm_stderr": 0.03913453431177258
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8458715596330275,
"acc_stderr": 0.015480826865374303,
"acc_norm": 0.8458715596330275,
"acc_norm_stderr": 0.015480826865374303
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5138888888888888,
"acc_stderr": 0.03408655867977749,
"acc_norm": 0.5138888888888888,
"acc_norm_stderr": 0.03408655867977749
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.8382352941176471,
"acc_stderr": 0.02584501798692692,
"acc_norm": 0.8382352941176471,
"acc_norm_stderr": 0.02584501798692692
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.810126582278481,
"acc_stderr": 0.02553010046023349,
"acc_norm": 0.810126582278481,
"acc_norm_stderr": 0.02553010046023349
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6860986547085202,
"acc_stderr": 0.031146796482972465,
"acc_norm": 0.6860986547085202,
"acc_norm_stderr": 0.031146796482972465
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7862595419847328,
"acc_stderr": 0.0359546161177469,
"acc_norm": 0.7862595419847328,
"acc_norm_stderr": 0.0359546161177469
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.7851239669421488,
"acc_stderr": 0.037494924487096966,
"acc_norm": 0.7851239669421488,
"acc_norm_stderr": 0.037494924487096966
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7870370370370371,
"acc_stderr": 0.0395783547198098,
"acc_norm": 0.7870370370370371,
"acc_norm_stderr": 0.0395783547198098
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7914110429447853,
"acc_stderr": 0.031921934489347235,
"acc_norm": 0.7914110429447853,
"acc_norm_stderr": 0.031921934489347235
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.4642857142857143,
"acc_stderr": 0.04733667890053756,
"acc_norm": 0.4642857142857143,
"acc_norm_stderr": 0.04733667890053756
},
"harness|hendrycksTest-management|5": {
"acc": 0.7669902912621359,
"acc_stderr": 0.04185832598928315,
"acc_norm": 0.7669902912621359,
"acc_norm_stderr": 0.04185832598928315
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8675213675213675,
"acc_stderr": 0.022209309073165616,
"acc_norm": 0.8675213675213675,
"acc_norm_stderr": 0.022209309073165616
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.73,
"acc_stderr": 0.0446196043338474,
"acc_norm": 0.73,
"acc_norm_stderr": 0.0446196043338474
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8352490421455939,
"acc_stderr": 0.013265346261323788,
"acc_norm": 0.8352490421455939,
"acc_norm_stderr": 0.013265346261323788
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7369942196531792,
"acc_stderr": 0.023703099525258172,
"acc_norm": 0.7369942196531792,
"acc_norm_stderr": 0.023703099525258172
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.45027932960893857,
"acc_stderr": 0.016639615236845814,
"acc_norm": 0.45027932960893857,
"acc_norm_stderr": 0.016639615236845814
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7320261437908496,
"acc_stderr": 0.025360603796242557,
"acc_norm": 0.7320261437908496,
"acc_norm_stderr": 0.025360603796242557
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.7170418006430869,
"acc_stderr": 0.02558306248998481,
"acc_norm": 0.7170418006430869,
"acc_norm_stderr": 0.02558306248998481
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7623456790123457,
"acc_stderr": 0.023683591837008557,
"acc_norm": 0.7623456790123457,
"acc_norm_stderr": 0.023683591837008557
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.4929078014184397,
"acc_stderr": 0.02982449855912901,
"acc_norm": 0.4929078014184397,
"acc_norm_stderr": 0.02982449855912901
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.4784876140808344,
"acc_stderr": 0.01275841094103892,
"acc_norm": 0.4784876140808344,
"acc_norm_stderr": 0.01275841094103892
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6838235294117647,
"acc_stderr": 0.028245687391462937,
"acc_norm": 0.6838235294117647,
"acc_norm_stderr": 0.028245687391462937
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6764705882352942,
"acc_stderr": 0.018926082916083376,
"acc_norm": 0.6764705882352942,
"acc_norm_stderr": 0.018926082916083376
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6636363636363637,
"acc_stderr": 0.04525393596302506,
"acc_norm": 0.6636363636363637,
"acc_norm_stderr": 0.04525393596302506
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.746938775510204,
"acc_stderr": 0.027833023871399677,
"acc_norm": 0.746938775510204,
"acc_norm_stderr": 0.027833023871399677
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.845771144278607,
"acc_stderr": 0.025538433368578323,
"acc_norm": 0.845771144278607,
"acc_norm_stderr": 0.025538433368578323
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.85,
"acc_stderr": 0.0358870281282637,
"acc_norm": 0.85,
"acc_norm_stderr": 0.0358870281282637
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5421686746987951,
"acc_stderr": 0.0387862677100236,
"acc_norm": 0.5421686746987951,
"acc_norm_stderr": 0.0387862677100236
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8362573099415205,
"acc_stderr": 0.028380919596145866,
"acc_norm": 0.8362573099415205,
"acc_norm_stderr": 0.028380919596145866
},
"harness|truthfulqa:mc|0": {
"mc1": 0.4773561811505508,
"mc1_stderr": 0.01748554225848965,
"mc2": 0.63138491036624,
"mc2_stderr": 0.015420304563984515
},
"harness|winogrande|5": {
"acc": 0.835043409629045,
"acc_stderr": 0.010430917468237431
},
"harness|gsm8k|5": {
"acc": 0.6770280515542078,
"acc_stderr": 0.012880360794851822
}
}
```
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | open-llm-leaderboard/details_Kquant03__FrankenDPO-4x7B-bf16 | [
"region:us"
] | 2024-01-16T00:03:26+00:00 | {"pretty_name": "Evaluation run of Kquant03/FrankenDPO-4x7B-bf16", "dataset_summary": "Dataset automatically created during the evaluation run of model [Kquant03/FrankenDPO-4x7B-bf16](https://huggingface.co/Kquant03/FrankenDPO-4x7B-bf16) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Kquant03__FrankenDPO-4x7B-bf16\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2024-01-16T00:01:09.381947](https://huggingface.co/datasets/open-llm-leaderboard/details_Kquant03__FrankenDPO-4x7B-bf16/blob/main/results_2024-01-16T00-01-09.381947.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6534860321315697,\n \"acc_stderr\": 0.032050646157542155,\n \"acc_norm\": 0.6534840897708614,\n \"acc_norm_stderr\": 0.03271653944446639,\n \"mc1\": 0.4773561811505508,\n \"mc1_stderr\": 0.01748554225848965,\n \"mc2\": 0.63138491036624,\n \"mc2_stderr\": 0.015420304563984515\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.6561433447098977,\n \"acc_stderr\": 0.013880644570156215,\n \"acc_norm\": 0.6868600682593856,\n \"acc_norm_stderr\": 0.013552671543623492\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6864170483967337,\n \"acc_stderr\": 0.00463000829392563,\n \"acc_norm\": 0.8606851224855606,\n \"acc_norm_stderr\": 0.003455671196993115\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.37,\n \"acc_stderr\": 0.04852365870939099,\n \"acc_norm\": 0.37,\n \"acc_norm_stderr\": 0.04852365870939099\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6444444444444445,\n \"acc_stderr\": 0.04135176749720385,\n \"acc_norm\": 0.6444444444444445,\n \"acc_norm_stderr\": 0.04135176749720385\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.6907894736842105,\n \"acc_stderr\": 0.037610708698674805,\n \"acc_norm\": 0.6907894736842105,\n \"acc_norm_stderr\": 0.037610708698674805\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.62,\n \"acc_stderr\": 0.048783173121456316,\n \"acc_norm\": 0.62,\n \"acc_norm_stderr\": 0.048783173121456316\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.7018867924528301,\n \"acc_stderr\": 0.028152837942493864,\n \"acc_norm\": 0.7018867924528301,\n \"acc_norm_stderr\": 0.028152837942493864\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7708333333333334,\n \"acc_stderr\": 0.03514697467862388,\n \"acc_norm\": 0.7708333333333334,\n \"acc_norm_stderr\": 0.03514697467862388\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.5,\n \"acc_stderr\": 
0.050251890762960605,\n \"acc_norm\": 0.5,\n \"acc_norm_stderr\": 0.050251890762960605\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.51,\n \"acc_stderr\": 0.05024183937956911,\n \"acc_norm\": 0.51,\n \"acc_norm_stderr\": 0.05024183937956911\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.28,\n \"acc_stderr\": 0.04512608598542126,\n \"acc_norm\": 0.28,\n \"acc_norm_stderr\": 0.04512608598542126\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6647398843930635,\n \"acc_stderr\": 0.03599586301247077,\n \"acc_norm\": 0.6647398843930635,\n \"acc_norm_stderr\": 0.03599586301247077\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.4117647058823529,\n \"acc_stderr\": 0.048971049527263666,\n \"acc_norm\": 0.4117647058823529,\n \"acc_norm_stderr\": 0.048971049527263666\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.77,\n \"acc_stderr\": 0.04229525846816506,\n \"acc_norm\": 0.77,\n \"acc_norm_stderr\": 0.04229525846816506\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.5957446808510638,\n \"acc_stderr\": 0.03208115750788684,\n \"acc_norm\": 0.5957446808510638,\n \"acc_norm_stderr\": 0.03208115750788684\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.5087719298245614,\n \"acc_stderr\": 0.04702880432049615,\n \"acc_norm\": 0.5087719298245614,\n \"acc_norm_stderr\": 0.04702880432049615\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.5862068965517241,\n \"acc_stderr\": 0.04104269211806232,\n \"acc_norm\": 0.5862068965517241,\n \"acc_norm_stderr\": 0.04104269211806232\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.4126984126984127,\n \"acc_stderr\": 0.02535574126305527,\n \"acc_norm\": 0.4126984126984127,\n \"acc_norm_stderr\": 0.02535574126305527\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.4365079365079365,\n \"acc_stderr\": 0.04435932892851466,\n \"acc_norm\": 0.4365079365079365,\n \"acc_norm_stderr\": 0.04435932892851466\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.32,\n \"acc_stderr\": 0.046882617226215034,\n \"acc_norm\": 0.32,\n \"acc_norm_stderr\": 0.046882617226215034\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7806451612903226,\n \"acc_stderr\": 0.023540799358723302,\n \"acc_norm\": 0.7806451612903226,\n \"acc_norm_stderr\": 0.023540799358723302\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.5024630541871922,\n \"acc_stderr\": 0.035179450386910616,\n \"acc_norm\": 0.5024630541871922,\n \"acc_norm_stderr\": 0.035179450386910616\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.68,\n \"acc_stderr\": 0.04688261722621505,\n \"acc_norm\": 0.68,\n \"acc_norm_stderr\": 0.04688261722621505\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.7696969696969697,\n \"acc_stderr\": 0.0328766675860349,\n \"acc_norm\": 0.7696969696969697,\n \"acc_norm_stderr\": 0.0328766675860349\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.7929292929292929,\n \"acc_stderr\": 0.028869778460267045,\n \"acc_norm\": 0.7929292929292929,\n \"acc_norm_stderr\": 0.028869778460267045\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.8963730569948186,\n \"acc_stderr\": 0.02199531196364424,\n \"acc_norm\": 0.8963730569948186,\n \"acc_norm_stderr\": 0.02199531196364424\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": 
{\n \"acc\": 0.6692307692307692,\n \"acc_stderr\": 0.023854795680971125,\n \"acc_norm\": 0.6692307692307692,\n \"acc_norm_stderr\": 0.023854795680971125\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.34814814814814815,\n \"acc_stderr\": 0.029045600290616248,\n \"acc_norm\": 0.34814814814814815,\n \"acc_norm_stderr\": 0.029045600290616248\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.6932773109243697,\n \"acc_stderr\": 0.029953823891887037,\n \"acc_norm\": 0.6932773109243697,\n \"acc_norm_stderr\": 0.029953823891887037\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.3576158940397351,\n \"acc_stderr\": 0.03913453431177258,\n \"acc_norm\": 0.3576158940397351,\n \"acc_norm_stderr\": 0.03913453431177258\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.8458715596330275,\n \"acc_stderr\": 0.015480826865374303,\n \"acc_norm\": 0.8458715596330275,\n \"acc_norm_stderr\": 0.015480826865374303\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.5138888888888888,\n \"acc_stderr\": 0.03408655867977749,\n \"acc_norm\": 0.5138888888888888,\n \"acc_norm_stderr\": 0.03408655867977749\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.8382352941176471,\n \"acc_stderr\": 0.02584501798692692,\n \"acc_norm\": 0.8382352941176471,\n \"acc_norm_stderr\": 0.02584501798692692\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.810126582278481,\n \"acc_stderr\": 0.02553010046023349,\n \"acc_norm\": 0.810126582278481,\n \"acc_norm_stderr\": 0.02553010046023349\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6860986547085202,\n \"acc_stderr\": 0.031146796482972465,\n \"acc_norm\": 0.6860986547085202,\n \"acc_norm_stderr\": 0.031146796482972465\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.7862595419847328,\n \"acc_stderr\": 0.0359546161177469,\n \"acc_norm\": 0.7862595419847328,\n \"acc_norm_stderr\": 0.0359546161177469\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.7851239669421488,\n \"acc_stderr\": 0.037494924487096966,\n \"acc_norm\": 0.7851239669421488,\n \"acc_norm_stderr\": 0.037494924487096966\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7870370370370371,\n \"acc_stderr\": 0.0395783547198098,\n \"acc_norm\": 0.7870370370370371,\n \"acc_norm_stderr\": 0.0395783547198098\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.7914110429447853,\n \"acc_stderr\": 0.031921934489347235,\n \"acc_norm\": 0.7914110429447853,\n \"acc_norm_stderr\": 0.031921934489347235\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.4642857142857143,\n \"acc_stderr\": 0.04733667890053756,\n \"acc_norm\": 0.4642857142857143,\n \"acc_norm_stderr\": 0.04733667890053756\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.7669902912621359,\n \"acc_stderr\": 0.04185832598928315,\n \"acc_norm\": 0.7669902912621359,\n \"acc_norm_stderr\": 0.04185832598928315\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8675213675213675,\n \"acc_stderr\": 0.022209309073165616,\n \"acc_norm\": 0.8675213675213675,\n \"acc_norm_stderr\": 0.022209309073165616\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.73,\n \"acc_stderr\": 0.0446196043338474,\n \"acc_norm\": 0.73,\n \"acc_norm_stderr\": 0.0446196043338474\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8352490421455939,\n \"acc_stderr\": 0.013265346261323788,\n 
\"acc_norm\": 0.8352490421455939,\n \"acc_norm_stderr\": 0.013265346261323788\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.7369942196531792,\n \"acc_stderr\": 0.023703099525258172,\n \"acc_norm\": 0.7369942196531792,\n \"acc_norm_stderr\": 0.023703099525258172\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.45027932960893857,\n \"acc_stderr\": 0.016639615236845814,\n \"acc_norm\": 0.45027932960893857,\n \"acc_norm_stderr\": 0.016639615236845814\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.7320261437908496,\n \"acc_stderr\": 0.025360603796242557,\n \"acc_norm\": 0.7320261437908496,\n \"acc_norm_stderr\": 0.025360603796242557\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7170418006430869,\n \"acc_stderr\": 0.02558306248998481,\n \"acc_norm\": 0.7170418006430869,\n \"acc_norm_stderr\": 0.02558306248998481\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.7623456790123457,\n \"acc_stderr\": 0.023683591837008557,\n \"acc_norm\": 0.7623456790123457,\n \"acc_norm_stderr\": 0.023683591837008557\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.4929078014184397,\n \"acc_stderr\": 0.02982449855912901,\n \"acc_norm\": 0.4929078014184397,\n \"acc_norm_stderr\": 0.02982449855912901\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4784876140808344,\n \"acc_stderr\": 0.01275841094103892,\n \"acc_norm\": 0.4784876140808344,\n \"acc_norm_stderr\": 0.01275841094103892\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.6838235294117647,\n \"acc_stderr\": 0.028245687391462937,\n \"acc_norm\": 0.6838235294117647,\n \"acc_norm_stderr\": 0.028245687391462937\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.6764705882352942,\n \"acc_stderr\": 0.018926082916083376,\n \"acc_norm\": 0.6764705882352942,\n \"acc_norm_stderr\": 0.018926082916083376\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6636363636363637,\n \"acc_stderr\": 0.04525393596302506,\n \"acc_norm\": 0.6636363636363637,\n \"acc_norm_stderr\": 0.04525393596302506\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.746938775510204,\n \"acc_stderr\": 0.027833023871399677,\n \"acc_norm\": 0.746938775510204,\n \"acc_norm_stderr\": 0.027833023871399677\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.845771144278607,\n \"acc_stderr\": 0.025538433368578323,\n \"acc_norm\": 0.845771144278607,\n \"acc_norm_stderr\": 0.025538433368578323\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.85,\n \"acc_stderr\": 0.0358870281282637,\n \"acc_norm\": 0.85,\n \"acc_norm_stderr\": 0.0358870281282637\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5421686746987951,\n \"acc_stderr\": 0.0387862677100236,\n \"acc_norm\": 0.5421686746987951,\n \"acc_norm_stderr\": 0.0387862677100236\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.8362573099415205,\n \"acc_stderr\": 0.028380919596145866,\n \"acc_norm\": 0.8362573099415205,\n \"acc_norm_stderr\": 0.028380919596145866\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.4773561811505508,\n \"mc1_stderr\": 0.01748554225848965,\n \"mc2\": 0.63138491036624,\n \"mc2_stderr\": 0.015420304563984515\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.835043409629045,\n \"acc_stderr\": 0.010430917468237431\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.6770280515542078,\n \"acc_stderr\": 0.012880360794851822\n }\n}\n```", "repo_url": 
"https://huggingface.co/Kquant03/FrankenDPO-4x7B-bf16", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2024_01_16T00_01_09.381947", "path": ["**/details_harness|arc:challenge|25_2024-01-16T00-01-09.381947.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2024-01-16T00-01-09.381947.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2024_01_16T00_01_09.381947", "path": ["**/details_harness|gsm8k|5_2024-01-16T00-01-09.381947.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2024-01-16T00-01-09.381947.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2024_01_16T00_01_09.381947", "path": ["**/details_harness|hellaswag|10_2024-01-16T00-01-09.381947.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2024-01-16T00-01-09.381947.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2024_01_16T00_01_09.381947", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-16T00-01-09.381947.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-16T00-01-09.381947.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-16T00-01-09.381947.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-16T00-01-09.381947.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-16T00-01-09.381947.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-16T00-01-09.381947.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-16T00-01-09.381947.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-16T00-01-09.381947.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-01-16T00-01-09.381947.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-16T00-01-09.381947.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-16T00-01-09.381947.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-16T00-01-09.381947.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-16T00-01-09.381947.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-16T00-01-09.381947.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-16T00-01-09.381947.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-16T00-01-09.381947.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-16T00-01-09.381947.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-16T00-01-09.381947.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-16T00-01-09.381947.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-16T00-01-09.381947.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-16T00-01-09.381947.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-16T00-01-09.381947.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-16T00-01-09.381947.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-16T00-01-09.381947.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-16T00-01-09.381947.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-16T00-01-09.381947.parquet", 
"**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-16T00-01-09.381947.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-16T00-01-09.381947.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-16T00-01-09.381947.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-16T00-01-09.381947.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-16T00-01-09.381947.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-16T00-01-09.381947.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-16T00-01-09.381947.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-16T00-01-09.381947.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-16T00-01-09.381947.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-16T00-01-09.381947.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-16T00-01-09.381947.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-16T00-01-09.381947.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-16T00-01-09.381947.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-16T00-01-09.381947.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-16T00-01-09.381947.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-16T00-01-09.381947.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-16T00-01-09.381947.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-16T00-01-09.381947.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-16T00-01-09.381947.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-16T00-01-09.381947.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-16T00-01-09.381947.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-16T00-01-09.381947.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-16T00-01-09.381947.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-16T00-01-09.381947.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-16T00-01-09.381947.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-16T00-01-09.381947.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-16T00-01-09.381947.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-16T00-01-09.381947.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-16T00-01-09.381947.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-16T00-01-09.381947.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-16T00-01-09.381947.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-16T00-01-09.381947.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-16T00-01-09.381947.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-16T00-01-09.381947.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-16T00-01-09.381947.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-16T00-01-09.381947.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-16T00-01-09.381947.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-16T00-01-09.381947.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-16T00-01-09.381947.parquet", 
"**/details_harness|hendrycksTest-college_mathematics|5_2024-01-16T00-01-09.381947.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-16T00-01-09.381947.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-16T00-01-09.381947.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-16T00-01-09.381947.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-16T00-01-09.381947.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-16T00-01-09.381947.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-16T00-01-09.381947.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-16T00-01-09.381947.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-16T00-01-09.381947.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-16T00-01-09.381947.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-16T00-01-09.381947.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-16T00-01-09.381947.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-16T00-01-09.381947.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-16T00-01-09.381947.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-16T00-01-09.381947.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-16T00-01-09.381947.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-16T00-01-09.381947.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-16T00-01-09.381947.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-16T00-01-09.381947.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-16T00-01-09.381947.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-16T00-01-09.381947.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-16T00-01-09.381947.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-16T00-01-09.381947.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-16T00-01-09.381947.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-16T00-01-09.381947.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-16T00-01-09.381947.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-16T00-01-09.381947.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-16T00-01-09.381947.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-16T00-01-09.381947.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-16T00-01-09.381947.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-16T00-01-09.381947.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-16T00-01-09.381947.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-16T00-01-09.381947.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-16T00-01-09.381947.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-16T00-01-09.381947.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-16T00-01-09.381947.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-16T00-01-09.381947.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-16T00-01-09.381947.parquet", 
"**/details_harness|hendrycksTest-prehistory|5_2024-01-16T00-01-09.381947.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-16T00-01-09.381947.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-16T00-01-09.381947.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-16T00-01-09.381947.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-16T00-01-09.381947.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-16T00-01-09.381947.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-16T00-01-09.381947.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-16T00-01-09.381947.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-16T00-01-09.381947.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-16T00-01-09.381947.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-16T00-01-09.381947.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2024_01_16T00_01_09.381947", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-16T00-01-09.381947.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-16T00-01-09.381947.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2024_01_16T00_01_09.381947", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-16T00-01-09.381947.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-16T00-01-09.381947.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2024_01_16T00_01_09.381947", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-16T00-01-09.381947.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-16T00-01-09.381947.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2024_01_16T00_01_09.381947", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-16T00-01-09.381947.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-16T00-01-09.381947.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2024_01_16T00_01_09.381947", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-16T00-01-09.381947.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-16T00-01-09.381947.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2024_01_16T00_01_09.381947", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-16T00-01-09.381947.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-16T00-01-09.381947.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2024_01_16T00_01_09.381947", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-16T00-01-09.381947.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-16T00-01-09.381947.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2024_01_16T00_01_09.381947", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-16T00-01-09.381947.parquet"]}, 
{"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-16T00-01-09.381947.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2024_01_16T00_01_09.381947", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-16T00-01-09.381947.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-16T00-01-09.381947.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2024_01_16T00_01_09.381947", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-16T00-01-09.381947.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-16T00-01-09.381947.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2024_01_16T00_01_09.381947", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-16T00-01-09.381947.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-16T00-01-09.381947.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2024_01_16T00_01_09.381947", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-16T00-01-09.381947.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-16T00-01-09.381947.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2024_01_16T00_01_09.381947", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-16T00-01-09.381947.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-16T00-01-09.381947.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2024_01_16T00_01_09.381947", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-16T00-01-09.381947.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-16T00-01-09.381947.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2024_01_16T00_01_09.381947", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-16T00-01-09.381947.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-16T00-01-09.381947.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2024_01_16T00_01_09.381947", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-16T00-01-09.381947.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-16T00-01-09.381947.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2024_01_16T00_01_09.381947", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-16T00-01-09.381947.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-16T00-01-09.381947.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2024_01_16T00_01_09.381947", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-16T00-01-09.381947.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-16T00-01-09.381947.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2024_01_16T00_01_09.381947", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-16T00-01-09.381947.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-16T00-01-09.381947.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2024_01_16T00_01_09.381947", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-16T00-01-09.381947.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-16T00-01-09.381947.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2024_01_16T00_01_09.381947", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-16T00-01-09.381947.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-16T00-01-09.381947.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2024_01_16T00_01_09.381947", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-16T00-01-09.381947.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-16T00-01-09.381947.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2024_01_16T00_01_09.381947", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-16T00-01-09.381947.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-16T00-01-09.381947.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2024_01_16T00_01_09.381947", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-16T00-01-09.381947.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-16T00-01-09.381947.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2024_01_16T00_01_09.381947", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-16T00-01-09.381947.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-16T00-01-09.381947.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2024_01_16T00_01_09.381947", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-16T00-01-09.381947.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-16T00-01-09.381947.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2024_01_16T00_01_09.381947", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-16T00-01-09.381947.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-16T00-01-09.381947.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2024_01_16T00_01_09.381947", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-16T00-01-09.381947.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-16T00-01-09.381947.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2024_01_16T00_01_09.381947", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-16T00-01-09.381947.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-16T00-01-09.381947.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2024_01_16T00_01_09.381947", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-16T00-01-09.381947.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-16T00-01-09.381947.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2024_01_16T00_01_09.381947", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-16T00-01-09.381947.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-16T00-01-09.381947.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2024_01_16T00_01_09.381947", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-16T00-01-09.381947.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-16T00-01-09.381947.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2024_01_16T00_01_09.381947", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-16T00-01-09.381947.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-16T00-01-09.381947.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2024_01_16T00_01_09.381947", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-16T00-01-09.381947.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-16T00-01-09.381947.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2024_01_16T00_01_09.381947", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-16T00-01-09.381947.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-16T00-01-09.381947.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2024_01_16T00_01_09.381947", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-16T00-01-09.381947.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-16T00-01-09.381947.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2024_01_16T00_01_09.381947", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-16T00-01-09.381947.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-16T00-01-09.381947.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2024_01_16T00_01_09.381947", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-16T00-01-09.381947.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-16T00-01-09.381947.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2024_01_16T00_01_09.381947", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-16T00-01-09.381947.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-16T00-01-09.381947.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2024_01_16T00_01_09.381947", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-16T00-01-09.381947.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-16T00-01-09.381947.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2024_01_16T00_01_09.381947", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-16T00-01-09.381947.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-16T00-01-09.381947.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2024_01_16T00_01_09.381947", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-16T00-01-09.381947.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-16T00-01-09.381947.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2024_01_16T00_01_09.381947", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-16T00-01-09.381947.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-16T00-01-09.381947.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2024_01_16T00_01_09.381947", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-16T00-01-09.381947.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-16T00-01-09.381947.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2024_01_16T00_01_09.381947", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-16T00-01-09.381947.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-16T00-01-09.381947.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2024_01_16T00_01_09.381947", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-16T00-01-09.381947.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-16T00-01-09.381947.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2024_01_16T00_01_09.381947", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-16T00-01-09.381947.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-16T00-01-09.381947.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2024_01_16T00_01_09.381947", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-16T00-01-09.381947.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-16T00-01-09.381947.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2024_01_16T00_01_09.381947", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-01-16T00-01-09.381947.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-professional_law|5_2024-01-16T00-01-09.381947.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2024_01_16T00_01_09.381947", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-16T00-01-09.381947.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-16T00-01-09.381947.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2024_01_16T00_01_09.381947", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-16T00-01-09.381947.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-16T00-01-09.381947.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2024_01_16T00_01_09.381947", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-16T00-01-09.381947.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-16T00-01-09.381947.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2024_01_16T00_01_09.381947", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-16T00-01-09.381947.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-16T00-01-09.381947.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2024_01_16T00_01_09.381947", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-16T00-01-09.381947.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-16T00-01-09.381947.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2024_01_16T00_01_09.381947", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-16T00-01-09.381947.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-16T00-01-09.381947.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2024_01_16T00_01_09.381947", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-16T00-01-09.381947.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-16T00-01-09.381947.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2024_01_16T00_01_09.381947", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-16T00-01-09.381947.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-16T00-01-09.381947.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2024_01_16T00_01_09.381947", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-16T00-01-09.381947.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-16T00-01-09.381947.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2024_01_16T00_01_09.381947", "path": ["**/details_harness|winogrande|5_2024-01-16T00-01-09.381947.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2024-01-16T00-01-09.381947.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2024_01_16T00_01_09.381947", "path": ["results_2024-01-16T00-01-09.381947.parquet"]}, {"split": "latest", "path": 
["results_2024-01-16T00-01-09.381947.parquet"]}]}]} | 2024-01-16T00:03:47+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of Kquant03/FrankenDPO-4x7B-bf16
Dataset automatically created during the evaluation run of model Kquant03/FrankenDPO-4x7B-bf16 on the Open LLM Leaderboard.
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
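The code block that normally follows was stripped from this rendering of the card. A minimal sketch, assuming the leaderboard's usual `details_<org>__<model>` repo naming (the repo id below is inferred, not confirmed) and the `harness_winogrande_5` config listed in the metadata above:
```python
from datasets import load_dataset

# Assumed repo id, following the details_<org>__<model> pattern used by the leaderboard.
data = load_dataset(
    "open-llm-leaderboard/details_Kquant03__FrankenDPO-4x7B-bf16",
    "harness_winogrande_5",  # one of the 63 per-task configs listed above
    split="train",
)
```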
## Latest results
These are the latest results from run 2024-01-16T00:01:09.381947 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
## Dataset Details
### Dataset Description
- Curated by:
- Funded by [optional]:
- Shared by [optional]:
- Language(s) (NLP):
- License:
### Dataset Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Out-of-Scope Use
## Dataset Structure
## Dataset Creation
### Curation Rationale
### Source Data
#### Data Collection and Processing
#### Who are the source data producers?
### Annotations [optional]
#### Annotation process
#### Who are the annotators?
#### Personal and Sensitive Information
## Bias, Risks, and Limitations
### Recommendations
Users should be made aware of the risks, biases, and limitations of the dataset. More information is needed for further recommendations.
## Citation [optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Dataset Card Authors [optional]
## Dataset Card Contact
|
736c4ae99bd770987a56aa1cd6a5ff936839998a |
# Dataset Card for Evaluation run of Swisslex/Mixtral-8x7b-DPO-v0.1
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [Swisslex/Mixtral-8x7b-DPO-v0.1](https://huggingface.co/Swisslex/Mixtral-8x7b-DPO-v0.1) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
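# Pick any of the 63 per-task configs; "train" points to the latest timestamped run.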
data = load_dataset("open-llm-leaderboard/details_Swisslex__Mixtral-8x7b-DPO-v0.1",
"harness_winogrande_5",
split="train")
```
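
The same call pattern works for the aggregated scores. A sketch, assuming the "results" config and "latest" split alias described above:

```python
from datasets import load_dataset

# "results" stores the aggregated metrics of each run;
# "latest" always points to the most recent timestamped split.
results = load_dataset(
    "open-llm-leaderboard/details_Swisslex__Mixtral-8x7b-DPO-v0.1",
    "results",
    split="latest",
)
```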
## Latest results
These are the [latest results from run 2024-01-16T00:07:22.506947](https://huggingface.co/datasets/open-llm-leaderboard/details_Swisslex__Mixtral-8x7b-DPO-v0.1/blob/main/results_2024-01-16T00-07-22.506947.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.7049846845762283,
"acc_stderr": 0.03051139695922859,
"acc_norm": 0.7095501952770573,
"acc_norm_stderr": 0.03110231839596143,
"mc1": 0.42472460220318237,
"mc1_stderr": 0.01730400095716748,
"mc2": 0.5738038140031454,
"mc2_stderr": 0.015194846416368368
},
"harness|arc:challenge|25": {
"acc": 0.6783276450511946,
"acc_stderr": 0.013650488084494162,
"acc_norm": 0.7090443686006825,
"acc_norm_stderr": 0.013273077865907588
},
"harness|hellaswag|10": {
"acc": 0.6859191396136228,
"acc_stderr": 0.0046320017323329835,
"acc_norm": 0.8761202947619996,
"acc_norm_stderr": 0.003287709741128806
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.35,
"acc_stderr": 0.047937248544110196,
"acc_norm": 0.35,
"acc_norm_stderr": 0.047937248544110196
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6592592592592592,
"acc_stderr": 0.040943762699967926,
"acc_norm": 0.6592592592592592,
"acc_norm_stderr": 0.040943762699967926
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.7960526315789473,
"acc_stderr": 0.0327900040631005,
"acc_norm": 0.7960526315789473,
"acc_norm_stderr": 0.0327900040631005
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.73,
"acc_stderr": 0.04461960433384741,
"acc_norm": 0.73,
"acc_norm_stderr": 0.04461960433384741
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.7886792452830189,
"acc_stderr": 0.025125766484827845,
"acc_norm": 0.7886792452830189,
"acc_norm_stderr": 0.025125766484827845
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.8194444444444444,
"acc_stderr": 0.032166008088022675,
"acc_norm": 0.8194444444444444,
"acc_norm_stderr": 0.032166008088022675
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.53,
"acc_stderr": 0.050161355804659205,
"acc_norm": 0.53,
"acc_norm_stderr": 0.050161355804659205
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.58,
"acc_stderr": 0.049604496374885836,
"acc_norm": 0.58,
"acc_norm_stderr": 0.049604496374885836
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.42,
"acc_stderr": 0.049604496374885836,
"acc_norm": 0.42,
"acc_norm_stderr": 0.049604496374885836
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6878612716763006,
"acc_stderr": 0.03533133389323657,
"acc_norm": 0.6878612716763006,
"acc_norm_stderr": 0.03533133389323657
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.49019607843137253,
"acc_stderr": 0.04974229460422817,
"acc_norm": 0.49019607843137253,
"acc_norm_stderr": 0.04974229460422817
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.81,
"acc_stderr": 0.039427724440366234,
"acc_norm": 0.81,
"acc_norm_stderr": 0.039427724440366234
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.6680851063829787,
"acc_stderr": 0.030783736757745643,
"acc_norm": 0.6680851063829787,
"acc_norm_stderr": 0.030783736757745643
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.6666666666666666,
"acc_stderr": 0.04434600701584925,
"acc_norm": 0.6666666666666666,
"acc_norm_stderr": 0.04434600701584925
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.6482758620689655,
"acc_stderr": 0.0397923663749741,
"acc_norm": 0.6482758620689655,
"acc_norm_stderr": 0.0397923663749741
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.4497354497354497,
"acc_stderr": 0.02562085704293665,
"acc_norm": 0.4497354497354497,
"acc_norm_stderr": 0.02562085704293665
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.5793650793650794,
"acc_stderr": 0.04415438226743745,
"acc_norm": 0.5793650793650794,
"acc_norm_stderr": 0.04415438226743745
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.41,
"acc_stderr": 0.04943110704237101,
"acc_norm": 0.41,
"acc_norm_stderr": 0.04943110704237101
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.8225806451612904,
"acc_stderr": 0.021732540689329286,
"acc_norm": 0.8225806451612904,
"acc_norm_stderr": 0.021732540689329286
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.6206896551724138,
"acc_stderr": 0.034139638059062345,
"acc_norm": 0.6206896551724138,
"acc_norm_stderr": 0.034139638059062345
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.73,
"acc_stderr": 0.044619604333847394,
"acc_norm": 0.73,
"acc_norm_stderr": 0.044619604333847394
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.8121212121212121,
"acc_stderr": 0.03050193405942914,
"acc_norm": 0.8121212121212121,
"acc_norm_stderr": 0.03050193405942914
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.8686868686868687,
"acc_stderr": 0.02406315641682252,
"acc_norm": 0.8686868686868687,
"acc_norm_stderr": 0.02406315641682252
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.9378238341968912,
"acc_stderr": 0.017426974154240528,
"acc_norm": 0.9378238341968912,
"acc_norm_stderr": 0.017426974154240528
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6846153846153846,
"acc_stderr": 0.02355964698318995,
"acc_norm": 0.6846153846153846,
"acc_norm_stderr": 0.02355964698318995
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.40370370370370373,
"acc_stderr": 0.029914812342227627,
"acc_norm": 0.40370370370370373,
"acc_norm_stderr": 0.029914812342227627
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.8025210084033614,
"acc_stderr": 0.02585916412205147,
"acc_norm": 0.8025210084033614,
"acc_norm_stderr": 0.02585916412205147
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.44370860927152317,
"acc_stderr": 0.040565279022817306,
"acc_norm": 0.44370860927152317,
"acc_norm_stderr": 0.040565279022817306
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8770642201834863,
"acc_stderr": 0.014078467983673376,
"acc_norm": 0.8770642201834863,
"acc_norm_stderr": 0.014078467983673376
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.6157407407407407,
"acc_stderr": 0.03317354514310742,
"acc_norm": 0.6157407407407407,
"acc_norm_stderr": 0.03317354514310742
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.8529411764705882,
"acc_stderr": 0.02485747808025046,
"acc_norm": 0.8529411764705882,
"acc_norm_stderr": 0.02485747808025046
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.8649789029535865,
"acc_stderr": 0.022245776632003694,
"acc_norm": 0.8649789029535865,
"acc_norm_stderr": 0.022245776632003694
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.7668161434977578,
"acc_stderr": 0.028380391147094702,
"acc_norm": 0.7668161434977578,
"acc_norm_stderr": 0.028380391147094702
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.8244274809160306,
"acc_stderr": 0.03336820338476076,
"acc_norm": 0.8244274809160306,
"acc_norm_stderr": 0.03336820338476076
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.8429752066115702,
"acc_stderr": 0.03321244842547128,
"acc_norm": 0.8429752066115702,
"acc_norm_stderr": 0.03321244842547128
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.8240740740740741,
"acc_stderr": 0.036809181416738807,
"acc_norm": 0.8240740740740741,
"acc_norm_stderr": 0.036809181416738807
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.803680981595092,
"acc_stderr": 0.031207970394709225,
"acc_norm": 0.803680981595092,
"acc_norm_stderr": 0.031207970394709225
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.5089285714285714,
"acc_stderr": 0.04745033255489123,
"acc_norm": 0.5089285714285714,
"acc_norm_stderr": 0.04745033255489123
},
"harness|hendrycksTest-management|5": {
"acc": 0.8446601941747572,
"acc_stderr": 0.03586594738573974,
"acc_norm": 0.8446601941747572,
"acc_norm_stderr": 0.03586594738573974
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.9145299145299145,
"acc_stderr": 0.018315891685625852,
"acc_norm": 0.9145299145299145,
"acc_norm_stderr": 0.018315891685625852
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.79,
"acc_stderr": 0.040936018074033256,
"acc_norm": 0.79,
"acc_norm_stderr": 0.040936018074033256
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8748403575989783,
"acc_stderr": 0.011832954239305733,
"acc_norm": 0.8748403575989783,
"acc_norm_stderr": 0.011832954239305733
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7947976878612717,
"acc_stderr": 0.021742519835276277,
"acc_norm": 0.7947976878612717,
"acc_norm_stderr": 0.021742519835276277
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.40782122905027934,
"acc_stderr": 0.016435865260914742,
"acc_norm": 0.40782122905027934,
"acc_norm_stderr": 0.016435865260914742
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7843137254901961,
"acc_stderr": 0.02355083135199509,
"acc_norm": 0.7843137254901961,
"acc_norm_stderr": 0.02355083135199509
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.8006430868167203,
"acc_stderr": 0.022691033780549656,
"acc_norm": 0.8006430868167203,
"acc_norm_stderr": 0.022691033780549656
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.8302469135802469,
"acc_stderr": 0.02088869041409387,
"acc_norm": 0.8302469135802469,
"acc_norm_stderr": 0.02088869041409387
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.5531914893617021,
"acc_stderr": 0.02965823509766691,
"acc_norm": 0.5531914893617021,
"acc_norm_stderr": 0.02965823509766691
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.5319426336375489,
"acc_stderr": 0.012744149704869647,
"acc_norm": 0.5319426336375489,
"acc_norm_stderr": 0.012744149704869647
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.7904411764705882,
"acc_stderr": 0.02472311040767707,
"acc_norm": 0.7904411764705882,
"acc_norm_stderr": 0.02472311040767707
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.7679738562091504,
"acc_stderr": 0.017077373377856923,
"acc_norm": 0.7679738562091504,
"acc_norm_stderr": 0.017077373377856923
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.7181818181818181,
"acc_stderr": 0.04309118709946458,
"acc_norm": 0.7181818181818181,
"acc_norm_stderr": 0.04309118709946458
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7306122448979592,
"acc_stderr": 0.02840125202902294,
"acc_norm": 0.7306122448979592,
"acc_norm_stderr": 0.02840125202902294
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8656716417910447,
"acc_stderr": 0.024112678240900798,
"acc_norm": 0.8656716417910447,
"acc_norm_stderr": 0.024112678240900798
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.85,
"acc_stderr": 0.03588702812826369,
"acc_norm": 0.85,
"acc_norm_stderr": 0.03588702812826369
},
"harness|hendrycksTest-virology|5": {
"acc": 0.536144578313253,
"acc_stderr": 0.038823108508905954,
"acc_norm": 0.536144578313253,
"acc_norm_stderr": 0.038823108508905954
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.9005847953216374,
"acc_stderr": 0.022949025579355044,
"acc_norm": 0.9005847953216374,
"acc_norm_stderr": 0.022949025579355044
},
"harness|truthfulqa:mc|0": {
"mc1": 0.42472460220318237,
"mc1_stderr": 0.01730400095716748,
"mc2": 0.5738038140031454,
"mc2_stderr": 0.015194846416368368
},
"harness|winogrande|5": {
"acc": 0.823993685872139,
"acc_stderr": 0.010703090882320708
},
"harness|gsm8k|5": {
"acc": 0.5375284306292646,
"acc_stderr": 0.013733636059107757
}
}
```
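
To inspect the raw results file linked above rather than the parquet export, it can be downloaded directly. A sketch, assuming only that the JSON mirrors the structure shown (the top-level nesting can differ between harness versions):

```python
import json

from huggingface_hub import hf_hub_download

# Fetch the exact results file referenced in the link above.
path = hf_hub_download(
    repo_id="open-llm-leaderboard/details_Swisslex__Mixtral-8x7b-DPO-v0.1",
    filename="results_2024-01-16T00-07-22.506947.json",
    repo_type="dataset",
)
with open(path) as f:
    run = json.load(f)

# The aggregate block shown above sits under "all"; fall back gracefully
# if this harness version nests task scores under a "results" key.
metrics = run.get("results", run)
print(metrics["all"]["acc"], metrics["all"]["acc_norm"])
```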
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases, and limitations of the dataset. More information is needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | open-llm-leaderboard/details_Swisslex__Mixtral-8x7b-DPO-v0.1 | [
"region:us"
] | 2024-01-16T00:09:40+00:00 | {"pretty_name": "Evaluation run of Swisslex/Mixtral-8x7b-DPO-v0.1", "dataset_summary": "Dataset automatically created during the evaluation run of model [Swisslex/Mixtral-8x7b-DPO-v0.1](https://huggingface.co/Swisslex/Mixtral-8x7b-DPO-v0.1) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Swisslex__Mixtral-8x7b-DPO-v0.1\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2024-01-16T00:07:22.506947](https://huggingface.co/datasets/open-llm-leaderboard/details_Swisslex__Mixtral-8x7b-DPO-v0.1/blob/main/results_2024-01-16T00-07-22.506947.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.7049846845762283,\n \"acc_stderr\": 0.03051139695922859,\n \"acc_norm\": 0.7095501952770573,\n \"acc_norm_stderr\": 0.03110231839596143,\n \"mc1\": 0.42472460220318237,\n \"mc1_stderr\": 0.01730400095716748,\n \"mc2\": 0.5738038140031454,\n \"mc2_stderr\": 0.015194846416368368\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.6783276450511946,\n \"acc_stderr\": 0.013650488084494162,\n \"acc_norm\": 0.7090443686006825,\n \"acc_norm_stderr\": 0.013273077865907588\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6859191396136228,\n \"acc_stderr\": 0.0046320017323329835,\n \"acc_norm\": 0.8761202947619996,\n \"acc_norm_stderr\": 0.003287709741128806\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.35,\n \"acc_stderr\": 0.047937248544110196,\n \"acc_norm\": 0.35,\n \"acc_norm_stderr\": 0.047937248544110196\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6592592592592592,\n \"acc_stderr\": 0.040943762699967926,\n \"acc_norm\": 0.6592592592592592,\n \"acc_norm_stderr\": 0.040943762699967926\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.7960526315789473,\n \"acc_stderr\": 0.0327900040631005,\n \"acc_norm\": 0.7960526315789473,\n \"acc_norm_stderr\": 0.0327900040631005\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.73,\n \"acc_stderr\": 0.04461960433384741,\n \"acc_norm\": 0.73,\n \"acc_norm_stderr\": 0.04461960433384741\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.7886792452830189,\n \"acc_stderr\": 0.025125766484827845,\n \"acc_norm\": 0.7886792452830189,\n \"acc_norm_stderr\": 0.025125766484827845\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.8194444444444444,\n \"acc_stderr\": 0.032166008088022675,\n \"acc_norm\": 0.8194444444444444,\n \"acc_norm_stderr\": 0.032166008088022675\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.53,\n 
\"acc_stderr\": 0.050161355804659205,\n \"acc_norm\": 0.53,\n \"acc_norm_stderr\": 0.050161355804659205\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.58,\n \"acc_stderr\": 0.049604496374885836,\n \"acc_norm\": 0.58,\n \"acc_norm_stderr\": 0.049604496374885836\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.42,\n \"acc_stderr\": 0.049604496374885836,\n \"acc_norm\": 0.42,\n \"acc_norm_stderr\": 0.049604496374885836\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6878612716763006,\n \"acc_stderr\": 0.03533133389323657,\n \"acc_norm\": 0.6878612716763006,\n \"acc_norm_stderr\": 0.03533133389323657\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.49019607843137253,\n \"acc_stderr\": 0.04974229460422817,\n \"acc_norm\": 0.49019607843137253,\n \"acc_norm_stderr\": 0.04974229460422817\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.81,\n \"acc_stderr\": 0.039427724440366234,\n \"acc_norm\": 0.81,\n \"acc_norm_stderr\": 0.039427724440366234\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.6680851063829787,\n \"acc_stderr\": 0.030783736757745643,\n \"acc_norm\": 0.6680851063829787,\n \"acc_norm_stderr\": 0.030783736757745643\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.6666666666666666,\n \"acc_stderr\": 0.04434600701584925,\n \"acc_norm\": 0.6666666666666666,\n \"acc_norm_stderr\": 0.04434600701584925\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.6482758620689655,\n \"acc_stderr\": 0.0397923663749741,\n \"acc_norm\": 0.6482758620689655,\n \"acc_norm_stderr\": 0.0397923663749741\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.4497354497354497,\n \"acc_stderr\": 0.02562085704293665,\n \"acc_norm\": 0.4497354497354497,\n \"acc_norm_stderr\": 0.02562085704293665\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.5793650793650794,\n \"acc_stderr\": 0.04415438226743745,\n \"acc_norm\": 0.5793650793650794,\n \"acc_norm_stderr\": 0.04415438226743745\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.41,\n \"acc_stderr\": 0.04943110704237101,\n \"acc_norm\": 0.41,\n \"acc_norm_stderr\": 0.04943110704237101\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.8225806451612904,\n \"acc_stderr\": 0.021732540689329286,\n \"acc_norm\": 0.8225806451612904,\n \"acc_norm_stderr\": 0.021732540689329286\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.6206896551724138,\n \"acc_stderr\": 0.034139638059062345,\n \"acc_norm\": 0.6206896551724138,\n \"acc_norm_stderr\": 0.034139638059062345\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.73,\n \"acc_stderr\": 0.044619604333847394,\n \"acc_norm\": 0.73,\n \"acc_norm_stderr\": 0.044619604333847394\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.8121212121212121,\n \"acc_stderr\": 0.03050193405942914,\n \"acc_norm\": 0.8121212121212121,\n \"acc_norm_stderr\": 0.03050193405942914\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.8686868686868687,\n \"acc_stderr\": 0.02406315641682252,\n \"acc_norm\": 0.8686868686868687,\n \"acc_norm_stderr\": 0.02406315641682252\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.9378238341968912,\n \"acc_stderr\": 0.017426974154240528,\n \"acc_norm\": 0.9378238341968912,\n \"acc_norm_stderr\": 0.017426974154240528\n },\n 
\"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.6846153846153846,\n \"acc_stderr\": 0.02355964698318995,\n \"acc_norm\": 0.6846153846153846,\n \"acc_norm_stderr\": 0.02355964698318995\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.40370370370370373,\n \"acc_stderr\": 0.029914812342227627,\n \"acc_norm\": 0.40370370370370373,\n \"acc_norm_stderr\": 0.029914812342227627\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.8025210084033614,\n \"acc_stderr\": 0.02585916412205147,\n \"acc_norm\": 0.8025210084033614,\n \"acc_norm_stderr\": 0.02585916412205147\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.44370860927152317,\n \"acc_stderr\": 0.040565279022817306,\n \"acc_norm\": 0.44370860927152317,\n \"acc_norm_stderr\": 0.040565279022817306\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.8770642201834863,\n \"acc_stderr\": 0.014078467983673376,\n \"acc_norm\": 0.8770642201834863,\n \"acc_norm_stderr\": 0.014078467983673376\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.6157407407407407,\n \"acc_stderr\": 0.03317354514310742,\n \"acc_norm\": 0.6157407407407407,\n \"acc_norm_stderr\": 0.03317354514310742\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.8529411764705882,\n \"acc_stderr\": 0.02485747808025046,\n \"acc_norm\": 0.8529411764705882,\n \"acc_norm_stderr\": 0.02485747808025046\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.8649789029535865,\n \"acc_stderr\": 0.022245776632003694,\n \"acc_norm\": 0.8649789029535865,\n \"acc_norm_stderr\": 0.022245776632003694\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.7668161434977578,\n \"acc_stderr\": 0.028380391147094702,\n \"acc_norm\": 0.7668161434977578,\n \"acc_norm_stderr\": 0.028380391147094702\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.8244274809160306,\n \"acc_stderr\": 0.03336820338476076,\n \"acc_norm\": 0.8244274809160306,\n \"acc_norm_stderr\": 0.03336820338476076\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.8429752066115702,\n \"acc_stderr\": 0.03321244842547128,\n \"acc_norm\": 0.8429752066115702,\n \"acc_norm_stderr\": 0.03321244842547128\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.8240740740740741,\n \"acc_stderr\": 0.036809181416738807,\n \"acc_norm\": 0.8240740740740741,\n \"acc_norm_stderr\": 0.036809181416738807\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.803680981595092,\n \"acc_stderr\": 0.031207970394709225,\n \"acc_norm\": 0.803680981595092,\n \"acc_norm_stderr\": 0.031207970394709225\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.5089285714285714,\n \"acc_stderr\": 0.04745033255489123,\n \"acc_norm\": 0.5089285714285714,\n \"acc_norm_stderr\": 0.04745033255489123\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.8446601941747572,\n \"acc_stderr\": 0.03586594738573974,\n \"acc_norm\": 0.8446601941747572,\n \"acc_norm_stderr\": 0.03586594738573974\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.9145299145299145,\n \"acc_stderr\": 0.018315891685625852,\n \"acc_norm\": 0.9145299145299145,\n \"acc_norm_stderr\": 0.018315891685625852\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.79,\n \"acc_stderr\": 0.040936018074033256,\n \"acc_norm\": 0.79,\n \"acc_norm_stderr\": 0.040936018074033256\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 
0.8748403575989783,\n \"acc_stderr\": 0.011832954239305733,\n \"acc_norm\": 0.8748403575989783,\n \"acc_norm_stderr\": 0.011832954239305733\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.7947976878612717,\n \"acc_stderr\": 0.021742519835276277,\n \"acc_norm\": 0.7947976878612717,\n \"acc_norm_stderr\": 0.021742519835276277\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.40782122905027934,\n \"acc_stderr\": 0.016435865260914742,\n \"acc_norm\": 0.40782122905027934,\n \"acc_norm_stderr\": 0.016435865260914742\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.7843137254901961,\n \"acc_stderr\": 0.02355083135199509,\n \"acc_norm\": 0.7843137254901961,\n \"acc_norm_stderr\": 0.02355083135199509\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.8006430868167203,\n \"acc_stderr\": 0.022691033780549656,\n \"acc_norm\": 0.8006430868167203,\n \"acc_norm_stderr\": 0.022691033780549656\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.8302469135802469,\n \"acc_stderr\": 0.02088869041409387,\n \"acc_norm\": 0.8302469135802469,\n \"acc_norm_stderr\": 0.02088869041409387\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.5531914893617021,\n \"acc_stderr\": 0.02965823509766691,\n \"acc_norm\": 0.5531914893617021,\n \"acc_norm_stderr\": 0.02965823509766691\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.5319426336375489,\n \"acc_stderr\": 0.012744149704869647,\n \"acc_norm\": 0.5319426336375489,\n \"acc_norm_stderr\": 0.012744149704869647\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.7904411764705882,\n \"acc_stderr\": 0.02472311040767707,\n \"acc_norm\": 0.7904411764705882,\n \"acc_norm_stderr\": 0.02472311040767707\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.7679738562091504,\n \"acc_stderr\": 0.017077373377856923,\n \"acc_norm\": 0.7679738562091504,\n \"acc_norm_stderr\": 0.017077373377856923\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.7181818181818181,\n \"acc_stderr\": 0.04309118709946458,\n \"acc_norm\": 0.7181818181818181,\n \"acc_norm_stderr\": 0.04309118709946458\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.7306122448979592,\n \"acc_stderr\": 0.02840125202902294,\n \"acc_norm\": 0.7306122448979592,\n \"acc_norm_stderr\": 0.02840125202902294\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8656716417910447,\n \"acc_stderr\": 0.024112678240900798,\n \"acc_norm\": 0.8656716417910447,\n \"acc_norm_stderr\": 0.024112678240900798\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.85,\n \"acc_stderr\": 0.03588702812826369,\n \"acc_norm\": 0.85,\n \"acc_norm_stderr\": 0.03588702812826369\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.536144578313253,\n \"acc_stderr\": 0.038823108508905954,\n \"acc_norm\": 0.536144578313253,\n \"acc_norm_stderr\": 0.038823108508905954\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.9005847953216374,\n \"acc_stderr\": 0.022949025579355044,\n \"acc_norm\": 0.9005847953216374,\n \"acc_norm_stderr\": 0.022949025579355044\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.42472460220318237,\n \"mc1_stderr\": 0.01730400095716748,\n \"mc2\": 0.5738038140031454,\n \"mc2_stderr\": 0.015194846416368368\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.823993685872139,\n \"acc_stderr\": 0.010703090882320708\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.5375284306292646,\n \"acc_stderr\": 0.013733636059107757\n 
}\n}\n```", "repo_url": "https://huggingface.co/Swisslex/Mixtral-8x7b-DPO-v0.1", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2024_01_16T00_07_22.506947", "path": ["**/details_harness|arc:challenge|25_2024-01-16T00-07-22.506947.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2024-01-16T00-07-22.506947.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2024_01_16T00_07_22.506947", "path": ["**/details_harness|gsm8k|5_2024-01-16T00-07-22.506947.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2024-01-16T00-07-22.506947.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2024_01_16T00_07_22.506947", "path": ["**/details_harness|hellaswag|10_2024-01-16T00-07-22.506947.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2024-01-16T00-07-22.506947.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2024_01_16T00_07_22.506947", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-16T00-07-22.506947.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-16T00-07-22.506947.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-16T00-07-22.506947.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-16T00-07-22.506947.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-16T00-07-22.506947.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-16T00-07-22.506947.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-16T00-07-22.506947.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-16T00-07-22.506947.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-01-16T00-07-22.506947.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-16T00-07-22.506947.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-16T00-07-22.506947.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-16T00-07-22.506947.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-16T00-07-22.506947.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-16T00-07-22.506947.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-16T00-07-22.506947.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-16T00-07-22.506947.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-16T00-07-22.506947.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-16T00-07-22.506947.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-16T00-07-22.506947.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-16T00-07-22.506947.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-16T00-07-22.506947.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-16T00-07-22.506947.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-16T00-07-22.506947.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-16T00-07-22.506947.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-16T00-07-22.506947.parquet", 
"**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-16T00-07-22.506947.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-16T00-07-22.506947.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-16T00-07-22.506947.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-16T00-07-22.506947.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-16T00-07-22.506947.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-16T00-07-22.506947.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-16T00-07-22.506947.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-16T00-07-22.506947.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-16T00-07-22.506947.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-16T00-07-22.506947.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-16T00-07-22.506947.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-16T00-07-22.506947.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-16T00-07-22.506947.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-16T00-07-22.506947.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-16T00-07-22.506947.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-16T00-07-22.506947.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-16T00-07-22.506947.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-16T00-07-22.506947.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-16T00-07-22.506947.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-16T00-07-22.506947.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-16T00-07-22.506947.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-16T00-07-22.506947.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-16T00-07-22.506947.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-16T00-07-22.506947.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-16T00-07-22.506947.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-16T00-07-22.506947.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-16T00-07-22.506947.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-16T00-07-22.506947.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-16T00-07-22.506947.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-16T00-07-22.506947.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-16T00-07-22.506947.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-16T00-07-22.506947.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-16T00-07-22.506947.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-16T00-07-22.506947.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-16T00-07-22.506947.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-16T00-07-22.506947.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-16T00-07-22.506947.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-16T00-07-22.506947.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-16T00-07-22.506947.parquet", 
"**/details_harness|hendrycksTest-college_computer_science|5_2024-01-16T00-07-22.506947.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-01-16T00-07-22.506947.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-16T00-07-22.506947.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-16T00-07-22.506947.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-16T00-07-22.506947.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-16T00-07-22.506947.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-16T00-07-22.506947.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-16T00-07-22.506947.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-16T00-07-22.506947.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-16T00-07-22.506947.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-16T00-07-22.506947.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-16T00-07-22.506947.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-16T00-07-22.506947.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-16T00-07-22.506947.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-16T00-07-22.506947.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-16T00-07-22.506947.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-16T00-07-22.506947.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-16T00-07-22.506947.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-16T00-07-22.506947.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-16T00-07-22.506947.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-16T00-07-22.506947.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-16T00-07-22.506947.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-16T00-07-22.506947.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-16T00-07-22.506947.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-16T00-07-22.506947.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-16T00-07-22.506947.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-16T00-07-22.506947.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-16T00-07-22.506947.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-16T00-07-22.506947.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-16T00-07-22.506947.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-16T00-07-22.506947.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-16T00-07-22.506947.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-16T00-07-22.506947.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-16T00-07-22.506947.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-16T00-07-22.506947.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-16T00-07-22.506947.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-16T00-07-22.506947.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-16T00-07-22.506947.parquet", 
"**/details_harness|hendrycksTest-philosophy|5_2024-01-16T00-07-22.506947.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-16T00-07-22.506947.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-16T00-07-22.506947.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-16T00-07-22.506947.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-16T00-07-22.506947.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-16T00-07-22.506947.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-16T00-07-22.506947.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-16T00-07-22.506947.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-16T00-07-22.506947.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-16T00-07-22.506947.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-16T00-07-22.506947.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-16T00-07-22.506947.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2024_01_16T00_07_22.506947", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-16T00-07-22.506947.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-16T00-07-22.506947.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2024_01_16T00_07_22.506947", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-16T00-07-22.506947.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-16T00-07-22.506947.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2024_01_16T00_07_22.506947", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-16T00-07-22.506947.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-16T00-07-22.506947.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2024_01_16T00_07_22.506947", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-16T00-07-22.506947.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-16T00-07-22.506947.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2024_01_16T00_07_22.506947", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-16T00-07-22.506947.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-16T00-07-22.506947.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2024_01_16T00_07_22.506947", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-16T00-07-22.506947.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-16T00-07-22.506947.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2024_01_16T00_07_22.506947", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-16T00-07-22.506947.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-16T00-07-22.506947.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2024_01_16T00_07_22.506947", "path": 
["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-16T00-07-22.506947.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-16T00-07-22.506947.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2024_01_16T00_07_22.506947", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-16T00-07-22.506947.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-16T00-07-22.506947.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2024_01_16T00_07_22.506947", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-16T00-07-22.506947.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-16T00-07-22.506947.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2024_01_16T00_07_22.506947", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-16T00-07-22.506947.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-16T00-07-22.506947.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2024_01_16T00_07_22.506947", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-16T00-07-22.506947.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-16T00-07-22.506947.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2024_01_16T00_07_22.506947", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-16T00-07-22.506947.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-16T00-07-22.506947.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2024_01_16T00_07_22.506947", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-16T00-07-22.506947.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-16T00-07-22.506947.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2024_01_16T00_07_22.506947", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-16T00-07-22.506947.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-16T00-07-22.506947.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2024_01_16T00_07_22.506947", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-16T00-07-22.506947.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-16T00-07-22.506947.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2024_01_16T00_07_22.506947", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-16T00-07-22.506947.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-16T00-07-22.506947.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2024_01_16T00_07_22.506947", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-16T00-07-22.506947.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-global_facts|5_2024-01-16T00-07-22.506947.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2024_01_16T00_07_22.506947", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-16T00-07-22.506947.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-16T00-07-22.506947.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2024_01_16T00_07_22.506947", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-16T00-07-22.506947.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-16T00-07-22.506947.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2024_01_16T00_07_22.506947", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-16T00-07-22.506947.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-16T00-07-22.506947.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2024_01_16T00_07_22.506947", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-16T00-07-22.506947.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-16T00-07-22.506947.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2024_01_16T00_07_22.506947", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-16T00-07-22.506947.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-16T00-07-22.506947.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2024_01_16T00_07_22.506947", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-16T00-07-22.506947.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-16T00-07-22.506947.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2024_01_16T00_07_22.506947", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-16T00-07-22.506947.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-16T00-07-22.506947.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2024_01_16T00_07_22.506947", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-16T00-07-22.506947.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-16T00-07-22.506947.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2024_01_16T00_07_22.506947", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-16T00-07-22.506947.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-16T00-07-22.506947.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2024_01_16T00_07_22.506947", "path": 
["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-16T00-07-22.506947.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-16T00-07-22.506947.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2024_01_16T00_07_22.506947", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-16T00-07-22.506947.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-16T00-07-22.506947.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2024_01_16T00_07_22.506947", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-16T00-07-22.506947.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-16T00-07-22.506947.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2024_01_16T00_07_22.506947", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-16T00-07-22.506947.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-16T00-07-22.506947.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2024_01_16T00_07_22.506947", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-16T00-07-22.506947.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-16T00-07-22.506947.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2024_01_16T00_07_22.506947", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-16T00-07-22.506947.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-16T00-07-22.506947.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2024_01_16T00_07_22.506947", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-16T00-07-22.506947.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-16T00-07-22.506947.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2024_01_16T00_07_22.506947", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-16T00-07-22.506947.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-16T00-07-22.506947.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2024_01_16T00_07_22.506947", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-16T00-07-22.506947.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-16T00-07-22.506947.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2024_01_16T00_07_22.506947", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-16T00-07-22.506947.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-16T00-07-22.506947.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2024_01_16T00_07_22.506947", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-16T00-07-22.506947.parquet"]}, 
{"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-16T00-07-22.506947.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2024_01_16T00_07_22.506947", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-16T00-07-22.506947.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-16T00-07-22.506947.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2024_01_16T00_07_22.506947", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-16T00-07-22.506947.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-16T00-07-22.506947.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2024_01_16T00_07_22.506947", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-16T00-07-22.506947.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-16T00-07-22.506947.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2024_01_16T00_07_22.506947", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-16T00-07-22.506947.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-16T00-07-22.506947.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2024_01_16T00_07_22.506947", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-16T00-07-22.506947.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-16T00-07-22.506947.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2024_01_16T00_07_22.506947", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-16T00-07-22.506947.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-16T00-07-22.506947.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2024_01_16T00_07_22.506947", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-16T00-07-22.506947.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-16T00-07-22.506947.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2024_01_16T00_07_22.506947", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-16T00-07-22.506947.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-16T00-07-22.506947.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2024_01_16T00_07_22.506947", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-16T00-07-22.506947.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-16T00-07-22.506947.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2024_01_16T00_07_22.506947", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-16T00-07-22.506947.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-16T00-07-22.506947.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2024_01_16T00_07_22.506947", "path": 
["**/details_harness|hendrycksTest-professional_law|5_2024-01-16T00-07-22.506947.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-01-16T00-07-22.506947.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2024_01_16T00_07_22.506947", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-16T00-07-22.506947.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-16T00-07-22.506947.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2024_01_16T00_07_22.506947", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-16T00-07-22.506947.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-16T00-07-22.506947.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2024_01_16T00_07_22.506947", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-16T00-07-22.506947.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-16T00-07-22.506947.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2024_01_16T00_07_22.506947", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-16T00-07-22.506947.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-16T00-07-22.506947.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2024_01_16T00_07_22.506947", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-16T00-07-22.506947.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-16T00-07-22.506947.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2024_01_16T00_07_22.506947", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-16T00-07-22.506947.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-16T00-07-22.506947.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2024_01_16T00_07_22.506947", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-16T00-07-22.506947.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-16T00-07-22.506947.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2024_01_16T00_07_22.506947", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-16T00-07-22.506947.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-16T00-07-22.506947.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2024_01_16T00_07_22.506947", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-16T00-07-22.506947.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-16T00-07-22.506947.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2024_01_16T00_07_22.506947", "path": ["**/details_harness|winogrande|5_2024-01-16T00-07-22.506947.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2024-01-16T00-07-22.506947.parquet"]}]}, {"config_name": "results", "data_files": [{"split": 
"2024_01_16T00_07_22.506947", "path": ["results_2024-01-16T00-07-22.506947.parquet"]}, {"split": "latest", "path": ["results_2024-01-16T00-07-22.506947.parquet"]}]}]} | 2024-01-16T00:10:07+00:00 | [] | [] | TAGS
# Dataset Card for Evaluation run of mavihsrr/GetCode-slerp
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [mavihsrr/GetCode-slerp](https://huggingface.co/mavihsrr/GetCode-slerp) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.

The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).

To load the details from a run, you can, for instance, do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_mavihsrr__GetCode-slerp",
"harness_winogrande_5",
split="train")
```
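If you only need the aggregated scores rather than the per-sample details, a minimal sketch (relying on the "results" configuration and the "latest" split described above) is:

```python
from datasets import load_dataset

# Aggregated metrics for this model; the "latest" split always points
# to the most recent evaluation run.
results = load_dataset(
    "open-llm-leaderboard/details_mavihsrr__GetCode-slerp",
    "results",
    split="latest",
)
print(results[0])  # one row holding the aggregated scores
```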
## Latest results
These are the [latest results from run 2024-01-16T00:21:39.795318](https://huggingface.co/datasets/open-llm-leaderboard/details_mavihsrr__GetCode-slerp/blob/main/results_2024-01-16T00-21-39.795318.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.23210164902672914,
"acc_stderr": 0.029927744191797015,
"acc_norm": 0.23227062322049502,
"acc_norm_stderr": 0.030723532683480458,
"mc1": 0.24357405140758873,
"mc1_stderr": 0.015026354824910782,
"mc2": 0.4977872683391919,
"mc2_stderr": 0.016733548639246566
},
"harness|arc:challenge|25": {
"acc": 0.20392491467576793,
"acc_stderr": 0.011774262478702247,
"acc_norm": 0.26535836177474403,
"acc_norm_stderr": 0.012902554762313964
},
"harness|hellaswag|10": {
"acc": 0.2599083847839076,
"acc_stderr": 0.00437687761923412,
"acc_norm": 0.26199960167297354,
"acc_norm_stderr": 0.004388237557526723
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.22,
"acc_stderr": 0.04163331998932268,
"acc_norm": 0.22,
"acc_norm_stderr": 0.04163331998932268
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.18518518518518517,
"acc_stderr": 0.03355677216313142,
"acc_norm": 0.18518518518518517,
"acc_norm_stderr": 0.03355677216313142
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.17763157894736842,
"acc_stderr": 0.031103182383123398,
"acc_norm": 0.17763157894736842,
"acc_norm_stderr": 0.031103182383123398
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.3,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.3,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.21509433962264152,
"acc_stderr": 0.02528839450289137,
"acc_norm": 0.21509433962264152,
"acc_norm_stderr": 0.02528839450289137
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.2569444444444444,
"acc_stderr": 0.03653946969442099,
"acc_norm": 0.2569444444444444,
"acc_norm_stderr": 0.03653946969442099
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.2,
"acc_stderr": 0.04020151261036845,
"acc_norm": 0.2,
"acc_norm_stderr": 0.04020151261036845
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.26,
"acc_stderr": 0.0440844002276808,
"acc_norm": 0.26,
"acc_norm_stderr": 0.0440844002276808
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.21,
"acc_stderr": 0.040936018074033256,
"acc_norm": 0.21,
"acc_norm_stderr": 0.040936018074033256
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.20809248554913296,
"acc_stderr": 0.030952890217749874,
"acc_norm": 0.20809248554913296,
"acc_norm_stderr": 0.030952890217749874
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.21568627450980393,
"acc_stderr": 0.04092563958237654,
"acc_norm": 0.21568627450980393,
"acc_norm_stderr": 0.04092563958237654
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.28,
"acc_stderr": 0.045126085985421276,
"acc_norm": 0.28,
"acc_norm_stderr": 0.045126085985421276
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.26382978723404255,
"acc_stderr": 0.028809989854102973,
"acc_norm": 0.26382978723404255,
"acc_norm_stderr": 0.028809989854102973
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.23684210526315788,
"acc_stderr": 0.039994238792813365,
"acc_norm": 0.23684210526315788,
"acc_norm_stderr": 0.039994238792813365
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.2413793103448276,
"acc_stderr": 0.03565998174135302,
"acc_norm": 0.2413793103448276,
"acc_norm_stderr": 0.03565998174135302
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.20899470899470898,
"acc_stderr": 0.02094048156533486,
"acc_norm": 0.20899470899470898,
"acc_norm_stderr": 0.02094048156533486
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.2857142857142857,
"acc_stderr": 0.04040610178208841,
"acc_norm": 0.2857142857142857,
"acc_norm_stderr": 0.04040610178208841
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.18,
"acc_stderr": 0.038612291966536934,
"acc_norm": 0.18,
"acc_norm_stderr": 0.038612291966536934
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.1774193548387097,
"acc_stderr": 0.02173254068932927,
"acc_norm": 0.1774193548387097,
"acc_norm_stderr": 0.02173254068932927
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.15270935960591134,
"acc_stderr": 0.02530890453938063,
"acc_norm": 0.15270935960591134,
"acc_norm_stderr": 0.02530890453938063
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.25,
"acc_stderr": 0.04351941398892446,
"acc_norm": 0.25,
"acc_norm_stderr": 0.04351941398892446
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.21818181818181817,
"acc_stderr": 0.03225078108306289,
"acc_norm": 0.21818181818181817,
"acc_norm_stderr": 0.03225078108306289
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.17676767676767677,
"acc_stderr": 0.027178752639044915,
"acc_norm": 0.17676767676767677,
"acc_norm_stderr": 0.027178752639044915
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.19689119170984457,
"acc_stderr": 0.028697873971860664,
"acc_norm": 0.19689119170984457,
"acc_norm_stderr": 0.028697873971860664
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.20256410256410257,
"acc_stderr": 0.020377660970371372,
"acc_norm": 0.20256410256410257,
"acc_norm_stderr": 0.020377660970371372
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.2111111111111111,
"acc_stderr": 0.024882116857655075,
"acc_norm": 0.2111111111111111,
"acc_norm_stderr": 0.024882116857655075
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.21008403361344538,
"acc_stderr": 0.026461398717471874,
"acc_norm": 0.21008403361344538,
"acc_norm_stderr": 0.026461398717471874
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.1986754966887417,
"acc_stderr": 0.03257847384436776,
"acc_norm": 0.1986754966887417,
"acc_norm_stderr": 0.03257847384436776
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.1926605504587156,
"acc_stderr": 0.016909276884936094,
"acc_norm": 0.1926605504587156,
"acc_norm_stderr": 0.016909276884936094
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.1527777777777778,
"acc_stderr": 0.024536326026134224,
"acc_norm": 0.1527777777777778,
"acc_norm_stderr": 0.024536326026134224
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.25,
"acc_stderr": 0.03039153369274154,
"acc_norm": 0.25,
"acc_norm_stderr": 0.03039153369274154
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.270042194092827,
"acc_stderr": 0.028900721906293426,
"acc_norm": 0.270042194092827,
"acc_norm_stderr": 0.028900721906293426
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.31390134529147984,
"acc_stderr": 0.031146796482972465,
"acc_norm": 0.31390134529147984,
"acc_norm_stderr": 0.031146796482972465
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.2595419847328244,
"acc_stderr": 0.03844876139785271,
"acc_norm": 0.2595419847328244,
"acc_norm_stderr": 0.03844876139785271
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.2396694214876033,
"acc_stderr": 0.03896878985070417,
"acc_norm": 0.2396694214876033,
"acc_norm_stderr": 0.03896878985070417
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.25925925925925924,
"acc_stderr": 0.042365112580946336,
"acc_norm": 0.25925925925925924,
"acc_norm_stderr": 0.042365112580946336
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.22085889570552147,
"acc_stderr": 0.032591773927421776,
"acc_norm": 0.22085889570552147,
"acc_norm_stderr": 0.032591773927421776
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.3125,
"acc_stderr": 0.043994650575715215,
"acc_norm": 0.3125,
"acc_norm_stderr": 0.043994650575715215
},
"harness|hendrycksTest-management|5": {
"acc": 0.17475728155339806,
"acc_stderr": 0.037601780060266224,
"acc_norm": 0.17475728155339806,
"acc_norm_stderr": 0.037601780060266224
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.2905982905982906,
"acc_stderr": 0.02974504857267404,
"acc_norm": 0.2905982905982906,
"acc_norm_stderr": 0.02974504857267404
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.3,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.3,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.23754789272030652,
"acc_stderr": 0.015218733046150193,
"acc_norm": 0.23754789272030652,
"acc_norm_stderr": 0.015218733046150193
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.24855491329479767,
"acc_stderr": 0.023267528432100174,
"acc_norm": 0.24855491329479767,
"acc_norm_stderr": 0.023267528432100174
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.23798882681564246,
"acc_stderr": 0.014242630070574915,
"acc_norm": 0.23798882681564246,
"acc_norm_stderr": 0.014242630070574915
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.22549019607843138,
"acc_stderr": 0.023929155517351284,
"acc_norm": 0.22549019607843138,
"acc_norm_stderr": 0.023929155517351284
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.1864951768488746,
"acc_stderr": 0.02212243977248077,
"acc_norm": 0.1864951768488746,
"acc_norm_stderr": 0.02212243977248077
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.21604938271604937,
"acc_stderr": 0.022899162918445806,
"acc_norm": 0.21604938271604937,
"acc_norm_stderr": 0.022899162918445806
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.23404255319148937,
"acc_stderr": 0.025257861359432417,
"acc_norm": 0.23404255319148937,
"acc_norm_stderr": 0.025257861359432417
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.2457627118644068,
"acc_stderr": 0.010996156635142692,
"acc_norm": 0.2457627118644068,
"acc_norm_stderr": 0.010996156635142692
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.18382352941176472,
"acc_stderr": 0.023529242185193106,
"acc_norm": 0.18382352941176472,
"acc_norm_stderr": 0.023529242185193106
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.25,
"acc_stderr": 0.01751781884501444,
"acc_norm": 0.25,
"acc_norm_stderr": 0.01751781884501444
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.21818181818181817,
"acc_stderr": 0.03955932861795833,
"acc_norm": 0.21818181818181817,
"acc_norm_stderr": 0.03955932861795833
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.18775510204081633,
"acc_stderr": 0.02500025603954621,
"acc_norm": 0.18775510204081633,
"acc_norm_stderr": 0.02500025603954621
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.24378109452736318,
"acc_stderr": 0.03036049015401465,
"acc_norm": 0.24378109452736318,
"acc_norm_stderr": 0.03036049015401465
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.28,
"acc_stderr": 0.04512608598542128,
"acc_norm": 0.28,
"acc_norm_stderr": 0.04512608598542128
},
"harness|hendrycksTest-virology|5": {
"acc": 0.28313253012048195,
"acc_stderr": 0.03507295431370518,
"acc_norm": 0.28313253012048195,
"acc_norm_stderr": 0.03507295431370518
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.3216374269005848,
"acc_stderr": 0.03582529442573122,
"acc_norm": 0.3216374269005848,
"acc_norm_stderr": 0.03582529442573122
},
"harness|truthfulqa:mc|0": {
"mc1": 0.24357405140758873,
"mc1_stderr": 0.015026354824910782,
"mc2": 0.4977872683391919,
"mc2_stderr": 0.016733548639246566
},
"harness|winogrande|5": {
"acc": 0.5177584846093133,
"acc_stderr": 0.014043619596174964
},
"harness|gsm8k|5": {
"acc": 0.0,
"acc_stderr": 0.0
}
}
```
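If you want to slice these numbers yourself, the following is a minimal sketch (assuming `results` holds the dictionary printed above, e.g. loaded from the linked JSON file) that extracts and ranks the per-subject MMLU accuracies:

```python
# `results` is assumed to be the dictionary shown above; only a stub is
# included here so the snippet runs on its own.
results = {
    "harness|hendrycksTest-abstract_algebra|5": {"acc": 0.22},
    "harness|hendrycksTest-anatomy|5": {"acc": 0.18518518518518517},
    # ... remaining tasks elided for brevity ...
}

# Keep only the MMLU (hendrycksTest) entries, stripping the harness prefix
# and the few-shot suffix from each key.
mmlu = {
    name.split("-", 1)[1].split("|")[0]: scores["acc"]
    for name, scores in results.items()
    if name.startswith("harness|hendrycksTest-")
}

for task, acc in sorted(mmlu.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{task:45s} {acc:.3f}")
```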
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases, and limitations of the dataset. More information is needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | open-llm-leaderboard/details_mavihsrr__GetCode-slerp | [
"region:us"
] | 2024-01-16T00:23:58+00:00 | {"pretty_name": "Evaluation run of mavihsrr/GetCode-slerp", "dataset_summary": "Dataset automatically created during the evaluation run of model [mavihsrr/GetCode-slerp](https://huggingface.co/mavihsrr/GetCode-slerp) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_mavihsrr__GetCode-slerp\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2024-01-16T00:21:39.795318](https://huggingface.co/datasets/open-llm-leaderboard/details_mavihsrr__GetCode-slerp/blob/main/results_2024-01-16T00-21-39.795318.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.23210164902672914,\n \"acc_stderr\": 0.029927744191797015,\n \"acc_norm\": 0.23227062322049502,\n \"acc_norm_stderr\": 0.030723532683480458,\n \"mc1\": 0.24357405140758873,\n \"mc1_stderr\": 0.015026354824910782,\n \"mc2\": 0.4977872683391919,\n \"mc2_stderr\": 0.016733548639246566\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.20392491467576793,\n \"acc_stderr\": 0.011774262478702247,\n \"acc_norm\": 0.26535836177474403,\n \"acc_norm_stderr\": 0.012902554762313964\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.2599083847839076,\n \"acc_stderr\": 0.00437687761923412,\n \"acc_norm\": 0.26199960167297354,\n \"acc_norm_stderr\": 0.004388237557526723\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.22,\n \"acc_stderr\": 0.04163331998932268,\n \"acc_norm\": 0.22,\n \"acc_norm_stderr\": 0.04163331998932268\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.18518518518518517,\n \"acc_stderr\": 0.03355677216313142,\n \"acc_norm\": 0.18518518518518517,\n \"acc_norm_stderr\": 0.03355677216313142\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.17763157894736842,\n \"acc_stderr\": 0.031103182383123398,\n \"acc_norm\": 0.17763157894736842,\n \"acc_norm_stderr\": 0.031103182383123398\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.3,\n \"acc_stderr\": 0.046056618647183814,\n \"acc_norm\": 0.3,\n \"acc_norm_stderr\": 0.046056618647183814\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.21509433962264152,\n \"acc_stderr\": 0.02528839450289137,\n \"acc_norm\": 0.21509433962264152,\n \"acc_norm_stderr\": 0.02528839450289137\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.2569444444444444,\n \"acc_stderr\": 0.03653946969442099,\n \"acc_norm\": 0.2569444444444444,\n \"acc_norm_stderr\": 0.03653946969442099\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.2,\n \"acc_stderr\": 0.04020151261036845,\n 
\"acc_norm\": 0.2,\n \"acc_norm_stderr\": 0.04020151261036845\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.26,\n \"acc_stderr\": 0.0440844002276808,\n \"acc_norm\": 0.26,\n \"acc_norm_stderr\": 0.0440844002276808\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.21,\n \"acc_stderr\": 0.040936018074033256,\n \"acc_norm\": 0.21,\n \"acc_norm_stderr\": 0.040936018074033256\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.20809248554913296,\n \"acc_stderr\": 0.030952890217749874,\n \"acc_norm\": 0.20809248554913296,\n \"acc_norm_stderr\": 0.030952890217749874\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.21568627450980393,\n \"acc_stderr\": 0.04092563958237654,\n \"acc_norm\": 0.21568627450980393,\n \"acc_norm_stderr\": 0.04092563958237654\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.28,\n \"acc_stderr\": 0.045126085985421276,\n \"acc_norm\": 0.28,\n \"acc_norm_stderr\": 0.045126085985421276\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.26382978723404255,\n \"acc_stderr\": 0.028809989854102973,\n \"acc_norm\": 0.26382978723404255,\n \"acc_norm_stderr\": 0.028809989854102973\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.23684210526315788,\n \"acc_stderr\": 0.039994238792813365,\n \"acc_norm\": 0.23684210526315788,\n \"acc_norm_stderr\": 0.039994238792813365\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.2413793103448276,\n \"acc_stderr\": 0.03565998174135302,\n \"acc_norm\": 0.2413793103448276,\n \"acc_norm_stderr\": 0.03565998174135302\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.20899470899470898,\n \"acc_stderr\": 0.02094048156533486,\n \"acc_norm\": 0.20899470899470898,\n \"acc_norm_stderr\": 0.02094048156533486\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.2857142857142857,\n \"acc_stderr\": 0.04040610178208841,\n \"acc_norm\": 0.2857142857142857,\n \"acc_norm_stderr\": 0.04040610178208841\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.18,\n \"acc_stderr\": 0.038612291966536934,\n \"acc_norm\": 0.18,\n \"acc_norm_stderr\": 0.038612291966536934\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.1774193548387097,\n \"acc_stderr\": 0.02173254068932927,\n \"acc_norm\": 0.1774193548387097,\n \"acc_norm_stderr\": 0.02173254068932927\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.15270935960591134,\n \"acc_stderr\": 0.02530890453938063,\n \"acc_norm\": 0.15270935960591134,\n \"acc_norm_stderr\": 0.02530890453938063\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.25,\n \"acc_stderr\": 0.04351941398892446,\n \"acc_norm\": 0.25,\n \"acc_norm_stderr\": 0.04351941398892446\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.21818181818181817,\n \"acc_stderr\": 0.03225078108306289,\n \"acc_norm\": 0.21818181818181817,\n \"acc_norm_stderr\": 0.03225078108306289\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.17676767676767677,\n \"acc_stderr\": 0.027178752639044915,\n \"acc_norm\": 0.17676767676767677,\n \"acc_norm_stderr\": 0.027178752639044915\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.19689119170984457,\n \"acc_stderr\": 0.028697873971860664,\n \"acc_norm\": 0.19689119170984457,\n \"acc_norm_stderr\": 0.028697873971860664\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": 
{\n \"acc\": 0.20256410256410257,\n \"acc_stderr\": 0.020377660970371372,\n \"acc_norm\": 0.20256410256410257,\n \"acc_norm_stderr\": 0.020377660970371372\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.2111111111111111,\n \"acc_stderr\": 0.024882116857655075,\n \"acc_norm\": 0.2111111111111111,\n \"acc_norm_stderr\": 0.024882116857655075\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.21008403361344538,\n \"acc_stderr\": 0.026461398717471874,\n \"acc_norm\": 0.21008403361344538,\n \"acc_norm_stderr\": 0.026461398717471874\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.1986754966887417,\n \"acc_stderr\": 0.03257847384436776,\n \"acc_norm\": 0.1986754966887417,\n \"acc_norm_stderr\": 0.03257847384436776\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.1926605504587156,\n \"acc_stderr\": 0.016909276884936094,\n \"acc_norm\": 0.1926605504587156,\n \"acc_norm_stderr\": 0.016909276884936094\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.1527777777777778,\n \"acc_stderr\": 0.024536326026134224,\n \"acc_norm\": 0.1527777777777778,\n \"acc_norm_stderr\": 0.024536326026134224\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.25,\n \"acc_stderr\": 0.03039153369274154,\n \"acc_norm\": 0.25,\n \"acc_norm_stderr\": 0.03039153369274154\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.270042194092827,\n \"acc_stderr\": 0.028900721906293426,\n \"acc_norm\": 0.270042194092827,\n \"acc_norm_stderr\": 0.028900721906293426\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.31390134529147984,\n \"acc_stderr\": 0.031146796482972465,\n \"acc_norm\": 0.31390134529147984,\n \"acc_norm_stderr\": 0.031146796482972465\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.2595419847328244,\n \"acc_stderr\": 0.03844876139785271,\n \"acc_norm\": 0.2595419847328244,\n \"acc_norm_stderr\": 0.03844876139785271\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.2396694214876033,\n \"acc_stderr\": 0.03896878985070417,\n \"acc_norm\": 0.2396694214876033,\n \"acc_norm_stderr\": 0.03896878985070417\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.25925925925925924,\n \"acc_stderr\": 0.042365112580946336,\n \"acc_norm\": 0.25925925925925924,\n \"acc_norm_stderr\": 0.042365112580946336\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.22085889570552147,\n \"acc_stderr\": 0.032591773927421776,\n \"acc_norm\": 0.22085889570552147,\n \"acc_norm_stderr\": 0.032591773927421776\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.3125,\n \"acc_stderr\": 0.043994650575715215,\n \"acc_norm\": 0.3125,\n \"acc_norm_stderr\": 0.043994650575715215\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.17475728155339806,\n \"acc_stderr\": 0.037601780060266224,\n \"acc_norm\": 0.17475728155339806,\n \"acc_norm_stderr\": 0.037601780060266224\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.2905982905982906,\n \"acc_stderr\": 0.02974504857267404,\n \"acc_norm\": 0.2905982905982906,\n \"acc_norm_stderr\": 0.02974504857267404\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.3,\n \"acc_stderr\": 0.046056618647183814,\n \"acc_norm\": 0.3,\n \"acc_norm_stderr\": 0.046056618647183814\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.23754789272030652,\n \"acc_stderr\": 0.015218733046150193,\n \"acc_norm\": 
0.23754789272030652,\n \"acc_norm_stderr\": 0.015218733046150193\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.24855491329479767,\n \"acc_stderr\": 0.023267528432100174,\n \"acc_norm\": 0.24855491329479767,\n \"acc_norm_stderr\": 0.023267528432100174\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.23798882681564246,\n \"acc_stderr\": 0.014242630070574915,\n \"acc_norm\": 0.23798882681564246,\n \"acc_norm_stderr\": 0.014242630070574915\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.22549019607843138,\n \"acc_stderr\": 0.023929155517351284,\n \"acc_norm\": 0.22549019607843138,\n \"acc_norm_stderr\": 0.023929155517351284\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.1864951768488746,\n \"acc_stderr\": 0.02212243977248077,\n \"acc_norm\": 0.1864951768488746,\n \"acc_norm_stderr\": 0.02212243977248077\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.21604938271604937,\n \"acc_stderr\": 0.022899162918445806,\n \"acc_norm\": 0.21604938271604937,\n \"acc_norm_stderr\": 0.022899162918445806\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.23404255319148937,\n \"acc_stderr\": 0.025257861359432417,\n \"acc_norm\": 0.23404255319148937,\n \"acc_norm_stderr\": 0.025257861359432417\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.2457627118644068,\n \"acc_stderr\": 0.010996156635142692,\n \"acc_norm\": 0.2457627118644068,\n \"acc_norm_stderr\": 0.010996156635142692\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.18382352941176472,\n \"acc_stderr\": 0.023529242185193106,\n \"acc_norm\": 0.18382352941176472,\n \"acc_norm_stderr\": 0.023529242185193106\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.25,\n \"acc_stderr\": 0.01751781884501444,\n \"acc_norm\": 0.25,\n \"acc_norm_stderr\": 0.01751781884501444\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.21818181818181817,\n \"acc_stderr\": 0.03955932861795833,\n \"acc_norm\": 0.21818181818181817,\n \"acc_norm_stderr\": 0.03955932861795833\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.18775510204081633,\n \"acc_stderr\": 0.02500025603954621,\n \"acc_norm\": 0.18775510204081633,\n \"acc_norm_stderr\": 0.02500025603954621\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.24378109452736318,\n \"acc_stderr\": 0.03036049015401465,\n \"acc_norm\": 0.24378109452736318,\n \"acc_norm_stderr\": 0.03036049015401465\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.28,\n \"acc_stderr\": 0.04512608598542128,\n \"acc_norm\": 0.28,\n \"acc_norm_stderr\": 0.04512608598542128\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.28313253012048195,\n \"acc_stderr\": 0.03507295431370518,\n \"acc_norm\": 0.28313253012048195,\n \"acc_norm_stderr\": 0.03507295431370518\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.3216374269005848,\n \"acc_stderr\": 0.03582529442573122,\n \"acc_norm\": 0.3216374269005848,\n \"acc_norm_stderr\": 0.03582529442573122\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.24357405140758873,\n \"mc1_stderr\": 0.015026354824910782,\n \"mc2\": 0.4977872683391919,\n \"mc2_stderr\": 0.016733548639246566\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.5177584846093133,\n \"acc_stderr\": 0.014043619596174964\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.0,\n \"acc_stderr\": 0.0\n }\n}\n```", "repo_url": "https://huggingface.co/mavihsrr/GetCode-slerp", "leaderboard_url": 
"https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2024_01_16T00_21_39.795318", "path": ["**/details_harness|arc:challenge|25_2024-01-16T00-21-39.795318.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2024-01-16T00-21-39.795318.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2024_01_16T00_21_39.795318", "path": ["**/details_harness|gsm8k|5_2024-01-16T00-21-39.795318.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2024-01-16T00-21-39.795318.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2024_01_16T00_21_39.795318", "path": ["**/details_harness|hellaswag|10_2024-01-16T00-21-39.795318.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2024-01-16T00-21-39.795318.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2024_01_16T00_21_39.795318", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-16T00-21-39.795318.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-16T00-21-39.795318.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-16T00-21-39.795318.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-16T00-21-39.795318.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-16T00-21-39.795318.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-16T00-21-39.795318.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-16T00-21-39.795318.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-16T00-21-39.795318.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-01-16T00-21-39.795318.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-16T00-21-39.795318.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-16T00-21-39.795318.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-16T00-21-39.795318.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-16T00-21-39.795318.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-16T00-21-39.795318.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-16T00-21-39.795318.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-16T00-21-39.795318.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-16T00-21-39.795318.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-16T00-21-39.795318.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-16T00-21-39.795318.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-16T00-21-39.795318.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-16T00-21-39.795318.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-16T00-21-39.795318.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-16T00-21-39.795318.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-16T00-21-39.795318.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-16T00-21-39.795318.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-16T00-21-39.795318.parquet", 
"**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-16T00-21-39.795318.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-16T00-21-39.795318.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-16T00-21-39.795318.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-16T00-21-39.795318.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-16T00-21-39.795318.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-16T00-21-39.795318.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-16T00-21-39.795318.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-16T00-21-39.795318.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-16T00-21-39.795318.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-16T00-21-39.795318.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-16T00-21-39.795318.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-16T00-21-39.795318.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-16T00-21-39.795318.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-16T00-21-39.795318.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-16T00-21-39.795318.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-16T00-21-39.795318.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-16T00-21-39.795318.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-16T00-21-39.795318.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-16T00-21-39.795318.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-16T00-21-39.795318.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-16T00-21-39.795318.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-16T00-21-39.795318.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-16T00-21-39.795318.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-16T00-21-39.795318.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-16T00-21-39.795318.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-16T00-21-39.795318.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-16T00-21-39.795318.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-16T00-21-39.795318.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-16T00-21-39.795318.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-16T00-21-39.795318.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-16T00-21-39.795318.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-16T00-21-39.795318.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-16T00-21-39.795318.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-16T00-21-39.795318.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-16T00-21-39.795318.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-16T00-21-39.795318.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-16T00-21-39.795318.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-16T00-21-39.795318.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-16T00-21-39.795318.parquet", 
"**/details_harness|hendrycksTest-college_mathematics|5_2024-01-16T00-21-39.795318.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-16T00-21-39.795318.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-16T00-21-39.795318.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-16T00-21-39.795318.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-16T00-21-39.795318.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-16T00-21-39.795318.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-16T00-21-39.795318.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-16T00-21-39.795318.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-16T00-21-39.795318.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-16T00-21-39.795318.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-16T00-21-39.795318.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-16T00-21-39.795318.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-16T00-21-39.795318.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-16T00-21-39.795318.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-16T00-21-39.795318.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-16T00-21-39.795318.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-16T00-21-39.795318.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-16T00-21-39.795318.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-16T00-21-39.795318.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-16T00-21-39.795318.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-16T00-21-39.795318.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-16T00-21-39.795318.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-16T00-21-39.795318.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-16T00-21-39.795318.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-16T00-21-39.795318.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-16T00-21-39.795318.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-16T00-21-39.795318.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-16T00-21-39.795318.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-16T00-21-39.795318.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-16T00-21-39.795318.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-16T00-21-39.795318.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-16T00-21-39.795318.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-16T00-21-39.795318.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-16T00-21-39.795318.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-16T00-21-39.795318.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-16T00-21-39.795318.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-16T00-21-39.795318.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-16T00-21-39.795318.parquet", 
"**/details_harness|hendrycksTest-prehistory|5_2024-01-16T00-21-39.795318.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-16T00-21-39.795318.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-16T00-21-39.795318.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-16T00-21-39.795318.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-16T00-21-39.795318.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-16T00-21-39.795318.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-16T00-21-39.795318.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-16T00-21-39.795318.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-16T00-21-39.795318.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-16T00-21-39.795318.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-16T00-21-39.795318.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2024_01_16T00_21_39.795318", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-16T00-21-39.795318.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-16T00-21-39.795318.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2024_01_16T00_21_39.795318", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-16T00-21-39.795318.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-16T00-21-39.795318.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2024_01_16T00_21_39.795318", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-16T00-21-39.795318.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-16T00-21-39.795318.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2024_01_16T00_21_39.795318", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-16T00-21-39.795318.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-16T00-21-39.795318.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2024_01_16T00_21_39.795318", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-16T00-21-39.795318.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-16T00-21-39.795318.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2024_01_16T00_21_39.795318", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-16T00-21-39.795318.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-16T00-21-39.795318.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2024_01_16T00_21_39.795318", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-16T00-21-39.795318.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-16T00-21-39.795318.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2024_01_16T00_21_39.795318", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-16T00-21-39.795318.parquet"]}, 
{"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-16T00-21-39.795318.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2024_01_16T00_21_39.795318", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-16T00-21-39.795318.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-16T00-21-39.795318.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2024_01_16T00_21_39.795318", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-16T00-21-39.795318.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-16T00-21-39.795318.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2024_01_16T00_21_39.795318", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-16T00-21-39.795318.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-16T00-21-39.795318.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2024_01_16T00_21_39.795318", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-16T00-21-39.795318.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-16T00-21-39.795318.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2024_01_16T00_21_39.795318", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-16T00-21-39.795318.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-16T00-21-39.795318.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2024_01_16T00_21_39.795318", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-16T00-21-39.795318.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-16T00-21-39.795318.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2024_01_16T00_21_39.795318", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-16T00-21-39.795318.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-16T00-21-39.795318.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2024_01_16T00_21_39.795318", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-16T00-21-39.795318.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-16T00-21-39.795318.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2024_01_16T00_21_39.795318", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-16T00-21-39.795318.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-16T00-21-39.795318.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2024_01_16T00_21_39.795318", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-16T00-21-39.795318.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-16T00-21-39.795318.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2024_01_16T00_21_39.795318", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-16T00-21-39.795318.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-16T00-21-39.795318.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2024_01_16T00_21_39.795318", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-16T00-21-39.795318.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-16T00-21-39.795318.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2024_01_16T00_21_39.795318", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-16T00-21-39.795318.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-16T00-21-39.795318.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2024_01_16T00_21_39.795318", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-16T00-21-39.795318.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-16T00-21-39.795318.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2024_01_16T00_21_39.795318", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-16T00-21-39.795318.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-16T00-21-39.795318.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2024_01_16T00_21_39.795318", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-16T00-21-39.795318.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-16T00-21-39.795318.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2024_01_16T00_21_39.795318", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-16T00-21-39.795318.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-16T00-21-39.795318.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2024_01_16T00_21_39.795318", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-16T00-21-39.795318.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-16T00-21-39.795318.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2024_01_16T00_21_39.795318", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-16T00-21-39.795318.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-16T00-21-39.795318.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2024_01_16T00_21_39.795318", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-16T00-21-39.795318.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-16T00-21-39.795318.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2024_01_16T00_21_39.795318", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-16T00-21-39.795318.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-16T00-21-39.795318.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2024_01_16T00_21_39.795318", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-16T00-21-39.795318.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-16T00-21-39.795318.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2024_01_16T00_21_39.795318", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-16T00-21-39.795318.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-16T00-21-39.795318.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2024_01_16T00_21_39.795318", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-16T00-21-39.795318.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-16T00-21-39.795318.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2024_01_16T00_21_39.795318", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-16T00-21-39.795318.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-16T00-21-39.795318.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2024_01_16T00_21_39.795318", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-16T00-21-39.795318.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-16T00-21-39.795318.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2024_01_16T00_21_39.795318", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-16T00-21-39.795318.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-16T00-21-39.795318.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2024_01_16T00_21_39.795318", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-16T00-21-39.795318.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-16T00-21-39.795318.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2024_01_16T00_21_39.795318", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-16T00-21-39.795318.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-16T00-21-39.795318.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2024_01_16T00_21_39.795318", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-16T00-21-39.795318.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-16T00-21-39.795318.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2024_01_16T00_21_39.795318", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-16T00-21-39.795318.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-16T00-21-39.795318.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2024_01_16T00_21_39.795318", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-16T00-21-39.795318.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-16T00-21-39.795318.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2024_01_16T00_21_39.795318", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-16T00-21-39.795318.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-16T00-21-39.795318.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2024_01_16T00_21_39.795318", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-16T00-21-39.795318.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-16T00-21-39.795318.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2024_01_16T00_21_39.795318", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-16T00-21-39.795318.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-16T00-21-39.795318.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2024_01_16T00_21_39.795318", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-16T00-21-39.795318.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-16T00-21-39.795318.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2024_01_16T00_21_39.795318", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-16T00-21-39.795318.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-16T00-21-39.795318.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2024_01_16T00_21_39.795318", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-16T00-21-39.795318.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-16T00-21-39.795318.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2024_01_16T00_21_39.795318", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-16T00-21-39.795318.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-16T00-21-39.795318.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2024_01_16T00_21_39.795318", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-16T00-21-39.795318.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-16T00-21-39.795318.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2024_01_16T00_21_39.795318", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-01-16T00-21-39.795318.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-professional_law|5_2024-01-16T00-21-39.795318.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2024_01_16T00_21_39.795318", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-16T00-21-39.795318.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-16T00-21-39.795318.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2024_01_16T00_21_39.795318", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-16T00-21-39.795318.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-16T00-21-39.795318.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2024_01_16T00_21_39.795318", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-16T00-21-39.795318.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-16T00-21-39.795318.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2024_01_16T00_21_39.795318", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-16T00-21-39.795318.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-16T00-21-39.795318.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2024_01_16T00_21_39.795318", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-16T00-21-39.795318.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-16T00-21-39.795318.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2024_01_16T00_21_39.795318", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-16T00-21-39.795318.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-16T00-21-39.795318.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2024_01_16T00_21_39.795318", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-16T00-21-39.795318.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-16T00-21-39.795318.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2024_01_16T00_21_39.795318", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-16T00-21-39.795318.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-16T00-21-39.795318.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2024_01_16T00_21_39.795318", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-16T00-21-39.795318.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-16T00-21-39.795318.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2024_01_16T00_21_39.795318", "path": ["**/details_harness|winogrande|5_2024-01-16T00-21-39.795318.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2024-01-16T00-21-39.795318.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2024_01_16T00_21_39.795318", "path": ["results_2024-01-16T00-21-39.795318.parquet"]}, {"split": "latest", "path": 
["results_2024-01-16T00-21-39.795318.parquet"]}]}]} | 2024-01-16T00:24:20+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of mavihsrr/GetCode-slerp
Dataset automatically created during the evaluation run of model mavihsrr/GetCode-slerp on the Open LLM Leaderboard.
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
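A minimal example, using the repo id and one of the config names taken verbatim from the repository metadata below:

```python
from datasets import load_dataset

# Config names follow the "harness_<task>_<n_shots>" pattern; the full list
# is in the "configs" field of the repository metadata below.
data = load_dataset(
    "open-llm-leaderboard/details_mavihsrr__GetCode-slerp",
    "harness_winogrande_5",
    split="train",
)
```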
## Latest results
These are the latest results from run 2024-01-16T00:21:39.795318 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each one in the results and in the "latest" split for each eval):
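The same aggregated numbers can be loaded from the "results" configuration, whose "latest" split points at the most recent run. A minimal sketch, assuming the configs listed in the repository metadata below:

```python
from datasets import load_dataset

# "results" stores the aggregated metrics of each run; "latest" aliases
# the most recent timestamped split (see the "configs" metadata below).
results = load_dataset(
    "open-llm-leaderboard/details_mavihsrr__GetCode-slerp",
    "results",
    split="latest",
)
```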
## Dataset Details
### Dataset Description
- Curated by:
- Funded by [optional]:
- Shared by [optional]:
- Language(s) (NLP):
- License:
### Dataset Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Out-of-Scope Use
## Dataset Structure
## Dataset Creation
### Curation Rationale
### Source Data
#### Data Collection and Processing
#### Who are the source data producers?
### Annotations [optional]
#### Annotation process
#### Who are the annotators?
#### Personal and Sensitive Information
## Bias, Risks, and Limitations
### Recommendations
Users should be made aware of the risks, biases, and limitations of the dataset. More information is needed for further recommendations.
## Citation [optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Dataset Card Authors [optional]
## Dataset Card Contact
69efeef0beecbbda4065825025ab8ceebd0e7743 |
# Dataset Card for Evaluation run of mlabonne/NeuralBeagle14-7B
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [mlabonne/NeuralBeagle14-7B](https://huggingface.co/mlabonne/NeuralBeagle14-7B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_mlabonne__NeuralBeagle14-7B",
"harness_winogrande_5",
split="train")
```
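To discover which of the 63 task configurations exist, or to load the aggregated "results" configuration directly, a sketch like the following should work (`get_dataset_config_names` is a standard helper in the `datasets` library; the "latest" split alias follows the split naming described above):

```python
from datasets import get_dataset_config_names, load_dataset

# List every available configuration (the 63 tasks plus "results").
configs = get_dataset_config_names(
    "open-llm-leaderboard/details_mlabonne__NeuralBeagle14-7B"
)
print(len(configs), configs[:5])

# Load the aggregated results; "latest" points to the most recent run.
results = load_dataset(
    "open-llm-leaderboard/details_mlabonne__NeuralBeagle14-7B",
    "results",
    split="latest",
)
```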
## Latest results
These are the [latest results from run 2024-01-16T01:08:26.815622](https://huggingface.co/datasets/open-llm-leaderboard/details_mlabonne__NeuralBeagle14-7B/blob/main/results_2024-01-16T01-08-26.815622.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each task in the "results" and "latest" splits for each eval):
```python
{
"all": {
"acc": 0.6515140207831388,
"acc_stderr": 0.03220550517246716,
"acc_norm": 0.6509542922384997,
"acc_norm_stderr": 0.03287465661696305,
"mc1": 0.572827417380661,
"mc1_stderr": 0.017316834410963915,
"mc2": 0.6992620055732494,
"mc2_stderr": 0.015067252053266866
},
"harness|arc:challenge|25": {
"acc": 0.7047781569965871,
"acc_stderr": 0.013329750293382316,
"acc_norm": 0.7295221843003413,
"acc_norm_stderr": 0.012980954547659556
},
"harness|hellaswag|10": {
"acc": 0.7173869747062338,
"acc_stderr": 0.00449349587200011,
"acc_norm": 0.8833897629954193,
"acc_norm_stderr": 0.0032029933469910595
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.34,
"acc_stderr": 0.04760952285695235,
"acc_norm": 0.34,
"acc_norm_stderr": 0.04760952285695235
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6222222222222222,
"acc_stderr": 0.04188307537595853,
"acc_norm": 0.6222222222222222,
"acc_norm_stderr": 0.04188307537595853
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.6973684210526315,
"acc_stderr": 0.03738520676119669,
"acc_norm": 0.6973684210526315,
"acc_norm_stderr": 0.03738520676119669
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.66,
"acc_stderr": 0.04760952285695238,
"acc_norm": 0.66,
"acc_norm_stderr": 0.04760952285695238
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.7094339622641509,
"acc_stderr": 0.027943219989337142,
"acc_norm": 0.7094339622641509,
"acc_norm_stderr": 0.027943219989337142
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7638888888888888,
"acc_stderr": 0.03551446610810826,
"acc_norm": 0.7638888888888888,
"acc_norm_stderr": 0.03551446610810826
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.49,
"acc_stderr": 0.05024183937956912,
"acc_norm": 0.49,
"acc_norm_stderr": 0.05024183937956912
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.53,
"acc_stderr": 0.050161355804659205,
"acc_norm": 0.53,
"acc_norm_stderr": 0.050161355804659205
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.33,
"acc_stderr": 0.04725815626252604,
"acc_norm": 0.33,
"acc_norm_stderr": 0.04725815626252604
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6647398843930635,
"acc_stderr": 0.03599586301247078,
"acc_norm": 0.6647398843930635,
"acc_norm_stderr": 0.03599586301247078
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.4117647058823529,
"acc_stderr": 0.048971049527263666,
"acc_norm": 0.4117647058823529,
"acc_norm_stderr": 0.048971049527263666
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.76,
"acc_stderr": 0.04292346959909283,
"acc_norm": 0.76,
"acc_norm_stderr": 0.04292346959909283
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5531914893617021,
"acc_stderr": 0.0325005368436584,
"acc_norm": 0.5531914893617021,
"acc_norm_stderr": 0.0325005368436584
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.47368421052631576,
"acc_stderr": 0.04697085136647863,
"acc_norm": 0.47368421052631576,
"acc_norm_stderr": 0.04697085136647863
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5586206896551724,
"acc_stderr": 0.04137931034482757,
"acc_norm": 0.5586206896551724,
"acc_norm_stderr": 0.04137931034482757
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.4074074074074074,
"acc_stderr": 0.02530590624159063,
"acc_norm": 0.4074074074074074,
"acc_norm_stderr": 0.02530590624159063
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.48412698412698413,
"acc_stderr": 0.04469881854072606,
"acc_norm": 0.48412698412698413,
"acc_norm_stderr": 0.04469881854072606
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.35,
"acc_stderr": 0.047937248544110196,
"acc_norm": 0.35,
"acc_norm_stderr": 0.047937248544110196
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7903225806451613,
"acc_stderr": 0.023157879349083525,
"acc_norm": 0.7903225806451613,
"acc_norm_stderr": 0.023157879349083525
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.5123152709359606,
"acc_stderr": 0.035169204442208966,
"acc_norm": 0.5123152709359606,
"acc_norm_stderr": 0.035169204442208966
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.69,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.69,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7757575757575758,
"acc_stderr": 0.03256866661681102,
"acc_norm": 0.7757575757575758,
"acc_norm_stderr": 0.03256866661681102
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7828282828282829,
"acc_stderr": 0.02937661648494563,
"acc_norm": 0.7828282828282829,
"acc_norm_stderr": 0.02937661648494563
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.9067357512953368,
"acc_stderr": 0.02098685459328973,
"acc_norm": 0.9067357512953368,
"acc_norm_stderr": 0.02098685459328973
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.676923076923077,
"acc_stderr": 0.02371088850197057,
"acc_norm": 0.676923076923077,
"acc_norm_stderr": 0.02371088850197057
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.34074074074074073,
"acc_stderr": 0.02889774874113115,
"acc_norm": 0.34074074074074073,
"acc_norm_stderr": 0.02889774874113115
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6638655462184874,
"acc_stderr": 0.03068473711513536,
"acc_norm": 0.6638655462184874,
"acc_norm_stderr": 0.03068473711513536
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.3576158940397351,
"acc_stderr": 0.03913453431177258,
"acc_norm": 0.3576158940397351,
"acc_norm_stderr": 0.03913453431177258
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8385321100917431,
"acc_stderr": 0.015776239256163248,
"acc_norm": 0.8385321100917431,
"acc_norm_stderr": 0.015776239256163248
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5324074074074074,
"acc_stderr": 0.03402801581358966,
"acc_norm": 0.5324074074074074,
"acc_norm_stderr": 0.03402801581358966
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.8431372549019608,
"acc_stderr": 0.02552472232455334,
"acc_norm": 0.8431372549019608,
"acc_norm_stderr": 0.02552472232455334
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.7805907172995781,
"acc_stderr": 0.026939106581553945,
"acc_norm": 0.7805907172995781,
"acc_norm_stderr": 0.026939106581553945
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6995515695067265,
"acc_stderr": 0.03076935200822914,
"acc_norm": 0.6995515695067265,
"acc_norm_stderr": 0.03076935200822914
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7709923664122137,
"acc_stderr": 0.036853466317118506,
"acc_norm": 0.7709923664122137,
"acc_norm_stderr": 0.036853466317118506
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.768595041322314,
"acc_stderr": 0.03849856098794088,
"acc_norm": 0.768595041322314,
"acc_norm_stderr": 0.03849856098794088
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7685185185185185,
"acc_stderr": 0.04077494709252627,
"acc_norm": 0.7685185185185185,
"acc_norm_stderr": 0.04077494709252627
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.754601226993865,
"acc_stderr": 0.03380939813943354,
"acc_norm": 0.754601226993865,
"acc_norm_stderr": 0.03380939813943354
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.44642857142857145,
"acc_stderr": 0.04718471485219588,
"acc_norm": 0.44642857142857145,
"acc_norm_stderr": 0.04718471485219588
},
"harness|hendrycksTest-management|5": {
"acc": 0.7766990291262136,
"acc_stderr": 0.04123553189891431,
"acc_norm": 0.7766990291262136,
"acc_norm_stderr": 0.04123553189891431
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8760683760683761,
"acc_stderr": 0.02158649400128137,
"acc_norm": 0.8760683760683761,
"acc_norm_stderr": 0.02158649400128137
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.67,
"acc_stderr": 0.047258156262526094,
"acc_norm": 0.67,
"acc_norm_stderr": 0.047258156262526094
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8301404853128991,
"acc_stderr": 0.013428186370608311,
"acc_norm": 0.8301404853128991,
"acc_norm_stderr": 0.013428186370608311
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7283236994219653,
"acc_stderr": 0.02394851290546835,
"acc_norm": 0.7283236994219653,
"acc_norm_stderr": 0.02394851290546835
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.43798882681564244,
"acc_stderr": 0.016593394227564843,
"acc_norm": 0.43798882681564244,
"acc_norm_stderr": 0.016593394227564843
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7058823529411765,
"acc_stderr": 0.026090162504279056,
"acc_norm": 0.7058823529411765,
"acc_norm_stderr": 0.026090162504279056
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.7138263665594855,
"acc_stderr": 0.025670259242188936,
"acc_norm": 0.7138263665594855,
"acc_norm_stderr": 0.025670259242188936
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7469135802469136,
"acc_stderr": 0.024191808600712995,
"acc_norm": 0.7469135802469136,
"acc_norm_stderr": 0.024191808600712995
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.4858156028368794,
"acc_stderr": 0.02981549448368206,
"acc_norm": 0.4858156028368794,
"acc_norm_stderr": 0.02981549448368206
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.47196870925684486,
"acc_stderr": 0.012750151802922436,
"acc_norm": 0.47196870925684486,
"acc_norm_stderr": 0.012750151802922436
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6801470588235294,
"acc_stderr": 0.028332959514031208,
"acc_norm": 0.6801470588235294,
"acc_norm_stderr": 0.028332959514031208
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6666666666666666,
"acc_stderr": 0.019070985589687495,
"acc_norm": 0.6666666666666666,
"acc_norm_stderr": 0.019070985589687495
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6545454545454545,
"acc_stderr": 0.04554619617541054,
"acc_norm": 0.6545454545454545,
"acc_norm_stderr": 0.04554619617541054
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7387755102040816,
"acc_stderr": 0.02812342933514278,
"acc_norm": 0.7387755102040816,
"acc_norm_stderr": 0.02812342933514278
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8258706467661692,
"acc_stderr": 0.026814951200421603,
"acc_norm": 0.8258706467661692,
"acc_norm_stderr": 0.026814951200421603
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.88,
"acc_stderr": 0.03265986323710906,
"acc_norm": 0.88,
"acc_norm_stderr": 0.03265986323710906
},
"harness|hendrycksTest-virology|5": {
"acc": 0.572289156626506,
"acc_stderr": 0.038515976837185335,
"acc_norm": 0.572289156626506,
"acc_norm_stderr": 0.038515976837185335
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8245614035087719,
"acc_stderr": 0.02917088550072767,
"acc_norm": 0.8245614035087719,
"acc_norm_stderr": 0.02917088550072767
},
"harness|truthfulqa:mc|0": {
"mc1": 0.572827417380661,
"mc1_stderr": 0.017316834410963915,
"mc2": 0.6992620055732494,
"mc2_stderr": 0.015067252053266866
},
"harness|winogrande|5": {
"acc": 0.823993685872139,
"acc_stderr": 0.010703090882320705
},
"harness|gsm8k|5": {
"acc": 0.7028051554207733,
"acc_stderr": 0.012588685966624184
}
}
```
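As a rough illustration of how these per-task numbers roll up into a single leaderboard score, the sketch below computes the usual six-benchmark average (ARC, HellaSwag, MMLU, TruthfulQA, Winogrande, GSM8K). It assumes `results` holds the dictionary printed above; this is a reconstruction of the standard aggregation, not code shipped with this dataset:

```python
# `results` is assumed to be the per-task dictionary shown above.
arc = results["harness|arc:challenge|25"]["acc_norm"]
hellaswag = results["harness|hellaswag|10"]["acc_norm"]

# MMLU is the mean accuracy over the 57 hendrycksTest subtasks.
mmlu_tasks = [k for k in results if k.startswith("harness|hendrycksTest-")]
mmlu = sum(results[k]["acc"] for k in mmlu_tasks) / len(mmlu_tasks)

truthfulqa = results["harness|truthfulqa:mc|0"]["mc2"]
winogrande = results["harness|winogrande|5"]["acc"]
gsm8k = results["harness|gsm8k|5"]["acc"]

average = (arc + hellaswag + mmlu + truthfulqa + winogrande + gsm8k) / 6
print(f"Leaderboard-style average: {average:.4f}")
```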
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | open-llm-leaderboard/details_mlabonne__NeuralBeagle14-7B | ["region:us"] | 2024-01-16T01:10:47+00:00 | 2024-01-16T01:11:09+00:00
#region-us
|
# Dataset Card for Evaluation run of mlabonne/NeuralBeagle14-7B
Dataset automatically created during the evaluation run of model mlabonne/NeuralBeagle14-7B on the Open LLM Leaderboard.
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
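The loader snippet was stripped from this copy of the card; a minimal sketch following the leaderboard's usual pattern (the repo id is inferred from the card title, and the config name "harness_winogrande_5" is just one example task):

```python
from datasets import load_dataset

# Each configuration corresponds to one evaluated task.
data = load_dataset("open-llm-leaderboard/details_mlabonne__NeuralBeagle14-7B",
                    "harness_winogrande_5",
                    split="train")
```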
## Latest results
These are the latest results from run 2024-01-16T01:08:26.815622 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
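The per-run results JSON is not reproduced here; as a sketch, the aggregated scores can instead be pulled from the "results" configuration, whose "latest" split (per this repo's metadata) always points to the newest run:

```python
from datasets import load_dataset

# "results" holds the aggregated scores; "latest" tracks the most recent run.
results = load_dataset("open-llm-leaderboard/details_mlabonne__NeuralBeagle14-7B",
                       "results",
                       split="latest")
```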
## Dataset Details
### Dataset Description
- Curated by:
- Funded by [optional]:
- Shared by [optional]:
- Language(s) (NLP):
- License:
### Dataset Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Out-of-Scope Use
## Dataset Structure
## Dataset Creation
### Curation Rationale
### Source Data
#### Data Collection and Processing
#### Who are the source data producers?
### Annotations [optional]
#### Annotation process
#### Who are the annotators?
#### Personal and Sensitive Information
## Bias, Risks, and Limitations
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Dataset Card Authors [optional]
## Dataset Card Contact
| [
"# Dataset Card for Evaluation run of mlabonne/NeuralBeagle14-7B\n\n\n\nDataset automatically created during the evaluation run of model mlabonne/NeuralBeagle14-7B on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2024-01-16T01:08:26.815622(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of mlabonne/NeuralBeagle14-7B\n\n\n\nDataset automatically created during the evaluation run of model mlabonne/NeuralBeagle14-7B on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2024-01-16T01:08:26.815622(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] | [
6,
185,
68,
4,
40,
29,
3,
4,
9,
6,
5,
7,
4,
7,
10,
9,
5,
9,
8,
10,
46,
8,
7,
10,
5
] | [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of mlabonne/NeuralBeagle14-7B\n\n\n\nDataset automatically created during the evaluation run of model mlabonne/NeuralBeagle14-7B on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2024-01-16T01:08:26.815622(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):## Dataset Details### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Out-of-Scope Use## Dataset Structure## Dataset Creation### Curation Rationale### Source Data#### Data Collection and Processing#### Who are the source data producers?### Annotations [optional]#### Annotation process#### Who are the annotators?#### Personal and Sensitive Information## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Dataset Card Authors [optional]## Dataset Card Contact"
] |
5cac8e37fcab235864c0608b746f76d953958982 |
# Can LLMs Follow Simple Rules?
[[code](https://github.com/normster/llm_rules)] [[demo](https://huggingface.co/spaces/normster/llm_rules)] [[website](https://eecs.berkeley.edu/~normanmu/llm_rules)] [[paper](https://arxiv.org/abs/2311.04235)]
This repo contains the test cases for RuLES: Rule-following Language Evaluation Scenarios, a benchmark for evaluating rule-following in language models. Please see our github repo for usage instructions and our paper for more information about the benchmark.
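Since the test cases live in this Hugging Face repo, they can likely be pulled with the standard `datasets` API; a minimal sketch only (subset and split names are not documented here and may differ, so inspect the repo before relying on them):

```python
from datasets import load_dataset

# Illustrative only: check the repo for the actual configurations/splits.
rules = load_dataset("normster/RuLES")
print(rules)
```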
## Abstract
As Large Language Models (LLMs) are deployed with increasing real-world responsibilities, it is important to be able to specify and constrain the behavior of these systems in a reliable manner. Model developers may wish to set explicit rules for the model, such as “do not generate abusive content”, but these may be circumvented by jailbreaking techniques. Evaluating how well LLMs follow developer-provided rules in the face of adversarial inputs typically requires manual review, which slows down monitoring and methods development. To address this issue, we propose Rule-following Language Evaluation Scenarios (RuLES), a programmatic framework for measuring rule-following ability in LLMs. RuLES consists of 15 simple text scenarios in which the model is instructed to obey a set of rules in natural language while interacting with the human user. Each scenario has a concise evaluation program to determine whether the model has broken any rules in a conversation. Through manual exploration of model behavior in our scenarios, we identify 6 categories of attack strategies and collect two suites of test cases: one consisting of unique conversations from manual testing and one that systematically implements strategies from the 6 categories. Across various popular proprietary and open models such as GPT-4 and Llama 2, we find that all models are susceptible to a wide variety of adversarial hand-crafted user inputs, though GPT-4 is the best-performing model. Additionally, we evaluate open models under gradient-based attacks and find significant vulnerabilities. We propose RuLES as a challenging new setting for research into exploring and defending against both manual and automatic attacks on LLMs.

## Citation
```
@article{mu2023rules,
title={Can LLMs Follow Simple Rules?},
author={Norman Mu and Sarah Chen and
Zifan Wang and Sizhe Chen and David Karamardian and
Lulwa Aljeraisy and Basel Alomair and
Dan Hendrycks and David Wagner},
journal={arXiv},
year={2023}
}
``` | normster/RuLES | [
"size_categories:n<1K",
"language:en",
"license:apache-2.0",
"safety",
"security",
"arxiv:2311.04235",
"region:us"
] | 2024-01-16T01:12:55+00:00 | {"language": ["en"], "license": "apache-2.0", "size_categories": ["n<1K"], "tags": ["safety", "security"]} | 2024-01-16T01:24:17+00:00 | [
"2311.04235"
] | [
"en"
] | TAGS
#size_categories-n<1K #language-English #license-apache-2.0 #safety #security #arxiv-2311.04235 #region-us
|
# Can LLMs Follow Simple Rules?
[code] [demo] [website] [paper]
This repo contains the test cases for RuLES: Rule-following Language Evaluation Scenarios, a benchmark for evaluating rule-following in language models. Please see our github repo for usage instructions and our paper for more information about the benchmark.
## Abstract
As Large Language Models (LLMs) are deployed with increasing real-world responsibilities, it is important to be able to specify and constrain the behavior of these systems in a reliable manner. Model developers may wish to set explicit rules for the model, such as “do not generate abusive content”, but these may be circumvented by jailbreaking techniques. Evaluating how well LLMs follow developer-provided rules in the face of adversarial inputs typically requires manual review, which slows down monitoring and methods development. To address this issue, we propose Rule-following Language Evaluation Scenarios (RuLES), a programmatic framework for measuring rule-following ability in LLMs. RuLES consists of 15 simple text scenarios in which the model is instructed to obey a set of rules in natural language while interacting with the human user. Each scenario has a concise evaluation program to determine whether the model has broken any rules in a conversation. Through manual exploration of model behavior in our scenarios, we identify 6 categories of attack strategies and collect two suites of test cases: one consisting of unique conversations from manual testing and one that systematically implements strategies from the 6 categories. Across various popular proprietary and open models such as GPT-4 and Llama 2, we find that all models are susceptible to a wide variety of adversarial hand-crafted user inputs, though GPT-4 is the best-performing model. Additionally, we evaluate open models under gradient-based attacks and find significant vulnerabilities. We propose RuLES as a challenging new setting for research into exploring and defending against both manual and automatic attacks on LLMs.
!Results summary figure
| [
"# Can LLMs Follow Simple Rules?\n\n[code] [demo] [website] [paper]\n\nThis repo contains the test cases for RuLES: Rule-following Language Evaluation Scenarios, a benchmark for evaluating rule-following in language models. Please see our github repo for usage instructions and our paper for more information about the benchmark.",
"## Abstract\n\nAs Large Language Models (LLMs) are deployed with increasing real-world responsibilities, it is important to be able to specify and constrain the behavior of these systems in a reliable manner. Model developers may wish to set explicit rules for the model, such as “do not generate abusive content”, but these may be circumvented by jailbreaking techniques. Evaluating how well LLMs follow developer-provided rules in the face of adversarial inputs typically requires manual review, which slows down monitoring and methods development. To address this issue, we propose Rule-following Language Evaluation Scenarios (RuLES), a programmatic framework for measuring rule-following ability in LLMs. RuLES consists of 15 simple text scenarios in which the model is instructed to obey a set of rules in natural language while interacting with the human user. Each scenario has a concise evaluation program to determine whether the model has broken any rules in a conversation. Through manual exploration of model behavior in our scenarios, we identify 6 categories of attack strategies and collect two suites of test cases: one consisting of unique conversations from manual testing and one that systematically implements strategies from the 6 categories. Across various popular proprietary and open models such as GPT-4 and Llama 2, we find that all models are susceptible to a wide variety of adversarial hand-crafted user inputs, though GPT-4 is the best-performing model. Additionally, we evaluate open models under gradient-based attacks and find significant vulnerabilities. We propose RuLES as a challenging new setting for research into exploring and defending against both manual and automatic attacks on LLMs.\n\n!Results summary figure"
] | [
"TAGS\n#size_categories-n<1K #language-English #license-apache-2.0 #safety #security #arxiv-2311.04235 #region-us \n",
"# Can LLMs Follow Simple Rules?\n\n[code] [demo] [website] [paper]\n\nThis repo contains the test cases for RuLES: Rule-following Language Evaluation Scenarios, a benchmark for evaluating rule-following in language models. Please see our github repo for usage instructions and our paper for more information about the benchmark.",
"## Abstract\n\nAs Large Language Models (LLMs) are deployed with increasing real-world responsibilities, it is important to be able to specify and constrain the behavior of these systems in a reliable manner. Model developers may wish to set explicit rules for the model, such as “do not generate abusive content”, but these may be circumvented by jailbreaking techniques. Evaluating how well LLMs follow developer-provided rules in the face of adversarial inputs typically requires manual review, which slows down monitoring and methods development. To address this issue, we propose Rule-following Language Evaluation Scenarios (RuLES), a programmatic framework for measuring rule-following ability in LLMs. RuLES consists of 15 simple text scenarios in which the model is instructed to obey a set of rules in natural language while interacting with the human user. Each scenario has a concise evaluation program to determine whether the model has broken any rules in a conversation. Through manual exploration of model behavior in our scenarios, we identify 6 categories of attack strategies and collect two suites of test cases: one consisting of unique conversations from manual testing and one that systematically implements strategies from the 6 categories. Across various popular proprietary and open models such as GPT-4 and Llama 2, we find that all models are susceptible to a wide variety of adversarial hand-crafted user inputs, though GPT-4 is the best-performing model. Additionally, we evaluate open models under gradient-based attacks and find significant vulnerabilities. We propose RuLES as a challenging new setting for research into exploring and defending against both manual and automatic attacks on LLMs.\n\n!Results summary figure"
] | [
42,
75,
382
] | [
"passage: TAGS\n#size_categories-n<1K #language-English #license-apache-2.0 #safety #security #arxiv-2311.04235 #region-us \n# Can LLMs Follow Simple Rules?\n\n[code] [demo] [website] [paper]\n\nThis repo contains the test cases for RuLES: Rule-following Language Evaluation Scenarios, a benchmark for evaluating rule-following in language models. Please see our github repo for usage instructions and our paper for more information about the benchmark.## Abstract\n\nAs Large Language Models (LLMs) are deployed with increasing real-world responsibilities, it is important to be able to specify and constrain the behavior of these systems in a reliable manner. Model developers may wish to set explicit rules for the model, such as “do not generate abusive content”, but these may be circumvented by jailbreaking techniques. Evaluating how well LLMs follow developer-provided rules in the face of adversarial inputs typically requires manual review, which slows down monitoring and methods development. To address this issue, we propose Rule-following Language Evaluation Scenarios (RuLES), a programmatic framework for measuring rule-following ability in LLMs. RuLES consists of 15 simple text scenarios in which the model is instructed to obey a set of rules in natural language while interacting with the human user. Each scenario has a concise evaluation program to determine whether the model has broken any rules in a conversation. Through manual exploration of model behavior in our scenarios, we identify 6 categories of attack strategies and collect two suites of test cases: one consisting of unique conversations from manual testing and one that systematically implements strategies from the 6 categories. Across various popular proprietary and open models such as GPT-4 and Llama 2, we find that all models are susceptible to a wide variety of adversarial hand-crafted user inputs, though GPT-4 is the best-performing model. Additionally, we evaluate open models under gradient-based attacks and find significant vulnerabilities. We propose RuLES as a challenging new setting for research into exploring and defending against both manual and automatic attacks on LLMs.\n\n!Results summary figure"
] |
ce9afe4015caafa3f7341964a72bcf4bb630f0d5 |
# Dataset Card for Evaluation run of mlabonne/NeuralDaredevil-7B
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [mlabonne/NeuralDaredevil-7B](https://huggingface.co/mlabonne/NeuralDaredevil-7B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_mlabonne__NeuralDaredevil-7B",
"harness_winogrande_5",
split="train")
```
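Similarly, the aggregated scores for this run can be loaded from the "results" configuration; per the repo metadata, the "latest" split always points to the newest evaluation:

```python
from datasets import load_dataset

# "results" stores the aggregated scores of the run.
results = load_dataset("open-llm-leaderboard/details_mlabonne__NeuralDaredevil-7B",
                       "results",
                       split="latest")
```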
## Latest results
These are the [latest results from run 2024-01-16T01:21:38.357937](https://huggingface.co/datasets/open-llm-leaderboard/details_mlabonne__NeuralDaredevil-7B/blob/main/results_2024-01-16T01-21-38.357937.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.6564539739507499,
"acc_stderr": 0.032046532668970576,
"acc_norm": 0.6558352422789521,
"acc_norm_stderr": 0.03271651117623881,
"mc1": 0.5152998776009792,
"mc1_stderr": 0.0174953044731879,
"mc2": 0.6685490621837052,
"mc2_stderr": 0.014954458772938018
},
"harness|arc:challenge|25": {
"acc": 0.6749146757679181,
"acc_stderr": 0.013688147309729124,
"acc_norm": 0.6988054607508533,
"acc_norm_stderr": 0.013406741767847638
},
"harness|hellaswag|10": {
"acc": 0.6970722963553077,
"acc_stderr": 0.004585850835623566,
"acc_norm": 0.8762198765186218,
"acc_norm_stderr": 0.003286574812451194
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.35,
"acc_stderr": 0.0479372485441102,
"acc_norm": 0.35,
"acc_norm_stderr": 0.0479372485441102
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6518518518518519,
"acc_stderr": 0.041153246103369526,
"acc_norm": 0.6518518518518519,
"acc_norm_stderr": 0.041153246103369526
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.7039473684210527,
"acc_stderr": 0.03715062154998904,
"acc_norm": 0.7039473684210527,
"acc_norm_stderr": 0.03715062154998904
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.63,
"acc_stderr": 0.04852365870939099,
"acc_norm": 0.63,
"acc_norm_stderr": 0.04852365870939099
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.7433962264150943,
"acc_stderr": 0.026880647889051975,
"acc_norm": 0.7433962264150943,
"acc_norm_stderr": 0.026880647889051975
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7847222222222222,
"acc_stderr": 0.03437079344106135,
"acc_norm": 0.7847222222222222,
"acc_norm_stderr": 0.03437079344106135
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.49,
"acc_stderr": 0.05024183937956912,
"acc_norm": 0.49,
"acc_norm_stderr": 0.05024183937956912
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.54,
"acc_stderr": 0.05009082659620333,
"acc_norm": 0.54,
"acc_norm_stderr": 0.05009082659620333
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.32,
"acc_stderr": 0.04688261722621504,
"acc_norm": 0.32,
"acc_norm_stderr": 0.04688261722621504
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6763005780346821,
"acc_stderr": 0.035676037996391706,
"acc_norm": 0.6763005780346821,
"acc_norm_stderr": 0.035676037996391706
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.4411764705882353,
"acc_stderr": 0.049406356306056595,
"acc_norm": 0.4411764705882353,
"acc_norm_stderr": 0.049406356306056595
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.77,
"acc_stderr": 0.04229525846816508,
"acc_norm": 0.77,
"acc_norm_stderr": 0.04229525846816508
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5787234042553191,
"acc_stderr": 0.03227834510146267,
"acc_norm": 0.5787234042553191,
"acc_norm_stderr": 0.03227834510146267
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.4824561403508772,
"acc_stderr": 0.04700708033551038,
"acc_norm": 0.4824561403508772,
"acc_norm_stderr": 0.04700708033551038
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5793103448275863,
"acc_stderr": 0.0411391498118926,
"acc_norm": 0.5793103448275863,
"acc_norm_stderr": 0.0411391498118926
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.42857142857142855,
"acc_stderr": 0.02548718714785938,
"acc_norm": 0.42857142857142855,
"acc_norm_stderr": 0.02548718714785938
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.46825396825396826,
"acc_stderr": 0.04463112720677172,
"acc_norm": 0.46825396825396826,
"acc_norm_stderr": 0.04463112720677172
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.35,
"acc_stderr": 0.04793724854411018,
"acc_norm": 0.35,
"acc_norm_stderr": 0.04793724854411018
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7709677419354839,
"acc_stderr": 0.023904914311782655,
"acc_norm": 0.7709677419354839,
"acc_norm_stderr": 0.023904914311782655
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.49261083743842365,
"acc_stderr": 0.03517603540361008,
"acc_norm": 0.49261083743842365,
"acc_norm_stderr": 0.03517603540361008
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.71,
"acc_stderr": 0.045604802157206845,
"acc_norm": 0.71,
"acc_norm_stderr": 0.045604802157206845
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7696969696969697,
"acc_stderr": 0.0328766675860349,
"acc_norm": 0.7696969696969697,
"acc_norm_stderr": 0.0328766675860349
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7929292929292929,
"acc_stderr": 0.028869778460267042,
"acc_norm": 0.7929292929292929,
"acc_norm_stderr": 0.028869778460267042
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8860103626943006,
"acc_stderr": 0.022935144053919436,
"acc_norm": 0.8860103626943006,
"acc_norm_stderr": 0.022935144053919436
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6717948717948717,
"acc_stderr": 0.023807633198657262,
"acc_norm": 0.6717948717948717,
"acc_norm_stderr": 0.023807633198657262
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.34814814814814815,
"acc_stderr": 0.029045600290616255,
"acc_norm": 0.34814814814814815,
"acc_norm_stderr": 0.029045600290616255
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6638655462184874,
"acc_stderr": 0.03068473711513536,
"acc_norm": 0.6638655462184874,
"acc_norm_stderr": 0.03068473711513536
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.3443708609271523,
"acc_stderr": 0.038796870240733264,
"acc_norm": 0.3443708609271523,
"acc_norm_stderr": 0.038796870240733264
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8550458715596331,
"acc_stderr": 0.015094215699700486,
"acc_norm": 0.8550458715596331,
"acc_norm_stderr": 0.015094215699700486
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5462962962962963,
"acc_stderr": 0.033953227263757976,
"acc_norm": 0.5462962962962963,
"acc_norm_stderr": 0.033953227263757976
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.8529411764705882,
"acc_stderr": 0.024857478080250454,
"acc_norm": 0.8529411764705882,
"acc_norm_stderr": 0.024857478080250454
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.8016877637130801,
"acc_stderr": 0.025955020841621115,
"acc_norm": 0.8016877637130801,
"acc_norm_stderr": 0.025955020841621115
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6905829596412556,
"acc_stderr": 0.031024411740572213,
"acc_norm": 0.6905829596412556,
"acc_norm_stderr": 0.031024411740572213
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7786259541984732,
"acc_stderr": 0.036412970813137296,
"acc_norm": 0.7786259541984732,
"acc_norm_stderr": 0.036412970813137296
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.7851239669421488,
"acc_stderr": 0.037494924487096966,
"acc_norm": 0.7851239669421488,
"acc_norm_stderr": 0.037494924487096966
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7870370370370371,
"acc_stderr": 0.03957835471980979,
"acc_norm": 0.7870370370370371,
"acc_norm_stderr": 0.03957835471980979
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7668711656441718,
"acc_stderr": 0.0332201579577674,
"acc_norm": 0.7668711656441718,
"acc_norm_stderr": 0.0332201579577674
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.4375,
"acc_stderr": 0.04708567521880525,
"acc_norm": 0.4375,
"acc_norm_stderr": 0.04708567521880525
},
"harness|hendrycksTest-management|5": {
"acc": 0.7669902912621359,
"acc_stderr": 0.04185832598928315,
"acc_norm": 0.7669902912621359,
"acc_norm_stderr": 0.04185832598928315
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8803418803418803,
"acc_stderr": 0.021262719400406964,
"acc_norm": 0.8803418803418803,
"acc_norm_stderr": 0.021262719400406964
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.7,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.7,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8301404853128991,
"acc_stderr": 0.013428186370608303,
"acc_norm": 0.8301404853128991,
"acc_norm_stderr": 0.013428186370608303
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7398843930635838,
"acc_stderr": 0.023618678310069367,
"acc_norm": 0.7398843930635838,
"acc_norm_stderr": 0.023618678310069367
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.42905027932960893,
"acc_stderr": 0.016553287863116033,
"acc_norm": 0.42905027932960893,
"acc_norm_stderr": 0.016553287863116033
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7287581699346405,
"acc_stderr": 0.02545775669666788,
"acc_norm": 0.7287581699346405,
"acc_norm_stderr": 0.02545775669666788
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.7202572347266881,
"acc_stderr": 0.025494259350694912,
"acc_norm": 0.7202572347266881,
"acc_norm_stderr": 0.025494259350694912
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7469135802469136,
"acc_stderr": 0.024191808600712992,
"acc_norm": 0.7469135802469136,
"acc_norm_stderr": 0.024191808600712992
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.5070921985815603,
"acc_stderr": 0.02982449855912901,
"acc_norm": 0.5070921985815603,
"acc_norm_stderr": 0.02982449855912901
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.46284224250325945,
"acc_stderr": 0.012734923579532069,
"acc_norm": 0.46284224250325945,
"acc_norm_stderr": 0.012734923579532069
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6654411764705882,
"acc_stderr": 0.028661996202335303,
"acc_norm": 0.6654411764705882,
"acc_norm_stderr": 0.028661996202335303
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6748366013071896,
"acc_stderr": 0.01895088677080631,
"acc_norm": 0.6748366013071896,
"acc_norm_stderr": 0.01895088677080631
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6909090909090909,
"acc_stderr": 0.044262946482000985,
"acc_norm": 0.6909090909090909,
"acc_norm_stderr": 0.044262946482000985
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.746938775510204,
"acc_stderr": 0.027833023871399677,
"acc_norm": 0.746938775510204,
"acc_norm_stderr": 0.027833023871399677
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8258706467661692,
"acc_stderr": 0.026814951200421603,
"acc_norm": 0.8258706467661692,
"acc_norm_stderr": 0.026814951200421603
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.86,
"acc_stderr": 0.0348735088019777,
"acc_norm": 0.86,
"acc_norm_stderr": 0.0348735088019777
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5542168674698795,
"acc_stderr": 0.03869543323472101,
"acc_norm": 0.5542168674698795,
"acc_norm_stderr": 0.03869543323472101
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.847953216374269,
"acc_stderr": 0.027539122889061456,
"acc_norm": 0.847953216374269,
"acc_norm_stderr": 0.027539122889061456
},
"harness|truthfulqa:mc|0": {
"mc1": 0.5152998776009792,
"mc1_stderr": 0.0174953044731879,
"mc2": 0.6685490621837052,
"mc2_stderr": 0.014954458772938018
},
"harness|winogrande|5": {
"acc": 0.8208366219415943,
"acc_stderr": 0.010777949156047986
},
"harness|gsm8k|5": {
"acc": 0.731614859742229,
"acc_stderr": 0.012205702688013673
}
}
```
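As a worked example, the MMLU (hendrycksTest) subtask accuracies can be averaged straight out of this dictionary; a minimal sketch, assuming the dict shown above is bound to a variable named `results`:

```python
# `results` is assumed to be the dictionary printed above.
mmlu_acc = [v["acc"] for k, v in results.items()
            if k.startswith("harness|hendrycksTest")]
print(f"{len(mmlu_acc)} MMLU subtasks, mean acc = {sum(mmlu_acc) / len(mmlu_acc):.4f}")
```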
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | open-llm-leaderboard/details_mlabonne__NeuralDaredevil-7B | [
"region:us"
] | 2024-01-16T01:23:58+00:00 | {"pretty_name": "Evaluation run of mlabonne/NeuralDaredevil-7B", "dataset_summary": "Dataset automatically created during the evaluation run of model [mlabonne/NeuralDaredevil-7B](https://huggingface.co/mlabonne/NeuralDaredevil-7B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_mlabonne__NeuralDaredevil-7B\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2024-01-16T01:21:38.357937](https://huggingface.co/datasets/open-llm-leaderboard/details_mlabonne__NeuralDaredevil-7B/blob/main/results_2024-01-16T01-21-38.357937.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6564539739507499,\n \"acc_stderr\": 0.032046532668970576,\n \"acc_norm\": 0.6558352422789521,\n \"acc_norm_stderr\": 0.03271651117623881,\n \"mc1\": 0.5152998776009792,\n \"mc1_stderr\": 0.0174953044731879,\n \"mc2\": 0.6685490621837052,\n \"mc2_stderr\": 0.014954458772938018\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.6749146757679181,\n \"acc_stderr\": 0.013688147309729124,\n \"acc_norm\": 0.6988054607508533,\n \"acc_norm_stderr\": 0.013406741767847638\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6970722963553077,\n \"acc_stderr\": 0.004585850835623566,\n \"acc_norm\": 0.8762198765186218,\n \"acc_norm_stderr\": 0.003286574812451194\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.35,\n \"acc_stderr\": 0.0479372485441102,\n \"acc_norm\": 0.35,\n \"acc_norm_stderr\": 0.0479372485441102\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6518518518518519,\n \"acc_stderr\": 0.041153246103369526,\n \"acc_norm\": 0.6518518518518519,\n \"acc_norm_stderr\": 0.041153246103369526\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.7039473684210527,\n \"acc_stderr\": 0.03715062154998904,\n \"acc_norm\": 0.7039473684210527,\n \"acc_norm_stderr\": 0.03715062154998904\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.63,\n \"acc_stderr\": 0.04852365870939099,\n \"acc_norm\": 0.63,\n \"acc_norm_stderr\": 0.04852365870939099\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.7433962264150943,\n \"acc_stderr\": 0.026880647889051975,\n \"acc_norm\": 0.7433962264150943,\n \"acc_norm_stderr\": 0.026880647889051975\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7847222222222222,\n \"acc_stderr\": 0.03437079344106135,\n \"acc_norm\": 0.7847222222222222,\n \"acc_norm_stderr\": 0.03437079344106135\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.49,\n \"acc_stderr\": 
0.05024183937956912,\n \"acc_norm\": 0.49,\n \"acc_norm_stderr\": 0.05024183937956912\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.54,\n \"acc_stderr\": 0.05009082659620333,\n \"acc_norm\": 0.54,\n \"acc_norm_stderr\": 0.05009082659620333\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.32,\n \"acc_stderr\": 0.04688261722621504,\n \"acc_norm\": 0.32,\n \"acc_norm_stderr\": 0.04688261722621504\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6763005780346821,\n \"acc_stderr\": 0.035676037996391706,\n \"acc_norm\": 0.6763005780346821,\n \"acc_norm_stderr\": 0.035676037996391706\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.4411764705882353,\n \"acc_stderr\": 0.049406356306056595,\n \"acc_norm\": 0.4411764705882353,\n \"acc_norm_stderr\": 0.049406356306056595\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.77,\n \"acc_stderr\": 0.04229525846816508,\n \"acc_norm\": 0.77,\n \"acc_norm_stderr\": 0.04229525846816508\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.5787234042553191,\n \"acc_stderr\": 0.03227834510146267,\n \"acc_norm\": 0.5787234042553191,\n \"acc_norm_stderr\": 0.03227834510146267\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.4824561403508772,\n \"acc_stderr\": 0.04700708033551038,\n \"acc_norm\": 0.4824561403508772,\n \"acc_norm_stderr\": 0.04700708033551038\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.5793103448275863,\n \"acc_stderr\": 0.0411391498118926,\n \"acc_norm\": 0.5793103448275863,\n \"acc_norm_stderr\": 0.0411391498118926\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.42857142857142855,\n \"acc_stderr\": 0.02548718714785938,\n \"acc_norm\": 0.42857142857142855,\n \"acc_norm_stderr\": 0.02548718714785938\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.46825396825396826,\n \"acc_stderr\": 0.04463112720677172,\n \"acc_norm\": 0.46825396825396826,\n \"acc_norm_stderr\": 0.04463112720677172\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.35,\n \"acc_stderr\": 0.04793724854411018,\n \"acc_norm\": 0.35,\n \"acc_norm_stderr\": 0.04793724854411018\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7709677419354839,\n \"acc_stderr\": 0.023904914311782655,\n \"acc_norm\": 0.7709677419354839,\n \"acc_norm_stderr\": 0.023904914311782655\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.49261083743842365,\n \"acc_stderr\": 0.03517603540361008,\n \"acc_norm\": 0.49261083743842365,\n \"acc_norm_stderr\": 0.03517603540361008\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.71,\n \"acc_stderr\": 0.045604802157206845,\n \"acc_norm\": 0.71,\n \"acc_norm_stderr\": 0.045604802157206845\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.7696969696969697,\n \"acc_stderr\": 0.0328766675860349,\n \"acc_norm\": 0.7696969696969697,\n \"acc_norm_stderr\": 0.0328766675860349\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.7929292929292929,\n \"acc_stderr\": 0.028869778460267042,\n \"acc_norm\": 0.7929292929292929,\n \"acc_norm_stderr\": 0.028869778460267042\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.8860103626943006,\n \"acc_stderr\": 0.022935144053919436,\n \"acc_norm\": 0.8860103626943006,\n \"acc_norm_stderr\": 0.022935144053919436\n },\n 
\"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.6717948717948717,\n \"acc_stderr\": 0.023807633198657262,\n \"acc_norm\": 0.6717948717948717,\n \"acc_norm_stderr\": 0.023807633198657262\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.34814814814814815,\n \"acc_stderr\": 0.029045600290616255,\n \"acc_norm\": 0.34814814814814815,\n \"acc_norm_stderr\": 0.029045600290616255\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.6638655462184874,\n \"acc_stderr\": 0.03068473711513536,\n \"acc_norm\": 0.6638655462184874,\n \"acc_norm_stderr\": 0.03068473711513536\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.3443708609271523,\n \"acc_stderr\": 0.038796870240733264,\n \"acc_norm\": 0.3443708609271523,\n \"acc_norm_stderr\": 0.038796870240733264\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.8550458715596331,\n \"acc_stderr\": 0.015094215699700486,\n \"acc_norm\": 0.8550458715596331,\n \"acc_norm_stderr\": 0.015094215699700486\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.5462962962962963,\n \"acc_stderr\": 0.033953227263757976,\n \"acc_norm\": 0.5462962962962963,\n \"acc_norm_stderr\": 0.033953227263757976\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.8529411764705882,\n \"acc_stderr\": 0.024857478080250454,\n \"acc_norm\": 0.8529411764705882,\n \"acc_norm_stderr\": 0.024857478080250454\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.8016877637130801,\n \"acc_stderr\": 0.025955020841621115,\n \"acc_norm\": 0.8016877637130801,\n \"acc_norm_stderr\": 0.025955020841621115\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6905829596412556,\n \"acc_stderr\": 0.031024411740572213,\n \"acc_norm\": 0.6905829596412556,\n \"acc_norm_stderr\": 0.031024411740572213\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.7786259541984732,\n \"acc_stderr\": 0.036412970813137296,\n \"acc_norm\": 0.7786259541984732,\n \"acc_norm_stderr\": 0.036412970813137296\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.7851239669421488,\n \"acc_stderr\": 0.037494924487096966,\n \"acc_norm\": 0.7851239669421488,\n \"acc_norm_stderr\": 0.037494924487096966\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7870370370370371,\n \"acc_stderr\": 0.03957835471980979,\n \"acc_norm\": 0.7870370370370371,\n \"acc_norm_stderr\": 0.03957835471980979\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.7668711656441718,\n \"acc_stderr\": 0.0332201579577674,\n \"acc_norm\": 0.7668711656441718,\n \"acc_norm_stderr\": 0.0332201579577674\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.4375,\n \"acc_stderr\": 0.04708567521880525,\n \"acc_norm\": 0.4375,\n \"acc_norm_stderr\": 0.04708567521880525\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.7669902912621359,\n \"acc_stderr\": 0.04185832598928315,\n \"acc_norm\": 0.7669902912621359,\n \"acc_norm_stderr\": 0.04185832598928315\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8803418803418803,\n \"acc_stderr\": 0.021262719400406964,\n \"acc_norm\": 0.8803418803418803,\n \"acc_norm_stderr\": 0.021262719400406964\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.7,\n \"acc_stderr\": 0.046056618647183814,\n \"acc_norm\": 0.7,\n \"acc_norm_stderr\": 0.046056618647183814\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8301404853128991,\n 
\"acc_stderr\": 0.013428186370608303,\n \"acc_norm\": 0.8301404853128991,\n \"acc_norm_stderr\": 0.013428186370608303\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.7398843930635838,\n \"acc_stderr\": 0.023618678310069367,\n \"acc_norm\": 0.7398843930635838,\n \"acc_norm_stderr\": 0.023618678310069367\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.42905027932960893,\n \"acc_stderr\": 0.016553287863116033,\n \"acc_norm\": 0.42905027932960893,\n \"acc_norm_stderr\": 0.016553287863116033\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.7287581699346405,\n \"acc_stderr\": 0.02545775669666788,\n \"acc_norm\": 0.7287581699346405,\n \"acc_norm_stderr\": 0.02545775669666788\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7202572347266881,\n \"acc_stderr\": 0.025494259350694912,\n \"acc_norm\": 0.7202572347266881,\n \"acc_norm_stderr\": 0.025494259350694912\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.7469135802469136,\n \"acc_stderr\": 0.024191808600712992,\n \"acc_norm\": 0.7469135802469136,\n \"acc_norm_stderr\": 0.024191808600712992\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.5070921985815603,\n \"acc_stderr\": 0.02982449855912901,\n \"acc_norm\": 0.5070921985815603,\n \"acc_norm_stderr\": 0.02982449855912901\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.46284224250325945,\n \"acc_stderr\": 0.012734923579532069,\n \"acc_norm\": 0.46284224250325945,\n \"acc_norm_stderr\": 0.012734923579532069\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.6654411764705882,\n \"acc_stderr\": 0.028661996202335303,\n \"acc_norm\": 0.6654411764705882,\n \"acc_norm_stderr\": 0.028661996202335303\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.6748366013071896,\n \"acc_stderr\": 0.01895088677080631,\n \"acc_norm\": 0.6748366013071896,\n \"acc_norm_stderr\": 0.01895088677080631\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6909090909090909,\n \"acc_stderr\": 0.044262946482000985,\n \"acc_norm\": 0.6909090909090909,\n \"acc_norm_stderr\": 0.044262946482000985\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.746938775510204,\n \"acc_stderr\": 0.027833023871399677,\n \"acc_norm\": 0.746938775510204,\n \"acc_norm_stderr\": 0.027833023871399677\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8258706467661692,\n \"acc_stderr\": 0.026814951200421603,\n \"acc_norm\": 0.8258706467661692,\n \"acc_norm_stderr\": 0.026814951200421603\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.86,\n \"acc_stderr\": 0.0348735088019777,\n \"acc_norm\": 0.86,\n \"acc_norm_stderr\": 0.0348735088019777\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5542168674698795,\n \"acc_stderr\": 0.03869543323472101,\n \"acc_norm\": 0.5542168674698795,\n \"acc_norm_stderr\": 0.03869543323472101\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.847953216374269,\n \"acc_stderr\": 0.027539122889061456,\n \"acc_norm\": 0.847953216374269,\n \"acc_norm_stderr\": 0.027539122889061456\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.5152998776009792,\n \"mc1_stderr\": 0.0174953044731879,\n \"mc2\": 0.6685490621837052,\n \"mc2_stderr\": 0.014954458772938018\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.8208366219415943,\n \"acc_stderr\": 0.010777949156047986\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.731614859742229,\n \"acc_stderr\": 0.012205702688013673\n }\n}\n```", 
"repo_url": "https://huggingface.co/mlabonne/NeuralDaredevil-7B", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2024_01_16T01_21_38.357937", "path": ["**/details_harness|arc:challenge|25_2024-01-16T01-21-38.357937.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2024-01-16T01-21-38.357937.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2024_01_16T01_21_38.357937", "path": ["**/details_harness|gsm8k|5_2024-01-16T01-21-38.357937.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2024-01-16T01-21-38.357937.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2024_01_16T01_21_38.357937", "path": ["**/details_harness|hellaswag|10_2024-01-16T01-21-38.357937.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2024-01-16T01-21-38.357937.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2024_01_16T01_21_38.357937", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-16T01-21-38.357937.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-16T01-21-38.357937.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-16T01-21-38.357937.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-16T01-21-38.357937.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-16T01-21-38.357937.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-16T01-21-38.357937.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-16T01-21-38.357937.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-16T01-21-38.357937.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-01-16T01-21-38.357937.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-16T01-21-38.357937.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-16T01-21-38.357937.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-16T01-21-38.357937.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-16T01-21-38.357937.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-16T01-21-38.357937.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-16T01-21-38.357937.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-16T01-21-38.357937.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-16T01-21-38.357937.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-16T01-21-38.357937.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-16T01-21-38.357937.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-16T01-21-38.357937.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-16T01-21-38.357937.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-16T01-21-38.357937.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-16T01-21-38.357937.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-16T01-21-38.357937.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-16T01-21-38.357937.parquet", 
"**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-16T01-21-38.357937.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-16T01-21-38.357937.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-16T01-21-38.357937.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-16T01-21-38.357937.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-16T01-21-38.357937.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-16T01-21-38.357937.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-16T01-21-38.357937.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-16T01-21-38.357937.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-16T01-21-38.357937.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-16T01-21-38.357937.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-16T01-21-38.357937.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-16T01-21-38.357937.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-16T01-21-38.357937.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-16T01-21-38.357937.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-16T01-21-38.357937.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-16T01-21-38.357937.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-16T01-21-38.357937.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-16T01-21-38.357937.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-16T01-21-38.357937.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-16T01-21-38.357937.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-16T01-21-38.357937.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-16T01-21-38.357937.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-16T01-21-38.357937.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-16T01-21-38.357937.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-16T01-21-38.357937.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-16T01-21-38.357937.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-16T01-21-38.357937.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-16T01-21-38.357937.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-16T01-21-38.357937.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-16T01-21-38.357937.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-16T01-21-38.357937.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-16T01-21-38.357937.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-16T01-21-38.357937.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-16T01-21-38.357937.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-16T01-21-38.357937.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-16T01-21-38.357937.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-16T01-21-38.357937.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-16T01-21-38.357937.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-16T01-21-38.357937.parquet", 
"**/details_harness|hendrycksTest-college_computer_science|5_2024-01-16T01-21-38.357937.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-01-16T01-21-38.357937.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-16T01-21-38.357937.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-16T01-21-38.357937.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-16T01-21-38.357937.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-16T01-21-38.357937.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-16T01-21-38.357937.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-16T01-21-38.357937.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-16T01-21-38.357937.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-16T01-21-38.357937.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-16T01-21-38.357937.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-16T01-21-38.357937.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-16T01-21-38.357937.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-16T01-21-38.357937.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-16T01-21-38.357937.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-16T01-21-38.357937.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-16T01-21-38.357937.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-16T01-21-38.357937.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-16T01-21-38.357937.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-16T01-21-38.357937.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-16T01-21-38.357937.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-16T01-21-38.357937.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-16T01-21-38.357937.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-16T01-21-38.357937.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-16T01-21-38.357937.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-16T01-21-38.357937.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-16T01-21-38.357937.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-16T01-21-38.357937.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-16T01-21-38.357937.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-16T01-21-38.357937.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-16T01-21-38.357937.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-16T01-21-38.357937.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-16T01-21-38.357937.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-16T01-21-38.357937.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-16T01-21-38.357937.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-16T01-21-38.357937.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-16T01-21-38.357937.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-16T01-21-38.357937.parquet", 
"**/details_harness|hendrycksTest-philosophy|5_2024-01-16T01-21-38.357937.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-16T01-21-38.357937.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-16T01-21-38.357937.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-16T01-21-38.357937.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-16T01-21-38.357937.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-16T01-21-38.357937.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-16T01-21-38.357937.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-16T01-21-38.357937.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-16T01-21-38.357937.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-16T01-21-38.357937.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-16T01-21-38.357937.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-16T01-21-38.357937.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2024_01_16T01_21_38.357937", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-16T01-21-38.357937.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-16T01-21-38.357937.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2024_01_16T01_21_38.357937", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-16T01-21-38.357937.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-16T01-21-38.357937.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2024_01_16T01_21_38.357937", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-16T01-21-38.357937.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-16T01-21-38.357937.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2024_01_16T01_21_38.357937", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-16T01-21-38.357937.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-16T01-21-38.357937.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2024_01_16T01_21_38.357937", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-16T01-21-38.357937.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-16T01-21-38.357937.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2024_01_16T01_21_38.357937", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-16T01-21-38.357937.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-16T01-21-38.357937.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2024_01_16T01_21_38.357937", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-16T01-21-38.357937.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-16T01-21-38.357937.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2024_01_16T01_21_38.357937", "path": 
["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-16T01-21-38.357937.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-16T01-21-38.357937.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2024_01_16T01_21_38.357937", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-16T01-21-38.357937.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-16T01-21-38.357937.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2024_01_16T01_21_38.357937", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-16T01-21-38.357937.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-16T01-21-38.357937.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2024_01_16T01_21_38.357937", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-16T01-21-38.357937.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-16T01-21-38.357937.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2024_01_16T01_21_38.357937", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-16T01-21-38.357937.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-16T01-21-38.357937.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2024_01_16T01_21_38.357937", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-16T01-21-38.357937.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-16T01-21-38.357937.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2024_01_16T01_21_38.357937", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-16T01-21-38.357937.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-16T01-21-38.357937.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2024_01_16T01_21_38.357937", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-16T01-21-38.357937.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-16T01-21-38.357937.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2024_01_16T01_21_38.357937", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-16T01-21-38.357937.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-16T01-21-38.357937.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2024_01_16T01_21_38.357937", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-16T01-21-38.357937.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-16T01-21-38.357937.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2024_01_16T01_21_38.357937", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-16T01-21-38.357937.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-global_facts|5_2024-01-16T01-21-38.357937.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2024_01_16T01_21_38.357937", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-16T01-21-38.357937.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-16T01-21-38.357937.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2024_01_16T01_21_38.357937", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-16T01-21-38.357937.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-16T01-21-38.357937.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2024_01_16T01_21_38.357937", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-16T01-21-38.357937.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-16T01-21-38.357937.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2024_01_16T01_21_38.357937", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-16T01-21-38.357937.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-16T01-21-38.357937.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2024_01_16T01_21_38.357937", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-16T01-21-38.357937.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-16T01-21-38.357937.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2024_01_16T01_21_38.357937", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-16T01-21-38.357937.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-16T01-21-38.357937.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2024_01_16T01_21_38.357937", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-16T01-21-38.357937.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-16T01-21-38.357937.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2024_01_16T01_21_38.357937", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-16T01-21-38.357937.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-16T01-21-38.357937.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2024_01_16T01_21_38.357937", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-16T01-21-38.357937.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-16T01-21-38.357937.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2024_01_16T01_21_38.357937", "path": 
["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-16T01-21-38.357937.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-16T01-21-38.357937.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2024_01_16T01_21_38.357937", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-16T01-21-38.357937.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-16T01-21-38.357937.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2024_01_16T01_21_38.357937", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-16T01-21-38.357937.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-16T01-21-38.357937.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2024_01_16T01_21_38.357937", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-16T01-21-38.357937.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-16T01-21-38.357937.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2024_01_16T01_21_38.357937", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-16T01-21-38.357937.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-16T01-21-38.357937.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2024_01_16T01_21_38.357937", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-16T01-21-38.357937.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-16T01-21-38.357937.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2024_01_16T01_21_38.357937", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-16T01-21-38.357937.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-16T01-21-38.357937.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2024_01_16T01_21_38.357937", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-16T01-21-38.357937.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-16T01-21-38.357937.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2024_01_16T01_21_38.357937", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-16T01-21-38.357937.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-16T01-21-38.357937.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2024_01_16T01_21_38.357937", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-16T01-21-38.357937.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-16T01-21-38.357937.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2024_01_16T01_21_38.357937", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-16T01-21-38.357937.parquet"]}, 
{"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-16T01-21-38.357937.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2024_01_16T01_21_38.357937", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-16T01-21-38.357937.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-16T01-21-38.357937.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2024_01_16T01_21_38.357937", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-16T01-21-38.357937.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-16T01-21-38.357937.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2024_01_16T01_21_38.357937", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-16T01-21-38.357937.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-16T01-21-38.357937.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2024_01_16T01_21_38.357937", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-16T01-21-38.357937.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-16T01-21-38.357937.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2024_01_16T01_21_38.357937", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-16T01-21-38.357937.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-16T01-21-38.357937.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2024_01_16T01_21_38.357937", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-16T01-21-38.357937.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-16T01-21-38.357937.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2024_01_16T01_21_38.357937", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-16T01-21-38.357937.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-16T01-21-38.357937.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2024_01_16T01_21_38.357937", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-16T01-21-38.357937.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-16T01-21-38.357937.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2024_01_16T01_21_38.357937", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-16T01-21-38.357937.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-16T01-21-38.357937.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2024_01_16T01_21_38.357937", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-16T01-21-38.357937.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-16T01-21-38.357937.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2024_01_16T01_21_38.357937", "path": 
["**/details_harness|hendrycksTest-professional_law|5_2024-01-16T01-21-38.357937.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-01-16T01-21-38.357937.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2024_01_16T01_21_38.357937", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-16T01-21-38.357937.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-16T01-21-38.357937.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2024_01_16T01_21_38.357937", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-16T01-21-38.357937.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-16T01-21-38.357937.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2024_01_16T01_21_38.357937", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-16T01-21-38.357937.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-16T01-21-38.357937.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2024_01_16T01_21_38.357937", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-16T01-21-38.357937.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-16T01-21-38.357937.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2024_01_16T01_21_38.357937", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-16T01-21-38.357937.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-16T01-21-38.357937.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2024_01_16T01_21_38.357937", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-16T01-21-38.357937.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-16T01-21-38.357937.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2024_01_16T01_21_38.357937", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-16T01-21-38.357937.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-16T01-21-38.357937.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2024_01_16T01_21_38.357937", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-16T01-21-38.357937.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-16T01-21-38.357937.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2024_01_16T01_21_38.357937", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-16T01-21-38.357937.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-16T01-21-38.357937.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2024_01_16T01_21_38.357937", "path": ["**/details_harness|winogrande|5_2024-01-16T01-21-38.357937.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2024-01-16T01-21-38.357937.parquet"]}]}, {"config_name": "results", "data_files": [{"split": 
"2024_01_16T01_21_38.357937", "path": ["results_2024-01-16T01-21-38.357937.parquet"]}, {"split": "latest", "path": ["results_2024-01-16T01-21-38.357937.parquet"]}]}]} | 2024-01-16T01:24:21+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of mlabonne/NeuralDaredevil-7B
Dataset automatically created during the evaluation run of model mlabonne/NeuralDaredevil-7B on the Open LLM Leaderboard.
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
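A minimal sketch, assuming the details live under the leaderboard's usual `open-llm-leaderboard/details_<org>__<model>` naming (the `harness_winogrande_5` config name appears in this card's config list):

```python
from datasets import load_dataset

# Hypothetical repository id, following the leaderboard naming convention.
data = load_dataset("open-llm-leaderboard/details_mlabonne__NeuralDaredevil-7B",
	"harness_winogrande_5",
	split="train")
```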
## Latest results
These are the latest results from run 2024-01-16T01:21:38.357937 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
## Dataset Details
### Dataset Description
- Curated by:
- Funded by [optional]:
- Shared by [optional]:
- Language(s) (NLP):
- License:
### Dataset Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Out-of-Scope Use
## Dataset Structure
## Dataset Creation
### Curation Rationale
### Source Data
#### Data Collection and Processing
#### Who are the source data producers?
### Annotations [optional]
#### Annotation process
#### Who are the annotators?
#### Personal and Sensitive Information
## Bias, Risks, and Limitations
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Dataset Card Authors [optional]
## Dataset Card Contact
|
c44eb025241fda2bf8bbd29e206e253fea99ccb9 | # Dataset Card for "clusters_with_loss_scores"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | zhan1993/clusters_with_loss_scores | [
"region:us"
] | 2024-01-16T02:27:28+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "cluster_name", "dtype": "string"}, {"name": "expert_names", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 8962, "num_examples": 10}], "download_size": 6121, "dataset_size": 8962}} | 2024-01-16T02:32:53+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "clusters_with_loss_scores"
More Information needed
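Given the schema in the metadata above (a `cluster_name` string and a list of `expert_names` per row), a minimal loading sketch:

```python
from datasets import load_dataset

# Load the single "default" config; the only split is "train" (10 rows).
ds = load_dataset("zhan1993/clusters_with_loss_scores", split="train")

for row in ds:
    # Each row maps a cluster name to the experts assigned to it.
    print(row["cluster_name"], row["expert_names"])
```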
"# Dataset Card for \"clusters_with_loss_scores\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"clusters_with_loss_scores\"\n\nMore Information needed"
] | [
6,
20
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"clusters_with_loss_scores\"\n\nMore Information needed"
] |
145a8de15844e0e0c2a9af7699a270f1ff30f4d6 |
# Dataset Card for Evaluation run of fionazhang/mistral-environment-adapter
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [fionazhang/mistral-environment-adapter](https://huggingface.co/fionazhang/mistral-environment-adapter) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_fionazhang__mistral-environment-adapter",
"harness_winogrande_5",
split="train")
```
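Along the same lines, the aggregated metrics can be read from the "results" configuration; a minimal sketch, assuming it exposes the same split layout as the per-task configs:

```python
from datasets import load_dataset

# "train" always points at the latest aggregated results, per the note above.
results = load_dataset("open-llm-leaderboard/details_fionazhang__mistral-environment-adapter",
	"results",
	split="train")
```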
## Latest results
These are the [latest results from run 2024-01-16T02:26:33.647943](https://huggingface.co/datasets/open-llm-leaderboard/details_fionazhang__mistral-environment-adapter/blob/main/results_2024-01-16T02-26-33.647943.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.2531394512210773,
"acc_stderr": 0.030786270981746203,
"acc_norm": 0.25454761981417956,
"acc_norm_stderr": 0.031614433082308276,
"mc1": 0.2386780905752754,
"mc1_stderr": 0.014922629695456418,
"mc2": 0.48750889120670576,
"mc2_stderr": 0.016439775525762184
},
"harness|arc:challenge|25": {
"acc": 0.21245733788395904,
"acc_stderr": 0.01195348290658295,
"acc_norm": 0.29180887372013653,
"acc_norm_stderr": 0.013284525292403497
},
"harness|hellaswag|10": {
"acc": 0.25632344154550885,
"acc_stderr": 0.004357101984278612,
"acc_norm": 0.2581159131647082,
"acc_norm_stderr": 0.004367037632204527
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.31,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.31,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.3333333333333333,
"acc_stderr": 0.04072314811876837,
"acc_norm": 0.3333333333333333,
"acc_norm_stderr": 0.04072314811876837
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.3026315789473684,
"acc_stderr": 0.037385206761196665,
"acc_norm": 0.3026315789473684,
"acc_norm_stderr": 0.037385206761196665
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.23,
"acc_stderr": 0.04229525846816506,
"acc_norm": 0.23,
"acc_norm_stderr": 0.04229525846816506
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.2188679245283019,
"acc_stderr": 0.02544786382510861,
"acc_norm": 0.2188679245283019,
"acc_norm_stderr": 0.02544786382510861
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.2569444444444444,
"acc_stderr": 0.03653946969442099,
"acc_norm": 0.2569444444444444,
"acc_norm_stderr": 0.03653946969442099
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.18,
"acc_stderr": 0.03861229196653694,
"acc_norm": 0.18,
"acc_norm_stderr": 0.03861229196653694
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.26,
"acc_stderr": 0.0440844002276808,
"acc_norm": 0.26,
"acc_norm_stderr": 0.0440844002276808
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.25,
"acc_stderr": 0.04351941398892446,
"acc_norm": 0.25,
"acc_norm_stderr": 0.04351941398892446
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.24855491329479767,
"acc_stderr": 0.03295304696818318,
"acc_norm": 0.24855491329479767,
"acc_norm_stderr": 0.03295304696818318
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.21568627450980393,
"acc_stderr": 0.04092563958237655,
"acc_norm": 0.21568627450980393,
"acc_norm_stderr": 0.04092563958237655
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.3,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.3,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.20425531914893616,
"acc_stderr": 0.026355158413349424,
"acc_norm": 0.20425531914893616,
"acc_norm_stderr": 0.026355158413349424
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.24561403508771928,
"acc_stderr": 0.04049339297748141,
"acc_norm": 0.24561403508771928,
"acc_norm_stderr": 0.04049339297748141
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.296551724137931,
"acc_stderr": 0.03806142687309993,
"acc_norm": 0.296551724137931,
"acc_norm_stderr": 0.03806142687309993
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.2671957671957672,
"acc_stderr": 0.02278967314577656,
"acc_norm": 0.2671957671957672,
"acc_norm_stderr": 0.02278967314577656
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.15079365079365079,
"acc_stderr": 0.03200686497287392,
"acc_norm": 0.15079365079365079,
"acc_norm_stderr": 0.03200686497287392
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.33,
"acc_stderr": 0.04725815626252604,
"acc_norm": 0.33,
"acc_norm_stderr": 0.04725815626252604
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.25161290322580643,
"acc_stderr": 0.024685979286239956,
"acc_norm": 0.25161290322580643,
"acc_norm_stderr": 0.024685979286239956
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.2955665024630542,
"acc_stderr": 0.032104944337514575,
"acc_norm": 0.2955665024630542,
"acc_norm_stderr": 0.032104944337514575
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.33,
"acc_stderr": 0.047258156262526045,
"acc_norm": 0.33,
"acc_norm_stderr": 0.047258156262526045
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.28484848484848485,
"acc_stderr": 0.035243908445117836,
"acc_norm": 0.28484848484848485,
"acc_norm_stderr": 0.035243908445117836
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.25252525252525254,
"acc_stderr": 0.030954055470365897,
"acc_norm": 0.25252525252525254,
"acc_norm_stderr": 0.030954055470365897
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.22797927461139897,
"acc_stderr": 0.030276909945178256,
"acc_norm": 0.22797927461139897,
"acc_norm_stderr": 0.030276909945178256
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.2128205128205128,
"acc_stderr": 0.020752423722128013,
"acc_norm": 0.2128205128205128,
"acc_norm_stderr": 0.020752423722128013
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.26296296296296295,
"acc_stderr": 0.02684205787383371,
"acc_norm": 0.26296296296296295,
"acc_norm_stderr": 0.02684205787383371
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.21008403361344538,
"acc_stderr": 0.026461398717471874,
"acc_norm": 0.21008403361344538,
"acc_norm_stderr": 0.026461398717471874
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.271523178807947,
"acc_stderr": 0.03631329803969653,
"acc_norm": 0.271523178807947,
"acc_norm_stderr": 0.03631329803969653
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.22201834862385322,
"acc_stderr": 0.01781884956479663,
"acc_norm": 0.22201834862385322,
"acc_norm_stderr": 0.01781884956479663
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.21296296296296297,
"acc_stderr": 0.027920963147993656,
"acc_norm": 0.21296296296296297,
"acc_norm_stderr": 0.027920963147993656
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.25980392156862747,
"acc_stderr": 0.030778554678693264,
"acc_norm": 0.25980392156862747,
"acc_norm_stderr": 0.030778554678693264
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.26582278481012656,
"acc_stderr": 0.028756799629658335,
"acc_norm": 0.26582278481012656,
"acc_norm_stderr": 0.028756799629658335
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.20179372197309417,
"acc_stderr": 0.026936111912802273,
"acc_norm": 0.20179372197309417,
"acc_norm_stderr": 0.026936111912802273
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.22900763358778625,
"acc_stderr": 0.036853466317118506,
"acc_norm": 0.22900763358778625,
"acc_norm_stderr": 0.036853466317118506
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.371900826446281,
"acc_stderr": 0.044120158066245044,
"acc_norm": 0.371900826446281,
"acc_norm_stderr": 0.044120158066245044
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.23148148148148148,
"acc_stderr": 0.04077494709252626,
"acc_norm": 0.23148148148148148,
"acc_norm_stderr": 0.04077494709252626
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.3006134969325153,
"acc_stderr": 0.03602511318806771,
"acc_norm": 0.3006134969325153,
"acc_norm_stderr": 0.03602511318806771
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.24107142857142858,
"acc_stderr": 0.04059867246952687,
"acc_norm": 0.24107142857142858,
"acc_norm_stderr": 0.04059867246952687
},
"harness|hendrycksTest-management|5": {
"acc": 0.1941747572815534,
"acc_stderr": 0.039166677628225836,
"acc_norm": 0.1941747572815534,
"acc_norm_stderr": 0.039166677628225836
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.2564102564102564,
"acc_stderr": 0.02860595370200425,
"acc_norm": 0.2564102564102564,
"acc_norm_stderr": 0.02860595370200425
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.2,
"acc_stderr": 0.040201512610368445,
"acc_norm": 0.2,
"acc_norm_stderr": 0.040201512610368445
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.2707535121328225,
"acc_stderr": 0.015889888362560486,
"acc_norm": 0.2707535121328225,
"acc_norm_stderr": 0.015889888362560486
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.29190751445086704,
"acc_stderr": 0.02447699407624734,
"acc_norm": 0.29190751445086704,
"acc_norm_stderr": 0.02447699407624734
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.24692737430167597,
"acc_stderr": 0.014422292204808835,
"acc_norm": 0.24692737430167597,
"acc_norm_stderr": 0.014422292204808835
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.25163398692810457,
"acc_stderr": 0.024848018263875195,
"acc_norm": 0.25163398692810457,
"acc_norm_stderr": 0.024848018263875195
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.2990353697749196,
"acc_stderr": 0.026003301117885135,
"acc_norm": 0.2990353697749196,
"acc_norm_stderr": 0.026003301117885135
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.2932098765432099,
"acc_stderr": 0.02532988817190092,
"acc_norm": 0.2932098765432099,
"acc_norm_stderr": 0.02532988817190092
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.2695035460992908,
"acc_stderr": 0.026469036818590638,
"acc_norm": 0.2695035460992908,
"acc_norm_stderr": 0.026469036818590638
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.27053455019556716,
"acc_stderr": 0.011345996743539264,
"acc_norm": 0.27053455019556716,
"acc_norm_stderr": 0.011345996743539264
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.16544117647058823,
"acc_stderr": 0.022571771025494767,
"acc_norm": 0.16544117647058823,
"acc_norm_stderr": 0.022571771025494767
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.2761437908496732,
"acc_stderr": 0.018087276935663137,
"acc_norm": 0.2761437908496732,
"acc_norm_stderr": 0.018087276935663137
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.20909090909090908,
"acc_stderr": 0.038950910157241364,
"acc_norm": 0.20909090909090908,
"acc_norm_stderr": 0.038950910157241364
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.24081632653061225,
"acc_stderr": 0.027372942201788163,
"acc_norm": 0.24081632653061225,
"acc_norm_stderr": 0.027372942201788163
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.24875621890547264,
"acc_stderr": 0.030567675938916707,
"acc_norm": 0.24875621890547264,
"acc_norm_stderr": 0.030567675938916707
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.25,
"acc_stderr": 0.04351941398892446,
"acc_norm": 0.25,
"acc_norm_stderr": 0.04351941398892446
},
"harness|hendrycksTest-virology|5": {
"acc": 0.20481927710843373,
"acc_stderr": 0.03141784291663926,
"acc_norm": 0.20481927710843373,
"acc_norm_stderr": 0.03141784291663926
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.29239766081871343,
"acc_stderr": 0.034886477134579215,
"acc_norm": 0.29239766081871343,
"acc_norm_stderr": 0.034886477134579215
},
"harness|truthfulqa:mc|0": {
"mc1": 0.2386780905752754,
"mc1_stderr": 0.014922629695456418,
"mc2": 0.48750889120670576,
"mc2_stderr": 0.016439775525762184
},
"harness|winogrande|5": {
"acc": 0.5043409629044988,
"acc_stderr": 0.014051956064076903
},
"harness|gsm8k|5": {
"acc": 0.0,
"acc_stderr": 0.0
}
}
```
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | open-llm-leaderboard/details_fionazhang__mistral-environment-adapter | [
"region:us"
] | 2024-01-16T02:28:51+00:00 | {"pretty_name": "Evaluation run of fionazhang/mistral-environment-adapter", "dataset_summary": "Dataset automatically created during the evaluation run of model [fionazhang/mistral-environment-adapter](https://huggingface.co/fionazhang/mistral-environment-adapter) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_fionazhang__mistral-environment-adapter\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2024-01-16T02:26:33.647943](https://huggingface.co/datasets/open-llm-leaderboard/details_fionazhang__mistral-environment-adapter/blob/main/results_2024-01-16T02-26-33.647943.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.2531394512210773,\n \"acc_stderr\": 0.030786270981746203,\n \"acc_norm\": 0.25454761981417956,\n \"acc_norm_stderr\": 0.031614433082308276,\n \"mc1\": 0.2386780905752754,\n \"mc1_stderr\": 0.014922629695456418,\n \"mc2\": 0.48750889120670576,\n \"mc2_stderr\": 0.016439775525762184\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.21245733788395904,\n \"acc_stderr\": 0.01195348290658295,\n \"acc_norm\": 0.29180887372013653,\n \"acc_norm_stderr\": 0.013284525292403497\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.25632344154550885,\n \"acc_stderr\": 0.004357101984278612,\n \"acc_norm\": 0.2581159131647082,\n \"acc_norm_stderr\": 0.004367037632204527\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.31,\n \"acc_stderr\": 0.04648231987117316,\n \"acc_norm\": 0.31,\n \"acc_norm_stderr\": 0.04648231987117316\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.3333333333333333,\n \"acc_stderr\": 0.04072314811876837,\n \"acc_norm\": 0.3333333333333333,\n \"acc_norm_stderr\": 0.04072314811876837\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.3026315789473684,\n \"acc_stderr\": 0.037385206761196665,\n \"acc_norm\": 0.3026315789473684,\n \"acc_norm_stderr\": 0.037385206761196665\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.23,\n \"acc_stderr\": 0.04229525846816506,\n \"acc_norm\": 0.23,\n \"acc_norm_stderr\": 0.04229525846816506\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.2188679245283019,\n \"acc_stderr\": 0.02544786382510861,\n \"acc_norm\": 0.2188679245283019,\n \"acc_norm_stderr\": 0.02544786382510861\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.2569444444444444,\n \"acc_stderr\": 0.03653946969442099,\n \"acc_norm\": 0.2569444444444444,\n \"acc_norm_stderr\": 0.03653946969442099\n },\n 
\"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.18,\n \"acc_stderr\": 0.03861229196653694,\n \"acc_norm\": 0.18,\n \"acc_norm_stderr\": 0.03861229196653694\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.26,\n \"acc_stderr\": 0.0440844002276808,\n \"acc_norm\": 0.26,\n \"acc_norm_stderr\": 0.0440844002276808\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.25,\n \"acc_stderr\": 0.04351941398892446,\n \"acc_norm\": 0.25,\n \"acc_norm_stderr\": 0.04351941398892446\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.24855491329479767,\n \"acc_stderr\": 0.03295304696818318,\n \"acc_norm\": 0.24855491329479767,\n \"acc_norm_stderr\": 0.03295304696818318\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.21568627450980393,\n \"acc_stderr\": 0.04092563958237655,\n \"acc_norm\": 0.21568627450980393,\n \"acc_norm_stderr\": 0.04092563958237655\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.3,\n \"acc_stderr\": 0.046056618647183814,\n \"acc_norm\": 0.3,\n \"acc_norm_stderr\": 0.046056618647183814\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.20425531914893616,\n \"acc_stderr\": 0.026355158413349424,\n \"acc_norm\": 0.20425531914893616,\n \"acc_norm_stderr\": 0.026355158413349424\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.24561403508771928,\n \"acc_stderr\": 0.04049339297748141,\n \"acc_norm\": 0.24561403508771928,\n \"acc_norm_stderr\": 0.04049339297748141\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.296551724137931,\n \"acc_stderr\": 0.03806142687309993,\n \"acc_norm\": 0.296551724137931,\n \"acc_norm_stderr\": 0.03806142687309993\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.2671957671957672,\n \"acc_stderr\": 0.02278967314577656,\n \"acc_norm\": 0.2671957671957672,\n \"acc_norm_stderr\": 0.02278967314577656\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.15079365079365079,\n \"acc_stderr\": 0.03200686497287392,\n \"acc_norm\": 0.15079365079365079,\n \"acc_norm_stderr\": 0.03200686497287392\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.33,\n \"acc_stderr\": 0.04725815626252604,\n \"acc_norm\": 0.33,\n \"acc_norm_stderr\": 0.04725815626252604\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.25161290322580643,\n \"acc_stderr\": 0.024685979286239956,\n \"acc_norm\": 0.25161290322580643,\n \"acc_norm_stderr\": 0.024685979286239956\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.2955665024630542,\n \"acc_stderr\": 0.032104944337514575,\n \"acc_norm\": 0.2955665024630542,\n \"acc_norm_stderr\": 0.032104944337514575\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.33,\n \"acc_stderr\": 0.047258156262526045,\n \"acc_norm\": 0.33,\n \"acc_norm_stderr\": 0.047258156262526045\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.28484848484848485,\n \"acc_stderr\": 0.035243908445117836,\n \"acc_norm\": 0.28484848484848485,\n \"acc_norm_stderr\": 0.035243908445117836\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.25252525252525254,\n \"acc_stderr\": 0.030954055470365897,\n \"acc_norm\": 0.25252525252525254,\n \"acc_norm_stderr\": 0.030954055470365897\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.22797927461139897,\n \"acc_stderr\": 0.030276909945178256,\n \"acc_norm\": 0.22797927461139897,\n 
\"acc_norm_stderr\": 0.030276909945178256\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.2128205128205128,\n \"acc_stderr\": 0.020752423722128013,\n \"acc_norm\": 0.2128205128205128,\n \"acc_norm_stderr\": 0.020752423722128013\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.26296296296296295,\n \"acc_stderr\": 0.02684205787383371,\n \"acc_norm\": 0.26296296296296295,\n \"acc_norm_stderr\": 0.02684205787383371\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.21008403361344538,\n \"acc_stderr\": 0.026461398717471874,\n \"acc_norm\": 0.21008403361344538,\n \"acc_norm_stderr\": 0.026461398717471874\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.271523178807947,\n \"acc_stderr\": 0.03631329803969653,\n \"acc_norm\": 0.271523178807947,\n \"acc_norm_stderr\": 0.03631329803969653\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.22201834862385322,\n \"acc_stderr\": 0.01781884956479663,\n \"acc_norm\": 0.22201834862385322,\n \"acc_norm_stderr\": 0.01781884956479663\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.21296296296296297,\n \"acc_stderr\": 0.027920963147993656,\n \"acc_norm\": 0.21296296296296297,\n \"acc_norm_stderr\": 0.027920963147993656\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.25980392156862747,\n \"acc_stderr\": 0.030778554678693264,\n \"acc_norm\": 0.25980392156862747,\n \"acc_norm_stderr\": 0.030778554678693264\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.26582278481012656,\n \"acc_stderr\": 0.028756799629658335,\n \"acc_norm\": 0.26582278481012656,\n \"acc_norm_stderr\": 0.028756799629658335\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.20179372197309417,\n \"acc_stderr\": 0.026936111912802273,\n \"acc_norm\": 0.20179372197309417,\n \"acc_norm_stderr\": 0.026936111912802273\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.22900763358778625,\n \"acc_stderr\": 0.036853466317118506,\n \"acc_norm\": 0.22900763358778625,\n \"acc_norm_stderr\": 0.036853466317118506\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.371900826446281,\n \"acc_stderr\": 0.044120158066245044,\n \"acc_norm\": 0.371900826446281,\n \"acc_norm_stderr\": 0.044120158066245044\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.23148148148148148,\n \"acc_stderr\": 0.04077494709252626,\n \"acc_norm\": 0.23148148148148148,\n \"acc_norm_stderr\": 0.04077494709252626\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.3006134969325153,\n \"acc_stderr\": 0.03602511318806771,\n \"acc_norm\": 0.3006134969325153,\n \"acc_norm_stderr\": 0.03602511318806771\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.24107142857142858,\n \"acc_stderr\": 0.04059867246952687,\n \"acc_norm\": 0.24107142857142858,\n \"acc_norm_stderr\": 0.04059867246952687\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.1941747572815534,\n \"acc_stderr\": 0.039166677628225836,\n \"acc_norm\": 0.1941747572815534,\n \"acc_norm_stderr\": 0.039166677628225836\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.2564102564102564,\n \"acc_stderr\": 0.02860595370200425,\n \"acc_norm\": 0.2564102564102564,\n \"acc_norm_stderr\": 0.02860595370200425\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.2,\n \"acc_stderr\": 0.040201512610368445,\n \"acc_norm\": 0.2,\n \"acc_norm_stderr\": 0.040201512610368445\n 
},\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.2707535121328225,\n \"acc_stderr\": 0.015889888362560486,\n \"acc_norm\": 0.2707535121328225,\n \"acc_norm_stderr\": 0.015889888362560486\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.29190751445086704,\n \"acc_stderr\": 0.02447699407624734,\n \"acc_norm\": 0.29190751445086704,\n \"acc_norm_stderr\": 0.02447699407624734\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.24692737430167597,\n \"acc_stderr\": 0.014422292204808835,\n \"acc_norm\": 0.24692737430167597,\n \"acc_norm_stderr\": 0.014422292204808835\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.25163398692810457,\n \"acc_stderr\": 0.024848018263875195,\n \"acc_norm\": 0.25163398692810457,\n \"acc_norm_stderr\": 0.024848018263875195\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.2990353697749196,\n \"acc_stderr\": 0.026003301117885135,\n \"acc_norm\": 0.2990353697749196,\n \"acc_norm_stderr\": 0.026003301117885135\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.2932098765432099,\n \"acc_stderr\": 0.02532988817190092,\n \"acc_norm\": 0.2932098765432099,\n \"acc_norm_stderr\": 0.02532988817190092\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.2695035460992908,\n \"acc_stderr\": 0.026469036818590638,\n \"acc_norm\": 0.2695035460992908,\n \"acc_norm_stderr\": 0.026469036818590638\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.27053455019556716,\n \"acc_stderr\": 0.011345996743539264,\n \"acc_norm\": 0.27053455019556716,\n \"acc_norm_stderr\": 0.011345996743539264\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.16544117647058823,\n \"acc_stderr\": 0.022571771025494767,\n \"acc_norm\": 0.16544117647058823,\n \"acc_norm_stderr\": 0.022571771025494767\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.2761437908496732,\n \"acc_stderr\": 0.018087276935663137,\n \"acc_norm\": 0.2761437908496732,\n \"acc_norm_stderr\": 0.018087276935663137\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.20909090909090908,\n \"acc_stderr\": 0.038950910157241364,\n \"acc_norm\": 0.20909090909090908,\n \"acc_norm_stderr\": 0.038950910157241364\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.24081632653061225,\n \"acc_stderr\": 0.027372942201788163,\n \"acc_norm\": 0.24081632653061225,\n \"acc_norm_stderr\": 0.027372942201788163\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.24875621890547264,\n \"acc_stderr\": 0.030567675938916707,\n \"acc_norm\": 0.24875621890547264,\n \"acc_norm_stderr\": 0.030567675938916707\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.25,\n \"acc_stderr\": 0.04351941398892446,\n \"acc_norm\": 0.25,\n \"acc_norm_stderr\": 0.04351941398892446\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.20481927710843373,\n \"acc_stderr\": 0.03141784291663926,\n \"acc_norm\": 0.20481927710843373,\n \"acc_norm_stderr\": 0.03141784291663926\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.29239766081871343,\n \"acc_stderr\": 0.034886477134579215,\n \"acc_norm\": 0.29239766081871343,\n \"acc_norm_stderr\": 0.034886477134579215\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.2386780905752754,\n \"mc1_stderr\": 0.014922629695456418,\n \"mc2\": 0.48750889120670576,\n \"mc2_stderr\": 0.016439775525762184\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.5043409629044988,\n \"acc_stderr\": 0.014051956064076903\n },\n 
\"harness|gsm8k|5\": {\n \"acc\": 0.0,\n \"acc_stderr\": 0.0\n }\n}\n```", "repo_url": "https://huggingface.co/fionazhang/mistral-environment-adapter", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2024_01_16T02_26_33.647943", "path": ["**/details_harness|arc:challenge|25_2024-01-16T02-26-33.647943.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2024-01-16T02-26-33.647943.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2024_01_16T02_26_33.647943", "path": ["**/details_harness|gsm8k|5_2024-01-16T02-26-33.647943.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2024-01-16T02-26-33.647943.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2024_01_16T02_26_33.647943", "path": ["**/details_harness|hellaswag|10_2024-01-16T02-26-33.647943.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2024-01-16T02-26-33.647943.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2024_01_16T02_26_33.647943", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-16T02-26-33.647943.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-16T02-26-33.647943.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-16T02-26-33.647943.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-16T02-26-33.647943.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-16T02-26-33.647943.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-16T02-26-33.647943.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-16T02-26-33.647943.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-16T02-26-33.647943.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-01-16T02-26-33.647943.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-16T02-26-33.647943.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-16T02-26-33.647943.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-16T02-26-33.647943.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-16T02-26-33.647943.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-16T02-26-33.647943.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-16T02-26-33.647943.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-16T02-26-33.647943.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-16T02-26-33.647943.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-16T02-26-33.647943.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-16T02-26-33.647943.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-16T02-26-33.647943.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-16T02-26-33.647943.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-16T02-26-33.647943.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-16T02-26-33.647943.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-16T02-26-33.647943.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-16T02-26-33.647943.parquet", 
"**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-16T02-26-33.647943.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-16T02-26-33.647943.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-16T02-26-33.647943.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-16T02-26-33.647943.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-16T02-26-33.647943.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-16T02-26-33.647943.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-16T02-26-33.647943.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-16T02-26-33.647943.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-16T02-26-33.647943.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-16T02-26-33.647943.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-16T02-26-33.647943.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-16T02-26-33.647943.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-16T02-26-33.647943.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-16T02-26-33.647943.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-16T02-26-33.647943.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-16T02-26-33.647943.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-16T02-26-33.647943.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-16T02-26-33.647943.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-16T02-26-33.647943.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-16T02-26-33.647943.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-16T02-26-33.647943.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-16T02-26-33.647943.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-16T02-26-33.647943.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-16T02-26-33.647943.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-16T02-26-33.647943.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-16T02-26-33.647943.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-16T02-26-33.647943.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-16T02-26-33.647943.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-16T02-26-33.647943.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-16T02-26-33.647943.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-16T02-26-33.647943.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-16T02-26-33.647943.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-16T02-26-33.647943.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-16T02-26-33.647943.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-16T02-26-33.647943.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-16T02-26-33.647943.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-16T02-26-33.647943.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-16T02-26-33.647943.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-16T02-26-33.647943.parquet", 
"**/details_harness|hendrycksTest-college_computer_science|5_2024-01-16T02-26-33.647943.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-01-16T02-26-33.647943.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-16T02-26-33.647943.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-16T02-26-33.647943.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-16T02-26-33.647943.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-16T02-26-33.647943.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-16T02-26-33.647943.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-16T02-26-33.647943.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-16T02-26-33.647943.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-16T02-26-33.647943.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-16T02-26-33.647943.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-16T02-26-33.647943.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-16T02-26-33.647943.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-16T02-26-33.647943.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-16T02-26-33.647943.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-16T02-26-33.647943.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-16T02-26-33.647943.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-16T02-26-33.647943.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-16T02-26-33.647943.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-16T02-26-33.647943.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-16T02-26-33.647943.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-16T02-26-33.647943.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-16T02-26-33.647943.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-16T02-26-33.647943.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-16T02-26-33.647943.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-16T02-26-33.647943.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-16T02-26-33.647943.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-16T02-26-33.647943.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-16T02-26-33.647943.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-16T02-26-33.647943.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-16T02-26-33.647943.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-16T02-26-33.647943.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-16T02-26-33.647943.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-16T02-26-33.647943.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-16T02-26-33.647943.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-16T02-26-33.647943.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-16T02-26-33.647943.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-16T02-26-33.647943.parquet", 
"**/details_harness|hendrycksTest-philosophy|5_2024-01-16T02-26-33.647943.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-16T02-26-33.647943.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-16T02-26-33.647943.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-16T02-26-33.647943.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-16T02-26-33.647943.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-16T02-26-33.647943.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-16T02-26-33.647943.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-16T02-26-33.647943.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-16T02-26-33.647943.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-16T02-26-33.647943.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-16T02-26-33.647943.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-16T02-26-33.647943.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2024_01_16T02_26_33.647943", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-16T02-26-33.647943.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-16T02-26-33.647943.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2024_01_16T02_26_33.647943", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-16T02-26-33.647943.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-16T02-26-33.647943.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2024_01_16T02_26_33.647943", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-16T02-26-33.647943.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-16T02-26-33.647943.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2024_01_16T02_26_33.647943", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-16T02-26-33.647943.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-16T02-26-33.647943.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2024_01_16T02_26_33.647943", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-16T02-26-33.647943.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-16T02-26-33.647943.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2024_01_16T02_26_33.647943", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-16T02-26-33.647943.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-16T02-26-33.647943.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2024_01_16T02_26_33.647943", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-16T02-26-33.647943.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-16T02-26-33.647943.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2024_01_16T02_26_33.647943", "path": 
["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-16T02-26-33.647943.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-16T02-26-33.647943.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2024_01_16T02_26_33.647943", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-16T02-26-33.647943.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-16T02-26-33.647943.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2024_01_16T02_26_33.647943", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-16T02-26-33.647943.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-16T02-26-33.647943.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2024_01_16T02_26_33.647943", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-16T02-26-33.647943.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-16T02-26-33.647943.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2024_01_16T02_26_33.647943", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-16T02-26-33.647943.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-16T02-26-33.647943.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2024_01_16T02_26_33.647943", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-16T02-26-33.647943.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-16T02-26-33.647943.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2024_01_16T02_26_33.647943", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-16T02-26-33.647943.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-16T02-26-33.647943.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2024_01_16T02_26_33.647943", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-16T02-26-33.647943.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-16T02-26-33.647943.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2024_01_16T02_26_33.647943", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-16T02-26-33.647943.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-16T02-26-33.647943.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2024_01_16T02_26_33.647943", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-16T02-26-33.647943.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-16T02-26-33.647943.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2024_01_16T02_26_33.647943", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-16T02-26-33.647943.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-global_facts|5_2024-01-16T02-26-33.647943.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2024_01_16T02_26_33.647943", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-16T02-26-33.647943.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-16T02-26-33.647943.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2024_01_16T02_26_33.647943", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-16T02-26-33.647943.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-16T02-26-33.647943.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2024_01_16T02_26_33.647943", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-16T02-26-33.647943.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-16T02-26-33.647943.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2024_01_16T02_26_33.647943", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-16T02-26-33.647943.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-16T02-26-33.647943.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2024_01_16T02_26_33.647943", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-16T02-26-33.647943.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-16T02-26-33.647943.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2024_01_16T02_26_33.647943", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-16T02-26-33.647943.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-16T02-26-33.647943.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2024_01_16T02_26_33.647943", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-16T02-26-33.647943.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-16T02-26-33.647943.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2024_01_16T02_26_33.647943", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-16T02-26-33.647943.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-16T02-26-33.647943.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2024_01_16T02_26_33.647943", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-16T02-26-33.647943.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-16T02-26-33.647943.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2024_01_16T02_26_33.647943", "path": 
["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-16T02-26-33.647943.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-16T02-26-33.647943.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2024_01_16T02_26_33.647943", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-16T02-26-33.647943.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-16T02-26-33.647943.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2024_01_16T02_26_33.647943", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-16T02-26-33.647943.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-16T02-26-33.647943.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2024_01_16T02_26_33.647943", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-16T02-26-33.647943.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-16T02-26-33.647943.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2024_01_16T02_26_33.647943", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-16T02-26-33.647943.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-16T02-26-33.647943.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2024_01_16T02_26_33.647943", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-16T02-26-33.647943.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-16T02-26-33.647943.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2024_01_16T02_26_33.647943", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-16T02-26-33.647943.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-16T02-26-33.647943.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2024_01_16T02_26_33.647943", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-16T02-26-33.647943.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-16T02-26-33.647943.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2024_01_16T02_26_33.647943", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-16T02-26-33.647943.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-16T02-26-33.647943.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2024_01_16T02_26_33.647943", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-16T02-26-33.647943.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-16T02-26-33.647943.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2024_01_16T02_26_33.647943", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-16T02-26-33.647943.parquet"]}, 
{"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-16T02-26-33.647943.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2024_01_16T02_26_33.647943", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-16T02-26-33.647943.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-16T02-26-33.647943.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2024_01_16T02_26_33.647943", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-16T02-26-33.647943.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-16T02-26-33.647943.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2024_01_16T02_26_33.647943", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-16T02-26-33.647943.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-16T02-26-33.647943.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2024_01_16T02_26_33.647943", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-16T02-26-33.647943.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-16T02-26-33.647943.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2024_01_16T02_26_33.647943", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-16T02-26-33.647943.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-16T02-26-33.647943.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2024_01_16T02_26_33.647943", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-16T02-26-33.647943.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-16T02-26-33.647943.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2024_01_16T02_26_33.647943", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-16T02-26-33.647943.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-16T02-26-33.647943.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2024_01_16T02_26_33.647943", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-16T02-26-33.647943.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-16T02-26-33.647943.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2024_01_16T02_26_33.647943", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-16T02-26-33.647943.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-16T02-26-33.647943.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2024_01_16T02_26_33.647943", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-16T02-26-33.647943.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-16T02-26-33.647943.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2024_01_16T02_26_33.647943", "path": 
["**/details_harness|hendrycksTest-professional_law|5_2024-01-16T02-26-33.647943.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-01-16T02-26-33.647943.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2024_01_16T02_26_33.647943", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-16T02-26-33.647943.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-16T02-26-33.647943.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2024_01_16T02_26_33.647943", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-16T02-26-33.647943.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-16T02-26-33.647943.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2024_01_16T02_26_33.647943", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-16T02-26-33.647943.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-16T02-26-33.647943.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2024_01_16T02_26_33.647943", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-16T02-26-33.647943.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-16T02-26-33.647943.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2024_01_16T02_26_33.647943", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-16T02-26-33.647943.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-16T02-26-33.647943.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2024_01_16T02_26_33.647943", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-16T02-26-33.647943.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-16T02-26-33.647943.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2024_01_16T02_26_33.647943", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-16T02-26-33.647943.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-16T02-26-33.647943.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2024_01_16T02_26_33.647943", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-16T02-26-33.647943.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-16T02-26-33.647943.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2024_01_16T02_26_33.647943", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-16T02-26-33.647943.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-16T02-26-33.647943.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2024_01_16T02_26_33.647943", "path": ["**/details_harness|winogrande|5_2024-01-16T02-26-33.647943.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2024-01-16T02-26-33.647943.parquet"]}]}, {"config_name": "results", "data_files": [{"split": 
"2024_01_16T02_26_33.647943", "path": ["results_2024-01-16T02-26-33.647943.parquet"]}, {"split": "latest", "path": ["results_2024-01-16T02-26-33.647943.parquet"]}]}]} | 2024-01-16T02:29:12+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of fionazhang/mistral-environment-adapter
Dataset automatically created during the evaluation run of model fionazhang/mistral-environment-adapter on the Open LLM Leaderboard.
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
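```python
from datasets import load_dataset

# Load the 5-shot Winogrande details for this run; the "train" split
# always points to the latest results.
data = load_dataset(
    "open-llm-leaderboard/details_fionazhang__mistral-environment-adapter",
    "harness_winogrande_5",
    split="train",
)
```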
## Latest results
These are the latest results from run 2024-01-16T02:26:33.647943 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
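The aggregated metrics themselves live in the run-level `results` configuration; a minimal loading sketch (the `results` config and its `latest` split are listed in this dataset's configs):

```python
from datasets import load_dataset

# The "latest" split of the "results" configuration holds the aggregated
# metrics for the most recent evaluation run.
results = load_dataset(
    "open-llm-leaderboard/details_fionazhang__mistral-environment-adapter",
    "results",
    split="latest",
)
```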
## Dataset Details
### Dataset Description
- Curated by:
- Funded by [optional]:
- Shared by [optional]:
- Language(s) (NLP):
- License:
### Dataset Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Out-of-Scope Use
## Dataset Structure
## Dataset Creation
### Curation Rationale
### Source Data
#### Data Collection and Processing
#### Who are the source data producers?
### Annotations [optional]
#### Annotation process
#### Who are the annotators?
#### Personal and Sensitive Information
## Bias, Risks, and Limitations
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Dataset Card Authors [optional]
## Dataset Card Contact
| [
"# Dataset Card for Evaluation run of fionazhang/mistral-environment-adapter\n\n\n\nDataset automatically created during the evaluation run of model fionazhang/mistral-environment-adapter on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2024-01-16T02:26:33.647943(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of fionazhang/mistral-environment-adapter\n\n\n\nDataset automatically created during the evaluation run of model fionazhang/mistral-environment-adapter on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2024-01-16T02:26:33.647943(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] | [
6,
185,
68,
4,
40,
29,
3,
4,
9,
6,
5,
7,
4,
7,
10,
9,
5,
9,
8,
10,
46,
8,
7,
10,
5
] | [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of fionazhang/mistral-environment-adapter\n\n\n\nDataset automatically created during the evaluation run of model fionazhang/mistral-environment-adapter on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2024-01-16T02:26:33.647943(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):## Dataset Details### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Out-of-Scope Use## Dataset Structure## Dataset Creation### Curation Rationale### Source Data#### Data Collection and Processing#### Who are the source data producers?### Annotations [optional]#### Annotation process#### Who are the annotators?#### Personal and Sensitive Information## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Dataset Card Authors [optional]## Dataset Card Contact"
] |
7962940f78a0b3c398d6a972b583b3bb0e4e060a | # Dataset Card for "clusters_with_similarity_scores"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | zhan1993/clusters_with_similarity_scores | [
"region:us"
] | 2024-01-16T02:33:01+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "cluster_name", "dtype": "string"}, {"name": "expert_names", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 8952, "num_examples": 10}], "download_size": 6061, "dataset_size": 8952}} | 2024-01-16T02:33:03+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "clusters_with_similarity_scores"
More Information needed | [
"# Dataset Card for \"clusters_with_similarity_scores\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"clusters_with_similarity_scores\"\n\nMore Information needed"
] | [
6,
21
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"clusters_with_similarity_scores\"\n\nMore Information needed"
] |
b3cb9f80cfade3374e73ecea63da8957eb125b3c |
# Dataset Card for "gap"
## Dataset Description
- **Homepage:** [https://github.com/google-research-datasets/gap-coreference](https://github.com/google-research-datasets/gap-coreference)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [Mind the GAP: A Balanced Corpus of Gendered Ambiguous Pronouns](https://arxiv.org/abs/1810.05201)
- **Point of Contact:** [[email protected]](mailto:[email protected])
- **Size of downloaded dataset files:** 2.40 MB
- **Size of the generated dataset:** 2.43 MB
- **Total amount of disk used:** 4.83 MB
### Dataset Summary
GAP is a gender-balanced dataset containing 8,908 coreference-labeled pairs of
(ambiguous pronoun, antecedent name), sampled from Wikipedia and released by
Google AI Language for the evaluation of coreference resolution in practical
applications.
## Dataset Structure
### Data Instances
#### default
- **Size of downloaded dataset files:** 2.40 MB
- **Size of the generated dataset:** 2.43 MB
- **Total amount of disk used:** 4.83 MB
An example of 'validation' looks as follows.
```
{
"A": "aliquam ultrices sagittis",
"A-coref": false,
"A-offset": 208,
"B": "elementum curabitur vitae",
"B-coref": false,
"B-offset": 435,
"ID": "validation-1",
"Pronoun": "condimentum mattis pellentesque",
"Pronoun-offset": 948,
"Text": "Lorem ipsum dolor",
"URL": "sem fringilla ut"
}
```
### Data Fields
The data fields are the same among all splits.
#### default
- `ID`: a `string` feature.
- `Text`: a `string` feature.
- `Pronoun`: a `string` feature.
- `Pronoun-offset`: an `int32` feature.
- `A`: a `string` feature.
- `A-offset`: an `int32` feature.
- `A-coref`: a `bool` feature.
- `B`: a `string` feature.
- `B-offset`: an `int32` feature.
- `B-coref`: a `bool` feature.
- `URL`: a `string` feature.
### Data Splits
| name |train|validation|test|
|-------|----:|---------:|---:|
|default| 2000| 454|2000|
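A minimal loading sketch for this repository (assuming it loads directly with `datasets`; the split names and field names follow the tables above):

```python
from datasets import load_dataset

# Splits: train (2000), validation (454), test (2000).
gap = load_dataset("coref-data/gap_raw")

example = gap["validation"][0]
# Each instance pairs an ambiguous pronoun with two candidate antecedents, A and B.
print(example["Pronoun"], example["A"], example["A-coref"], example["B"], example["B-coref"])
```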
### Citation Information
```
@article{webster-etal-2018-mind,
title = "Mind the {GAP}: A Balanced Corpus of Gendered Ambiguous Pronouns",
author = "Webster, Kellie and
Recasens, Marta and
Axelrod, Vera and
Baldridge, Jason",
journal = "Transactions of the Association for Computational Linguistics",
volume = "6",
year = "2018",
address = "Cambridge, MA",
publisher = "MIT Press",
url = "https://aclanthology.org/Q18-1042",
doi = "10.1162/tacl_a_00240",
pages = "605--617",
}
```
### Contributions
Modified from dataset added by [@thomwolf](https://github.com/thomwolf), [@patrickvonplaten](https://github.com/patrickvonplaten), [@otakumesi](https://github.com/otakumesi), [@lewtun](https://github.com/lewtun) | coref-data/gap_raw | [
"license:apache-2.0",
"arxiv:1810.05201",
"region:us"
] | 2024-01-16T02:36:23+00:00 | {"license": "apache-2.0"} | 2024-01-19T00:03:40+00:00 | [
"1810.05201"
] | [] | TAGS
#license-apache-2.0 #arxiv-1810.05201 #region-us
| Dataset Card for "gap"
======================
Dataset Description
-------------------
* Homepage: URL
* Repository:
* Paper: Mind the GAP: A Balanced Corpus of Gendered Ambiguous Pronouns
* Point of Contact: gap-coreference@URL
* Size of downloaded dataset files: 2.40 MB
* Size of the generated dataset: 2.43 MB
* Total amount of disk used: 4.83 MB
### Dataset Summary
GAP is a gender-balanced dataset containing 8,908 coreference-labeled pairs of
(ambiguous pronoun, antecedent name), sampled from Wikipedia and released by
Google AI Language for the evaluation of coreference resolution in practical
applications.
Dataset Structure
-----------------
### Data Instances
#### default
* Size of downloaded dataset files: 2.40 MB
* Size of the generated dataset: 2.43 MB
* Total amount of disk used: 4.83 MB
An example of 'validation' looks as follows.
### Data Fields
The data fields are the same among all splits.
#### default
* 'ID': a 'string' feature.
* 'Text': a 'string' feature.
* 'Pronoun': a 'string' feature.
* 'Pronoun-offset': a 'int32' feature.
* 'A': a 'string' feature.
* 'A-offset': a 'int32' feature.
* 'A-coref': a 'bool' feature.
* 'B': a 'string' feature.
* 'B-offset': a 'int32' feature.
* 'B-coref': a 'bool' feature.
* 'URL': a 'string' feature.
### Data Splits
### Contributions
Modified from dataset added by @thomwolf, @patrickvonplaten, @otakumesi, @lewtun
| [
"### Dataset Summary\n\n\nGAP is a gender-balanced dataset containing 8,908 coreference-labeled pairs of\n(ambiguous pronoun, antecedent name), sampled from Wikipedia and released by\nGoogle AI Language for the evaluation of coreference resolution in practical\napplications.\n\n\nDataset Structure\n-----------------",
"### Data Instances",
"#### default\n\n\n* Size of downloaded dataset files: 2.40 MB\n* Size of the generated dataset: 2.43 MB\n* Total amount of disk used: 4.83 MB\n\n\nAn example of 'validation' looks as follows.",
"### Data Fields\n\n\nThe data fields are the same among all splits.",
"#### default\n\n\n* 'ID': a 'string' feature.\n* 'Text': a 'string' feature.\n* 'Pronoun': a 'string' feature.\n* 'Pronoun-offset': a 'int32' feature.\n* 'A': a 'string' feature.\n* 'A-offset': a 'int32' feature.\n* 'A-coref': a 'bool' feature.\n* 'B': a 'string' feature.\n* 'B-offset': a 'int32' feature.\n* 'B-coref': a 'bool' feature.\n* 'URL': a 'string' feature.",
"### Data Splits",
"### Contributions\n\n\nModified from dataset added by @thomwolf, @patrickvonplaten, @otakumesi, @lewtun"
] | [
"TAGS\n#license-apache-2.0 #arxiv-1810.05201 #region-us \n",
"### Dataset Summary\n\n\nGAP is a gender-balanced dataset containing 8,908 coreference-labeled pairs of\n(ambiguous pronoun, antecedent name), sampled from Wikipedia and released by\nGoogle AI Language for the evaluation of coreference resolution in practical\napplications.\n\n\nDataset Structure\n-----------------",
"### Data Instances",
"#### default\n\n\n* Size of downloaded dataset files: 2.40 MB\n* Size of the generated dataset: 2.43 MB\n* Total amount of disk used: 4.83 MB\n\n\nAn example of 'validation' looks as follows.",
"### Data Fields\n\n\nThe data fields are the same among all splits.",
"#### default\n\n\n* 'ID': a 'string' feature.\n* 'Text': a 'string' feature.\n* 'Pronoun': a 'string' feature.\n* 'Pronoun-offset': a 'int32' feature.\n* 'A': a 'string' feature.\n* 'A-offset': a 'int32' feature.\n* 'A-coref': a 'bool' feature.\n* 'B': a 'string' feature.\n* 'B-offset': a 'int32' feature.\n* 'B-coref': a 'bool' feature.\n* 'URL': a 'string' feature.",
"### Data Splits",
"### Contributions\n\n\nModified from dataset added by @thomwolf, @patrickvonplaten, @otakumesi, @lewtun"
] | [
23,
68,
6,
50,
17,
146,
5,
32
] | [
"passage: TAGS\n#license-apache-2.0 #arxiv-1810.05201 #region-us \n### Dataset Summary\n\n\nGAP is a gender-balanced dataset containing 8,908 coreference-labeled pairs of\n(ambiguous pronoun, antecedent name), sampled from Wikipedia and released by\nGoogle AI Language for the evaluation of coreference resolution in practical\napplications.\n\n\nDataset Structure\n-----------------### Data Instances#### default\n\n\n* Size of downloaded dataset files: 2.40 MB\n* Size of the generated dataset: 2.43 MB\n* Total amount of disk used: 4.83 MB\n\n\nAn example of 'validation' looks as follows.### Data Fields\n\n\nThe data fields are the same among all splits.#### default\n\n\n* 'ID': a 'string' feature.\n* 'Text': a 'string' feature.\n* 'Pronoun': a 'string' feature.\n* 'Pronoun-offset': a 'int32' feature.\n* 'A': a 'string' feature.\n* 'A-offset': a 'int32' feature.\n* 'A-coref': a 'bool' feature.\n* 'B': a 'string' feature.\n* 'B-offset': a 'int32' feature.\n* 'B-coref': a 'bool' feature.\n* 'URL': a 'string' feature.### Data Splits### Contributions\n\n\nModified from dataset added by @thomwolf, @patrickvonplaten, @otakumesi, @lewtun"
] |
310b53f0cd10c187c4b23d75eddca398f2408ae3 |
# gen_winograd
- Project: https://ufal.mff.cuni.cz/corefud
- Data source: https://github.com/mbzuai-nlp/gen-X/tree/bf1c0adb4b4def03cdf419c18b2948695bc1fab8
## Details
English Winograd generated by GPT-4
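A minimal loading sketch via the `datasets` library (the configuration and split names are assumptions; inspect the printed layout to see what the repo actually provides):
```
from datasets import load_dataset

# Load the default configuration of this repo; split names are not documented here
gen_winograd = load_dataset("coref-data/gen_winograd_raw")
print(gen_winograd)  # shows the available splits and features
```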
## Citation
```
@misc{whitehouse2023llmpowered,
title={LLM-powered Data Augmentation for Enhanced Crosslingual Performance},
author={Chenxi Whitehouse and Monojit Choudhury and Alham Fikri Aji},
year={2023},
eprint={2305.14288},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | coref-data/gen_winograd_raw | [
"license:cc-by-nd-4.0",
"arxiv:2305.14288",
"region:us"
] | 2024-01-16T02:36:23+00:00 | {"license": "cc-by-nd-4.0"} | 2024-01-19T00:03:39+00:00 | [
"2305.14288"
] | [] | TAGS
#license-cc-by-nd-4.0 #arxiv-2305.14288 #region-us
|
# gen_winograd
- Project: URL
- Data source: URL
## Details
English Winograd generated by GPT-4
| [
"# gen_winograd\n\n- Project: URL\n- Data source: URL",
"## Details\n\nEnglish Winograd generated by GPT-4"
] | [
"TAGS\n#license-cc-by-nd-4.0 #arxiv-2305.14288 #region-us \n",
"# gen_winograd\n\n- Project: URL\n- Data source: URL",
"## Details\n\nEnglish Winograd generated by GPT-4"
] | [
26,
15,
12
] | [
"passage: TAGS\n#license-cc-by-nd-4.0 #arxiv-2305.14288 #region-us \n# gen_winograd\n\n- Project: URL\n- Data source: URL## Details\n\nEnglish Winograd generated by GPT-4"
] |
21d40813562b0873feec3c5da91109000f8878c7 |  | neon-mao/language-dataset | [
"task_categories:text-classification",
"size_categories:10M<n<100M",
"language:en",
"language:zh",
"language:fr",
"language:ru",
"language:ja",
"language:it",
"language:tr",
"language:de",
"language:pt",
"language:es",
"language:he",
"language:uk",
"language:nl",
"language:fi",
"language:pl",
"language:lt",
"language:cs",
"language:da",
"language:sv",
"language:sr",
"language:ar",
"language:el",
"language:ro",
"language:bg",
"language:vi",
"language:sk",
"language:id",
"language:is",
"language:ko",
"language:ca",
"language:hr",
"language:th",
"language:et",
"language:sl",
"language:no",
"license:mit",
"region:us"
] | 2024-01-16T02:38:01+00:00 | {"language": ["en", "zh", "fr", "ru", "ja", "it", "tr", "de", "pt", "es", "he", "uk", "nl", "fi", "pl", "lt", "cs", "da", "sv", "sr", "ar", "el", "ro", "bg", "vi", "sk", "id", "is", "ko", "ca", "hr", "th", "et", "sl", "no"], "license": "mit", "size_categories": ["10M<n<100M"], "task_categories": ["text-classification"]} | 2024-01-16T03:08:19+00:00 | [] | [
"en",
"zh",
"fr",
"ru",
"ja",
"it",
"tr",
"de",
"pt",
"es",
"he",
"uk",
"nl",
"fi",
"pl",
"lt",
"cs",
"da",
"sv",
"sr",
"ar",
"el",
"ro",
"bg",
"vi",
"sk",
"id",
"is",
"ko",
"ca",
"hr",
"th",
"et",
"sl",
"no"
] | TAGS
#task_categories-text-classification #size_categories-10M<n<100M #language-English #language-Chinese #language-French #language-Russian #language-Japanese #language-Italian #language-Turkish #language-German #language-Portuguese #language-Spanish #language-Hebrew #language-Ukrainian #language-Dutch #language-Finnish #language-Polish #language-Lithuanian #language-Czech #language-Danish #language-Swedish #language-Serbian #language-Arabic #language-Modern Greek (1453-) #language-Romanian #language-Bulgarian #language-Vietnamese #language-Slovak #language-Indonesian #language-Icelandic #language-Korean #language-Catalan #language-Croatian #language-Thai #language-Estonian #language-Slovenian #language-Norwegian #license-mit #region-us
| !筛选分析_图片1.png | [] | [
"TAGS\n#task_categories-text-classification #size_categories-10M<n<100M #language-English #language-Chinese #language-French #language-Russian #language-Japanese #language-Italian #language-Turkish #language-German #language-Portuguese #language-Spanish #language-Hebrew #language-Ukrainian #language-Dutch #language-Finnish #language-Polish #language-Lithuanian #language-Czech #language-Danish #language-Swedish #language-Serbian #language-Arabic #language-Modern Greek (1453-) #language-Romanian #language-Bulgarian #language-Vietnamese #language-Slovak #language-Indonesian #language-Icelandic #language-Korean #language-Catalan #language-Croatian #language-Thai #language-Estonian #language-Slovenian #language-Norwegian #license-mit #region-us \n"
] | [
233
] | [
"passage: TAGS\n#task_categories-text-classification #size_categories-10M<n<100M #language-English #language-Chinese #language-French #language-Russian #language-Japanese #language-Italian #language-Turkish #language-German #language-Portuguese #language-Spanish #language-Hebrew #language-Ukrainian #language-Dutch #language-Finnish #language-Polish #language-Lithuanian #language-Czech #language-Danish #language-Swedish #language-Serbian #language-Arabic #language-Modern Greek (1453-) #language-Romanian #language-Bulgarian #language-Vietnamese #language-Slovak #language-Indonesian #language-Icelandic #language-Korean #language-Catalan #language-Croatian #language-Thai #language-Estonian #language-Slovenian #language-Norwegian #license-mit #region-us \n"
] |
d430d2dfdfea81d4ee1ed14b6ecb03bf6494f151 | # Dataset Card for "indic-pl-bert"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | rumbleFTW/indic-pl-bert | [
"region:us"
] | 2024-01-16T02:46:33+00:00 | {"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "input_ids", "sequence": {"sequence": "int32"}}, {"name": "phonemes", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 288672844, "num_examples": 34475}], "download_size": 74299681, "dataset_size": 288672844}} | 2024-01-16T02:47:04+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "indic-pl-bert"
More Information needed | [
"# Dataset Card for \"indic-pl-bert\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"indic-pl-bert\"\n\nMore Information needed"
] | [
6,
16
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"indic-pl-bert\"\n\nMore Information needed"
] |
16e8d4bc258b2135b61bad586223386f8ba98323 | # Dataset Card for Seamless-Align-Expressive (WIP). Inspired by https://huggingface.co/datasets/allenai/nllb
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Needs More Information]
- **Repository:** [Needs More Information]
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
This dataset was created from the [metadata](https://github.com/facebookresearch/seamless_communication/blob/main/docs/expressive/seamless_align_expressive_README.md) for mined expressive Speech-to-Speech (S2S) data released by Meta AI. It covers 5 language pairs and is ~228 GB compressed.
#### How to use the data
There are two ways to access the data:
* Via the Hugging Face Python datasets library
```
Scripts coming soon
```
* Clone the git repo
```
git lfs install
git clone https://huggingface.co/datasets/jhu-clsp/seamless-align-expressive
```
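Until official loading scripts are released, the archives can also be fetched directly. A minimal sketch using `huggingface_hub` (the per-pair directory layout, e.g. `de-en/`, is an assumption based on the language-pair table and file description below):
```
import tarfile
from huggingface_hub import snapshot_download

# Fetch a single language pair; the "de-en/" path pattern is an assumption
local_dir = snapshot_download(
    repo_id="jhu-clsp/seamless-align-expressive",
    repo_type="dataset",
    allow_patterns=["de-en/*"],
)

# Each pair ships as src.tar.gz / tgt.tar.gz; extract them side by side
for name in ("src.tar.gz", "tgt.tar.gz"):
    with tarfile.open(f"{local_dir}/de-en/{name}") as tar:
        tar.extractall(path=f"{local_dir}/de-en/")
```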
### Supported Tasks and Leaderboards
N/A
### Languages
Language pairs can be found [here](https://github.com/facebookresearch/seamless_communication/blob/main/docs/expressive/seamless_align_expressive_README.md).
## Dataset Structure
Each language pair contains two gzipped files, src.tar.gz and tgt.tar.gz
### Data Instances
| Language Pair | Number of samples |
| :---: | :---: |
| de-en | 1385380 |
| en-es | |
| en-fr | |
| en-it | |
| en-zh | |
### Data Fields
Descriptions of the data fields can be found [here](https://github.com/facebookresearch/seamless_communication/blob/main/docs/m4t/seamless_align_README.md).
### Data Splits
The data is not split.
## Dataset Creation
### Curation Rationale
### Source Data
Inspect links in metadata
#### Who are the source language producers?
Speech was collected from the web, much of it via web crawls.
### Annotations
#### Annotation process
Parallel sentences were identified using SONAR Expressive encoders (Duquenne et al., 2023).
#### Who are the annotators?
The data was not human annotated.
### Personal and Sensitive Information
Data may contain personally identifiable information, sensitive content, or toxic content that was publicly shared on the Internet.
## Considerations for Using the Data
### Social Impact of Dataset
This dataset provides data for training machine learning systems for many languages.
### Discussion of Biases
Biases in the data have not been specifically studied; however, as the original source of the data is the World Wide Web, it is likely that the data has biases similar to those prevalent on the Internet. The data may also exhibit biases introduced by language identification and data filtering techniques; lower-resource languages generally have lower accuracy.
### Other Known Limitations
Some of the translations are in fact machine translations. While some website machine-translation tools are identifiable from HTML source, these tools were not filtered out en masse because raw HTML was not available from some sources and CommonCrawl processing started from WET files.
## Additional Information
### Dataset Curators
The data was not curated.
### Licensing Information
The dataset is released under the terms of [MIT](https://opensource.org/license/mit/). **PLEASE, USE DATA RESPONSIBLY**
### Citation Information
Seamless Communication et al., Seamless: Multilingual Expressive and Streaming Speech Translation. arXiv, 2023. <br>
Duquenne et al., SONAR EXPRESSIVE: Zero-shot Expressive Speech-to-Speech Translation. https://ai.meta.com/research/publications/sonar-expressive-zero-shot-expressive-speech-to-speech-translation/, 2023
### Contributions
We thank the Seamless Communication Meta AI team for open sourcing the meta data and instructions on how to use it with special thanks to Loïc Barrault, Yu-An Chung, Mariano Coria Meglioli, David Dale, Ning Dong, Mark Duppenthaler, Paul-Ambroise Duquenne, Brian Ellis, Hady Elsahar, Justin Haaheim, John Hoffman, Min-Jae Hwang, Hirofumi Inaguma, Christopher Klaiber, Ilia Kulikov, Pengwei Li, Daniel Licht, Jean Maillard, Ruslan Mavlyutov, Alice Rakotoarison, Kaushik Ram Sadagopan, Abinesh Ramakrishnan, Tuan Tran, Guillaume Wenzek, Yilin Yang, Ethan Ye, Ivan Evtimov, Pierre Fernandez, Cynthia Gao, Prangthip Hansanti, Elahe Kalbassi, Amanda Kallet, Artyom Kozhevnikov, Gabriel Mejia Gonzalez, Robin San Roman, Christophe Touret, Corinne Wong, Carleigh Wood, Bokai Yu, Pierre Andrews, Can Balioglu, Peng-Jen Chen, Marta R. Costa-jussà, Maha Elbayad, Hongyu Gong, Francisco Guzmán, Kevin Heffernan, Somya Jain, Justine Kao, Ann Lee, Xutai Ma, Alex Mourachko, Benjamin Peloquin, Juan Pino, Sravya Popuri, Christophe Ropers, Safiyyah Saleem, Holger Schwenk, Anna Sun, Paden Tomasello, Changhan Wang, Jeff Wang, Skyler Wang, Mary Williamson. We also thank the Center for Language and Speech Processing(CLSP) for hosting and releasing this data, including Bismarck Bamfo Odoom and Philipp Koehn (for engineering efforts to host the data, and releasing the huggingface dataset), and Alexandre Mourachko (for organizing the connection).
| jhu-clsp/seamless-align-expressive | [
"license:mit",
"region:us"
] | 2024-01-16T03:02:05+00:00 | {"license": "mit"} | 2024-02-13T01:56:22+00:00 | [] | [] | TAGS
#license-mit #region-us
| Dataset Card for Seamless-Align-Expressive (WIP). Inspired by URL
=================================================================
Table of Contents
-----------------
* Table of Contents
* Dataset Description
+ Dataset Summary
+ Supported Tasks and Leaderboards
+ Languages
* Dataset Structure
+ Data Instances
+ Data Fields
+ Data Splits
* Dataset Creation
+ Curation Rationale
+ Source Data
+ Annotations
+ Personal and Sensitive Information
* Considerations for Using the Data
+ Social Impact of Dataset
+ Discussion of Biases
+ Other Known Limitations
* Additional Information
+ Dataset Curators
+ Licensing Information
+ Citation Information
+ Contributions
Dataset Description
-------------------
* Homepage:
* Repository:
* Paper:
* Leaderboard:
* Point of Contact:
### Dataset Summary
This dataset was created from the metadata for mined expressive Speech-to-Speech (S2S) data released by Meta AI. It covers 5 language pairs and is ~228 GB compressed.
#### How to use the data
There are two ways to access the data:
* Via the Hugging Face Python datasets library
* Clone the git repo
### Supported Tasks and Leaderboards
N/A
### Languages
Language pairs can be found here.
Dataset Structure
-----------------
Each language pair contains two gzipped files, URL and URL
### Data Instances
### Data Fields
Descriptions of the data fields can be found here.
### Data Splits
The data is not split.
Dataset Creation
----------------
### Curation Rationale
### Source Data
Inspect links in metadata
#### Who are the source language producers?
Speech was collected from the web, much of it via web crawls.
### Annotations
#### Annotation process
Parallel sentences were identified using SONAR Expressive encoders (Duquenne et al., 2023).
#### Who are the annotators?
The data was not human annotated.
### Personal and Sensitive Information
Data may contain personally identifiable information, sensitive content, or toxic content that was publicly shared on the Internet.
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
This dataset provides data for training machine learning systems for many languages.
### Discussion of Biases
Biases in the data have not been specifically studied; however, as the original source of the data is the World Wide Web, it is likely that the data has biases similar to those prevalent on the Internet. The data may also exhibit biases introduced by language identification and data filtering techniques; lower-resource languages generally have lower accuracy.
### Other Known Limitations
Some of the translations are in fact machine translations. While some website machine-translation tools are identifiable from HTML source, these tools were not filtered out en masse because raw HTML was not available from some sources and CommonCrawl processing started from WET files.
Additional Information
----------------------
### Dataset Curators
The data was not curated.
### Licensing Information
The dataset is released under the terms of MIT. PLEASE, USE DATA RESPONSIBLY
Seamless Communication et al., Seamless: Multilingual Expressive and Streaming Speech Translation. arXiv, 2023.
Duquenne et al., SONAR EXPRESSIVE: Zero-shot Expressive Speech-to-Speech Translation. URL, 2023
### Contributions
We thank the Seamless Communication Meta AI team for open sourcing the meta data and instructions on how to use it with special thanks to Loïc Barrault, Yu-An Chung, Mariano Coria Meglioli, David Dale, Ning Dong, Mark Duppenthaler, Paul-Ambroise Duquenne, Brian Ellis, Hady Elsahar, Justin Haaheim, John Hoffman, Min-Jae Hwang, Hirofumi Inaguma, Christopher Klaiber, Ilia Kulikov, Pengwei Li, Daniel Licht, Jean Maillard, Ruslan Mavlyutov, Alice Rakotoarison, Kaushik Ram Sadagopan, Abinesh Ramakrishnan, Tuan Tran, Guillaume Wenzek, Yilin Yang, Ethan Ye, Ivan Evtimov, Pierre Fernandez, Cynthia Gao, Prangthip Hansanti, Elahe Kalbassi, Amanda Kallet, Artyom Kozhevnikov, Gabriel Mejia Gonzalez, Robin San Roman, Christophe Touret, Corinne Wong, Carleigh Wood, Bokai Yu, Pierre Andrews, Can Balioglu, Peng-Jen Chen, Marta R. Costa-jussà, Maha Elbayad, Hongyu Gong, Francisco Guzmán, Kevin Heffernan, Somya Jain, Justine Kao, Ann Lee, Xutai Ma, Alex Mourachko, Benjamin Peloquin, Juan Pino, Sravya Popuri, Christophe Ropers, Safiyyah Saleem, Holger Schwenk, Anna Sun, Paden Tomasello, Changhan Wang, Jeff Wang, Skyler Wang, Mary Williamson. We also thank the Center for Language and Speech Processing(CLSP) for hosting and releasing this data, including Bismarck Bamfo Odoom and Philipp Koehn (for engineering efforts to host the data, and releasing the huggingface dataset), and Alexandre Mourachko (for organizing the connection).
| [
"### Dataset Summary\n\n\nThis dataset was created based on metadata for mined expressive Speech-to-Speech(S2S) released by Meta AI. The S2S contains data for 5 language pairs. The S2S dataset is ~228GB compressed.",
"#### How to use the data\n\n\nThere are two ways to access the data:\n\n\n* Via the Hugging Face Python datasets library\n* Clone the git repo",
"### Supported Tasks and Leaderboards\n\n\nN/A",
"### Languages\n\n\nLanguage pairs can be found here.\n\n\nDataset Structure\n-----------------\n\n\nEach language pair contains two gzipped files, URL and URL",
"### Data Instances",
"### Data Fields\n\n\nData Field can be found here.",
"### Data Splits\n\n\nThe data is not split.\n\n\nDataset Creation\n----------------",
"### Curation Rationale",
"### Source Data\n\n\nInspect links in metadata",
"#### Who are the source language producers?\n\n\nSpeech was collected from the web many of which are web crawls.",
"### Annotations",
"#### Annotation process\n\n\nParallel sentences were identified using SONAR Expressive encoders. (Duquenne et al., 2023)",
"#### Who are the annotators?\n\n\nThe data was not human annotated.",
"### Personal and Sensitive Information\n\n\nData may contain personally identifiable information, sensitive content, or toxic content that was publicly shared on the Internet.\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset\n\n\nThis dataset provides data for training machine learning systems for many languages.",
"### Discussion of Biases\n\n\nBiases in the data have not been specifically studied, however as the original source of data is World Wide Web it is likely that the data has biases similar to those prevalent in the Internet. The data may also exhibit biases introduced by language identification and data filtering techniques; lower resource languages generally have lower accuracy.",
"### Other Known Limitations\n\n\nSome of the translations are in fact machine translations. While some website machine translation tools are identifiable from HTML source, these tools were not filtered out en mass because raw HTML was not available from some sources and CommonCrawl processing started from WET files.\n\n\nAdditional Information\n----------------------",
"### Dataset Curators\n\n\nThe data was not curated.",
"### Licensing Information\n\n\nThe dataset is released under the terms of MIT. PLEASE, USE DATA RESPONSIBLY\n\n\nSeamless Communication et al, Seamless: Multilingual Expressive and Streaming Speech Translation. arXiv Seamless: Multilingual Expressive and Streaming Speech Translation, 2023. \n\nDuquenne et al, SONAR EXPRESSIVE: Zero-shot Expressive Speech-to-Speech Translation. URL 2023",
"### Contributions\n\n\nWe thank the Seamless Communication Meta AI team for open sourcing the meta data and instructions on how to use it with special thanks to Loïc Barrault, Yu-An Chung, Mariano Coria Meglioli, David Dale, Ning Dong, Mark Duppenthaler, Paul-Ambroise Duquenne, Brian Ellis, Hady Elsahar, Justin Haaheim, John Hoffman, Min-Jae Hwang, Hirofumi Inaguma, Christopher Klaiber, Ilia Kulikov, Pengwei Li, Daniel Licht, Jean Maillard, Ruslan Mavlyutov, Alice Rakotoarison, Kaushik Ram Sadagopan, Abinesh Ramakrishnan, Tuan Tran, Guillaume Wenzek, Yilin Yang, Ethan Ye, Ivan Evtimov, Pierre Fernandez, Cynthia Gao, Prangthip Hansanti, Elahe Kalbassi, Amanda Kallet, Artyom Kozhevnikov, Gabriel Mejia Gonzalez, Robin San Roman, Christophe Touret, Corinne Wong, Carleigh Wood, Bokai Yu, Pierre Andrews, Can Balioglu, Peng-Jen Chen, Marta R. Costa-jussà, Maha Elbayad, Hongyu Gong, Francisco Guzmán, Kevin Heffernan, Somya Jain, Justine Kao, Ann Lee, Xutai Ma, Alex Mourachko, Benjamin Peloquin, Juan Pino, Sravya Popuri, Christophe Ropers, Safiyyah Saleem, Holger Schwenk, Anna Sun, Paden Tomasello, Changhan Wang, Jeff Wang, Skyler Wang, Mary Williamson. We also thank the Center for Language and Speech Processing(CLSP) for hosting and releasing this data, including Bismarck Bamfo Odoom and Philipp Koehn (for engineering efforts to host the data, and releasing the huggingface dataset), and Alexandre Mourachko (for organizing the connection)."
] | [
"TAGS\n#license-mit #region-us \n",
"### Dataset Summary\n\n\nThis dataset was created based on metadata for mined expressive Speech-to-Speech(S2S) released by Meta AI. The S2S contains data for 5 language pairs. The S2S dataset is ~228GB compressed.",
"#### How to use the data\n\n\nThere are two ways to access the data:\n\n\n* Via the Hugging Face Python datasets library\n* Clone the git repo",
"### Supported Tasks and Leaderboards\n\n\nN/A",
"### Languages\n\n\nLanguage pairs can be found here.\n\n\nDataset Structure\n-----------------\n\n\nEach language pair contains two gzipped files, URL and URL",
"### Data Instances",
"### Data Fields\n\n\nData Field can be found here.",
"### Data Splits\n\n\nThe data is not split.\n\n\nDataset Creation\n----------------",
"### Curation Rationale",
"### Source Data\n\n\nInspect links in metadata",
"#### Who are the source language producers?\n\n\nSpeech was collected from the web many of which are web crawls.",
"### Annotations",
"#### Annotation process\n\n\nParallel sentences were identified using SONAR Expressive encoders. (Duquenne et al., 2023)",
"#### Who are the annotators?\n\n\nThe data was not human annotated.",
"### Personal and Sensitive Information\n\n\nData may contain personally identifiable information, sensitive content, or toxic content that was publicly shared on the Internet.\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset\n\n\nThis dataset provides data for training machine learning systems for many languages.",
"### Discussion of Biases\n\n\nBiases in the data have not been specifically studied, however as the original source of data is World Wide Web it is likely that the data has biases similar to those prevalent in the Internet. The data may also exhibit biases introduced by language identification and data filtering techniques; lower resource languages generally have lower accuracy.",
"### Other Known Limitations\n\n\nSome of the translations are in fact machine translations. While some website machine translation tools are identifiable from HTML source, these tools were not filtered out en mass because raw HTML was not available from some sources and CommonCrawl processing started from WET files.\n\n\nAdditional Information\n----------------------",
"### Dataset Curators\n\n\nThe data was not curated.",
"### Licensing Information\n\n\nThe dataset is released under the terms of MIT. PLEASE, USE DATA RESPONSIBLY\n\n\nSeamless Communication et al, Seamless: Multilingual Expressive and Streaming Speech Translation. arXiv Seamless: Multilingual Expressive and Streaming Speech Translation, 2023. \n\nDuquenne et al, SONAR EXPRESSIVE: Zero-shot Expressive Speech-to-Speech Translation. URL 2023",
"### Contributions\n\n\nWe thank the Seamless Communication Meta AI team for open sourcing the meta data and instructions on how to use it with special thanks to Loïc Barrault, Yu-An Chung, Mariano Coria Meglioli, David Dale, Ning Dong, Mark Duppenthaler, Paul-Ambroise Duquenne, Brian Ellis, Hady Elsahar, Justin Haaheim, John Hoffman, Min-Jae Hwang, Hirofumi Inaguma, Christopher Klaiber, Ilia Kulikov, Pengwei Li, Daniel Licht, Jean Maillard, Ruslan Mavlyutov, Alice Rakotoarison, Kaushik Ram Sadagopan, Abinesh Ramakrishnan, Tuan Tran, Guillaume Wenzek, Yilin Yang, Ethan Ye, Ivan Evtimov, Pierre Fernandez, Cynthia Gao, Prangthip Hansanti, Elahe Kalbassi, Amanda Kallet, Artyom Kozhevnikov, Gabriel Mejia Gonzalez, Robin San Roman, Christophe Touret, Corinne Wong, Carleigh Wood, Bokai Yu, Pierre Andrews, Can Balioglu, Peng-Jen Chen, Marta R. Costa-jussà, Maha Elbayad, Hongyu Gong, Francisco Guzmán, Kevin Heffernan, Somya Jain, Justine Kao, Ann Lee, Xutai Ma, Alex Mourachko, Benjamin Peloquin, Juan Pino, Sravya Popuri, Christophe Ropers, Safiyyah Saleem, Holger Schwenk, Anna Sun, Paden Tomasello, Changhan Wang, Jeff Wang, Skyler Wang, Mary Williamson. We also thank the Center for Language and Speech Processing(CLSP) for hosting and releasing this data, including Bismarck Bamfo Odoom and Philipp Koehn (for engineering efforts to host the data, and releasing the huggingface dataset), and Alexandre Mourachko (for organizing the connection)."
] | [
11,
64,
34,
13,
33,
6,
12,
17,
7,
11,
25,
5,
29,
18,
43,
22,
83,
71,
13,
98,
420
] | [
"passage: TAGS\n#license-mit #region-us \n### Dataset Summary\n\n\nThis dataset was created based on metadata for mined expressive Speech-to-Speech(S2S) released by Meta AI. The S2S contains data for 5 language pairs. The S2S dataset is ~228GB compressed.#### How to use the data\n\n\nThere are two ways to access the data:\n\n\n* Via the Hugging Face Python datasets library\n* Clone the git repo### Supported Tasks and Leaderboards\n\n\nN/A### Languages\n\n\nLanguage pairs can be found here.\n\n\nDataset Structure\n-----------------\n\n\nEach language pair contains two gzipped files, URL and URL### Data Instances### Data Fields\n\n\nData Field can be found here.### Data Splits\n\n\nThe data is not split.\n\n\nDataset Creation\n----------------### Curation Rationale### Source Data\n\n\nInspect links in metadata#### Who are the source language producers?\n\n\nSpeech was collected from the web many of which are web crawls.### Annotations#### Annotation process\n\n\nParallel sentences were identified using SONAR Expressive encoders. (Duquenne et al., 2023)#### Who are the annotators?\n\n\nThe data was not human annotated.### Personal and Sensitive Information\n\n\nData may contain personally identifiable information, sensitive content, or toxic content that was publicly shared on the Internet.\n\n\nConsiderations for Using the Data\n---------------------------------### Social Impact of Dataset\n\n\nThis dataset provides data for training machine learning systems for many languages.### Discussion of Biases\n\n\nBiases in the data have not been specifically studied, however as the original source of data is World Wide Web it is likely that the data has biases similar to those prevalent in the Internet. The data may also exhibit biases introduced by language identification and data filtering techniques; lower resource languages generally have lower accuracy.### Other Known Limitations\n\n\nSome of the translations are in fact machine translations. While some website machine translation tools are identifiable from HTML source, these tools were not filtered out en mass because raw HTML was not available from some sources and CommonCrawl processing started from WET files.\n\n\nAdditional Information\n----------------------",
"passage: ### Dataset Curators\n\n\nThe data was not curated.### Licensing Information\n\n\nThe dataset is released under the terms of MIT. PLEASE, USE DATA RESPONSIBLY\n\n\nSeamless Communication et al, Seamless: Multilingual Expressive and Streaming Speech Translation. arXiv Seamless: Multilingual Expressive and Streaming Speech Translation, 2023. \n\nDuquenne et al, SONAR EXPRESSIVE: Zero-shot Expressive Speech-to-Speech Translation. URL 2023"
] |
b3b86ff18c2e11d9b78d1b99fc69cd45d95c7f75 | # Dataset Card for "cai-conversation-dev1705369037"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | HuggingFaceH4/grok-conversation-harmless-old | [
"region:us"
] | 2024-01-16T03:29:43+00:00 | {"dataset_info": {"features": [{"name": "index", "dtype": "int64"}, {"name": "prompt", "dtype": "string"}, {"name": "init_prompt", "struct": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}, {"name": "init_response", "struct": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}, {"name": "critic_prompt", "struct": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}, {"name": "critic_response", "struct": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}, {"name": "revision_prompt", "struct": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}, {"name": "revision_response", "struct": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}, {"name": "messages", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}, {"name": "chosen", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}, {"name": "rejected", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}], "splits": [{"name": "train_sft", "num_bytes": 65938229, "num_examples": 21268}, {"name": "train_prefs", "num_bytes": 66436486, "num_examples": 21269}, {"name": "test_sft", "num_bytes": 3597298, "num_examples": 1156}, {"name": "test_prefs", "num_bytes": 3714045, "num_examples": 1156}], "download_size": 57296534, "dataset_size": 139686058}, "configs": [{"config_name": "default", "data_files": [{"split": "train_sft", "path": "data/train_sft-*"}, {"split": "train_prefs", "path": "data/train_prefs-*"}, {"split": "test_sft", "path": "data/test_sft-*"}, {"split": "test_prefs", "path": "data/test_prefs-*"}]}]} | 2024-01-16T18:24:06+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "cai-conversation-dev1705369037"
More Information needed | [
"# Dataset Card for \"cai-conversation-dev1705369037\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"cai-conversation-dev1705369037\"\n\nMore Information needed"
] | [
6,
22
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"cai-conversation-dev1705369037\"\n\nMore Information needed"
] |
5fd4b2a60f814ff1f343036e24c1b99d93b1c2eb |
This dataset is synthetically generated using ChatGPT 3.5 and contains two-person, multi-turn daily conversations on a variety of topics (e.g.,
travel, food, music, movie/TV, education, hobbies, family, sports, technology, books). Originally, this dataset was used to train
[QuicktypeGPT](https://github.com/chaoluond/quicktypeGPT/tree/main), a GPT model that assists with auto-completing conversations.
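A minimal loading sketch with the `datasets` library (the `train` split name is an assumption, as splits are not documented on this card):
```
from datasets import load_dataset

# Split name "train" is an assumption; inspect the repo for the actual layout
conversations = load_dataset("safetyllm/daily_conversations", split="train")
print(conversations[0])
```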
Here is the full list of [topics](https://github.com/chaoluond/quicktypeGPT/blob/main/training_data/topics.txt) the conversation may cover. | safetyllm/daily_conversations | [
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:en",
"license:cdla-sharing-1.0",
"daily-conversation",
"large-language-model",
"conversation-completion",
"region:us"
] | 2024-01-16T03:48:00+00:00 | {"language": ["en"], "license": "cdla-sharing-1.0", "size_categories": ["10K<n<100K"], "task_categories": ["text-generation"], "tags": ["daily-conversation", "large-language-model", "conversation-completion"]} | 2024-01-21T16:56:37+00:00 | [] | [
"en"
] | TAGS
#task_categories-text-generation #size_categories-10K<n<100K #language-English #license-cdla-sharing-1.0 #daily-conversation #large-language-model #conversation-completion #region-us
|
This dataset is synthetically generated using ChatGPT 3.5 and contains two-person, multi-turn daily conversations on a variety of topics (e.g.,
travel, food, music, movie/TV, education, hobbies, family, sports, technology, books). Originally, this dataset was used to train
QuicktypeGPT, a GPT model that assists with auto-completing conversations.
Here is the full list of topics the conversation may cover. | [] | [
"TAGS\n#task_categories-text-generation #size_categories-10K<n<100K #language-English #license-cdla-sharing-1.0 #daily-conversation #large-language-model #conversation-completion #region-us \n"
] | [
65
] | [
"passage: TAGS\n#task_categories-text-generation #size_categories-10K<n<100K #language-English #license-cdla-sharing-1.0 #daily-conversation #large-language-model #conversation-completion #region-us \n"
] |