sha
stringlengths
40
40
text
stringlengths
1
13.4M
id
stringlengths
2
117
tags
sequencelengths
1
7.91k
created_at
stringlengths
25
25
metadata
stringlengths
2
875k
last_modified
stringlengths
25
25
arxiv
sequencelengths
0
25
languages
sequencelengths
0
7.91k
tags_str
stringlengths
17
159k
text_str
stringlengths
1
447k
text_lists
sequencelengths
0
352
processed_texts
sequencelengths
1
353
b86b4782e41740c7596d5acb6adec8eac1f14a8c
Alpha version of some form of anime-bench **This needs more data; if you have anime QA in text format, send it.** 200 questions; would consider this the easy version of anime-bench, as it's just anime trivia. Anime-challenge should go deeper into the lore of several anime, maybe even iceberg-chart stuff. Dataset rn: - 25% One Piece QA - 25% DBZ QA - 50% Anime Quotes: guess the person who said it. Not in MCQ format, so no random-guess baseline; if it knows, it knows. QA made by ~forcing~ *kindly requesting* Gemini to generate QA pairs from context
VatsaDev/animebench-alpha
[ "language:en", "license:mit", "region:us" ]
2024-02-07T23:30:47+00:00
{"language": ["en"], "license": "mit"}
2024-02-13T15:11:21+00:00
[]
[ "en" ]
TAGS #language-English #license-mit #region-us
Alpha version of some form of anime-bench This needs more data; if you have anime QA in text format, send it. 200 questions; would consider this the easy version of anime-bench, as it's just anime trivia. Anime-challenge should go deeper into the lore of several anime, maybe even iceberg-chart stuff. Dataset rn: - 25% One Piece QA - 25% DBZ QA - 50% Anime Quotes: guess the person who said it. Not in MCQ format, so no random-guess baseline; if it knows, it knows. QA made by ~forcing~ *kindly requesting* Gemini to generate QA pairs from context
[]
[ "TAGS\n#language-English #license-mit #region-us \n" ]
669a86e3d1b71f157c37060433f4cd8d564582b5
# Dataset Card for "VNTL-v3.1-1k" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
lmg-anon/VNTL-v3.1-1k
[ "region:us" ]
2024-02-07T23:34:24+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "val", "path": "data/val-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "ignore_loss", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 31045416, "num_examples": 12903}, {"name": "val", "num_bytes": 3872937, "num_examples": 1639}], "download_size": 15766667, "dataset_size": 34918353}}
2024-02-07T23:34:30+00:00
[]
[]
TAGS #region-us
# Dataset Card for "VNTL-v3.1-1k" More Information needed
[ "# Dataset Card for \"VNTL-v3.1-1k\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"VNTL-v3.1-1k\"\n\nMore Information needed" ]
a14d27956a816dbb441515c1b732465fe5064092
Adapter by: Aisuko. For research purposes only.
aisuko/quora_duplicate_questions
[ "language:en", "license:mit", "region:us" ]
2024-02-07T23:39:15+00:00
{"language": ["en"], "license": "mit"}
2024-02-07T23:42:21+00:00
[]
[ "en" ]
TAGS #language-English #license-mit #region-us
Adapter by: Aisuko. For research purposes only.
[]
[ "TAGS\n#language-English #license-mit #region-us \n" ]
352583b3843447ae21886e9ee745906734f4aa2f
# Description This dataset contains CSVs from the Lex Fridman podcast transcripts provided by [Whispering-GPT](https://huggingface.co/datasets/Whispering-GPT/lex-fridman-podcast). I split the episode transcripts into parent and child chunks for use with RAG. The parent chunks are size 500 and the child chunks are size 50. The children come with embeddings using OpenAI `text-embedding-3-small` with 1024 dimensions. # Motivation This was designed for use with [ParentDocumentRetriever](https://python.langchain.com/docs/modules/data_connection/retrievers/parent_document_retriever) or [LlamaIndex](https://docs.llamaindex.ai/en/stable/examples/node_postprocessor/MetadataReplacementDemo.html). It should provide better retrievals for queries on specific details. Embedding all the chunks took a while, so hopefully this saves someone time. The size of the CSVs allows for uploading with the Supabase "Table Editor" because I didn't want to use postgres COPY.
jaiw/lex_fridman_podcast_embeddings
[ "task_categories:question-answering", "license:apache-2.0", "region:us" ]
2024-02-07T23:42:28+00:00
{"license": "apache-2.0", "task_categories": ["question-answering"]}
2024-02-08T00:04:25+00:00
[]
[]
TAGS #task_categories-question-answering #license-apache-2.0 #region-us
# Description This dataset contains CSVs from the Lex Fridman podcast transcripts provided by Whispering-GPT. I split the episode transcripts into parent and child chunks for use with RAG. The parent chunks are size 500 and the child chunks are size 50. The children come with embeddings using OpenAI 'text-embedding-3-small' with 1024 dimensions. # Motivation This was designed for use with ParentDocumentRetriever or LlamaIndex. It should provide better retrievals for queries on specific details. Embedding all the chunks took a while, so hopefully this saves someone time. The size of the CSVs allows for uploading with the Supabase "Table Editor" because I didn't want to use postgres COPY.
[ "# Description\nThis dataset contains csv's from the Lex Fridman podcast transcripts provided by Whispering-GPT.\nI split the episode transcripts into parent and child chunks for use with RAG.\nThe parent chunks is size 500 and the child is size 50. The children come with embeddings using OpenAI 'text-embedding-3-small' with 1024 dimensionality.", "# Motivation\nThis was designed for use with ParentDocumentRetriever or LlamaIndex.\nIt should provide better retrievals for queries on specific details. Embedding all the chunks took a while so hopefully this saves someone time.\nThe size of the csv's allow for uploading with Supabase \"Table Editor\" because I didn't want to use postgres COPY." ]
[ "TAGS\n#task_categories-question-answering #license-apache-2.0 #region-us \n", "# Description\nThis dataset contains csv's from the Lex Fridman podcast transcripts provided by Whispering-GPT.\nI split the episode transcripts into parent and child chunks for use with RAG.\nThe parent chunks is size 500 and the child is size 50. The children come with embeddings using OpenAI 'text-embedding-3-small' with 1024 dimensionality.", "# Motivation\nThis was designed for use with ParentDocumentRetriever or LlamaIndex.\nIt should provide better retrievals for queries on specific details. Embedding all the chunks took a while so hopefully this saves someone time.\nThe size of the csv's allow for uploading with Supabase \"Table Editor\" because I didn't want to use postgres COPY." ]
d05cac29a492b03790585167aa020a5ce086216d
# Dataset Card for "lakh-dataset" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
nicholasbien/lakh-dataset
[ "region:us" ]
2024-02-07T23:56:11+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 210856059.9478749, "num_examples": 1995}, {"name": "test", "num_bytes": 52740438.052125104, "num_examples": 499}], "download_size": 94476677, "dataset_size": 263596498.0}}
2024-02-08T01:12:27+00:00
[]
[]
TAGS #region-us
# Dataset Card for "lakh-dataset" More Information needed
[ "# Dataset Card for \"lakh-dataset\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"lakh-dataset\"\n\nMore Information needed" ]
37afe70f0b1dbcbbdf2216087e027772c309ab9f
# Dataset Card for short_summary ## Table of Contents - [Dataset Description](#dataset-description) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Source Data](#source-data) ## Dataset Description 50,000 news articles with corresponding short summaries ## Languages The text in the dataset is in English ## Dataset Structure The dataset consists of two columns, Excerpt and Summary. The Excerpt column consists of short text from the news article and the Summary column consists of a few-word summary of the excerpt ## Source Data The dataset is scraped from the Otherweb database
valurank/short-summary
[ "task_categories:summarization", "task_ids:news-articles-summarization", "multilinguality:monolingual", "size_categories:10K<n<100K", "language:en", "license:other", "region:us" ]
2024-02-08T01:14:57+00:00
{"language": ["en"], "license": "other", "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "task_categories": ["summarization"], "task_ids": ["news-articles-summarization"], "license_name": "valurank", "license_link": "LICENSE"}
2024-02-16T11:16:13+00:00
[]
[ "en" ]
TAGS #task_categories-summarization #task_ids-news-articles-summarization #multilinguality-monolingual #size_categories-10K<n<100K #language-English #license-other #region-us
# Dataset Card for short_summary ## Table of Contents - Dataset Description - Languages - Dataset Structure - Source Data ## Dataset Description 50,000 news articles with corresponding short summaries ## Languages The text in the dataset is in English ## Dataset Structure The dataset consists of two columns, Excerpt and Summary. The Excerpt column consists of short text from the news article and the Summary column consists of a few-word summary of the excerpt ## Source Data The dataset is scraped from the Otherweb database
[ "# Dataset Card for short_summary", "## Table of Contents\n- Dataset Description\n- Languages\n- Dataset Structure\n- Source Data", "## Dataset Description\n\n50000 News Articles with corresponding short summary", "## Languages\n\nThe text in the dataset is in English", "## Dataset Structure\n\nThe dataset consists of two columns namely Excerpt and Summary.\nThe Excerpt column consists of short text from the news article and the Summary column consists of the few words summary of the excerpt", "## Source Data\n\nThe dataset is scrapped from Otherweb database" ]
[ "TAGS\n#task_categories-summarization #task_ids-news-articles-summarization #multilinguality-monolingual #size_categories-10K<n<100K #language-English #license-other #region-us \n", "# Dataset Card for short_summary", "## Table of Contents\n- Dataset Description\n- Languages\n- Dataset Structure\n- Source Data", "## Dataset Description\n\n50000 News Articles with corresponding short summary", "## Languages\n\nThe text in the dataset is in English", "## Dataset Structure\n\nThe dataset consists of two columns namely Excerpt and Summary.\nThe Excerpt column consists of short text from the news article and the Summary column consists of the few words summary of the excerpt", "## Source Data\n\nThe dataset is scrapped from Otherweb database" ]
42b63a11ab6de24527be321a64c77336ad4d949c
**Dataset Card for "QuiltVQA_ALL"** <p align="center"> <img src="https://quilt-llava.github.io/static/images/quilt_vqa_samples.png" alt="fig2" width="90%"/> </p> **Human Generated VQA Dataset for Evaluation** [Quilt-VQA](https://quilt-llava.github.io) is generated by extracting Q&A dataset from naturally occurring questions/answers given in educational histopathology videos. With the help of GPT4 and some handcrafted algorithms, we collect a rich evaluation dataset of 1283 Q&A pairs. Top two rows show image-dependent Q&A pairs and bottom two rows show general-knowledge Q&A pairs. The original question posed by the narrator of the video is highlighted in yellow Furthermore, to generate[Quilt-VQA-RED](https://quilt-llava.github.io), we experimented with the visual prompting methodology outlined in Visual Prompting using Red Circle to evaluate models. This involves utilizing the subset of QUILT-VQA with bounding boxes to create ellipses that encapsulate the concepts highlighted by these boxes. <p align="center"> <img src="https://quilt-llava.github.io/static/images/visual_prompting.png" alt="fig2" width="70%"/> </p> **Citation** ```bibtex @article{seyfioglu2023quilt, title={Quilt-LLaVA: Visual Instruction Tuning by Extracting Localized Narratives from Open-Source Histopathology Videos}, author={Seyfioglu, Mehmet Saygin and Ikezogwo, Wisdom O and Ghezloo, Fatemeh and Krishna, Ranjay and Shapiro, Linda}, journal={arXiv preprint arXiv:2312.04746}, year={2023} } ```
wisdomik/QuiltVQA_RED
[ "task_categories:visual-question-answering", "task_categories:question-answering", "size_categories:1K<n<10K", "language:en", "license:mit", "region:us" ]
2024-02-08T01:50:07+00:00
{"language": ["en"], "license": "mit", "size_categories": ["1K<n<10K"], "task_categories": ["visual-question-answering", "question-answering"], "pretty_name": "QUILT-VQA-RED", "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}, {"name": "answer_type", "dtype": "string"}, {"name": "context", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 78944328, "num_examples": 335}], "download_size": 78327983, "dataset_size": 78944328}, "extra_gated_prompt": "Please read and agree to the following terms: 1. The requester details provided are not faked. 2. The resource will not be used for commercial/clinical purposes and will be used for scientific research only. 3. The data will not be re-distributed, published, copied, or further disseminated in any way or form whatsoever, whether for profit or not. 4. The right study/paper (Quilt-1M(https://quilt1m.github.io/) and Quilt-LLaVa (https://quilt-llava.github.io) papers) will be cited in any publication(s) that uses this model/data ", "extra_gated_fields": {"Email": "text", "First and last name": "text", "Affiliation": "text", "Type of Affiliation": {"type": "select", "options": ["Academia", "Industry", "Other"]}, "I want to use this model for": {"type": "select", "options": ["Research", "Education", {"label": "Other", "value": "other"}]}, "I agree to the aforementioned terms of use": "checkbox"}}
2024-02-14T21:55:03+00:00
[]
[ "en" ]
TAGS #task_categories-visual-question-answering #task_categories-question-answering #size_categories-1K<n<10K #language-English #license-mit #region-us
Dataset Card for "QuiltVQA_ALL" <p align="center"> <img src="URL alt="fig2" width="90%"/> </p> Human Generated VQA Dataset for Evaluation Quilt-VQA is generated by extracting Q&A dataset from naturally occurring questions/answers given in educational histopathology videos. With the help of GPT4 and some handcrafted algorithms, we collect a rich evaluation dataset of 1283 Q&A pairs. Top two rows show image-dependent Q&A pairs and bottom two rows show general-knowledge Q&A pairs. The original question posed by the narrator of the video is highlighted in yellow Furthermore, to generateQuilt-VQA-RED, we experimented with the visual prompting methodology outlined in Visual Prompting using Red Circle to evaluate models. This involves utilizing the subset of QUILT-VQA with bounding boxes to create ellipses that encapsulate the concepts highlighted by these boxes. <p align="center"> <img src="URL alt="fig2" width="70%"/> </p> Citation
[]
[ "TAGS\n#task_categories-visual-question-answering #task_categories-question-answering #size_categories-1K<n<10K #language-English #license-mit #region-us \n" ]
d1b0efa3a436e9101dfbde3752db7607da696c35
# Dataset Card for "A-OKVQA" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
HuggingFaceM4/A-OKVQA
[ "region:us" ]
2024-02-08T01:56:45+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "question_id", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "choices", "list": "string"}, {"name": "correct_choice_idx", "dtype": "int8"}, {"name": "direct_answers", "dtype": "string"}, {"name": "difficult_direct_answer", "dtype": "bool"}, {"name": "rationales", "list": "string"}], "splits": [{"name": "train", "num_bytes": 929295572.0, "num_examples": 17056}, {"name": "validation", "num_bytes": 60797340.875, "num_examples": 1145}, {"name": "test", "num_bytes": 338535925.25, "num_examples": 6702}], "download_size": 1323807326, "dataset_size": 1328628838.125}}
2024-02-08T01:57:22+00:00
[]
[]
TAGS #region-us
# Dataset Card for "A-OKVQA" More Information needed
[ "# Dataset Card for \"A-OKVQA\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"A-OKVQA\"\n\nMore Information needed" ]
35f26bdafa9883b5524f17931216af78632db8eb
# Dataset Card for LibriTTS <!-- Provide a quick summary of the dataset. --> LibriTTS is a multi-speaker English corpus of approximately 585 hours of read English speech at 24kHz sampling rate, prepared by Heiga Zen with the assistance of Google Speech and Google Brain team members. The LibriTTS corpus is designed for TTS research. It is derived from the original materials (mp3 audio files from LibriVox and text files from Project Gutenberg) of the LibriSpeech corpus. ## Overview This is the LibriTTS dataset, adapted for the `datasets` library. ## Usage ### Splits There are 7 splits (dots replace dashes from the original dataset, to comply with hf naming requirements): - dev.clean - dev.other - test.clean - test.other - train.clean.100 - train.clean.360 - train.other.500 ### Configurations There are 4 configurations, each of which limits the splits the `load_dataset()` function will download. The default configuration is "all". - "dev": only the "dev.clean" split (good for testing the dataset quickly) - "clean": contains only "clean" splits - "other": contains only "other" splits - "all": contains all splits ### Example Loading the `clean` config with only the `train.clean.360` split. ``` load_dataset("blabble-io/libritts", "clean", split="train.clean.360") ``` Streaming is also supported.
``` load_dataset("blabble-io/libritts", streaming=True) ``` ### Columns ``` { "audio": datasets.Audio(sampling_rate=24_000), "text_normalized": datasets.Value("string"), "text_original": datasets.Value("string"), "speaker_id": datasets.Value("string"), "path": datasets.Value("string"), "chapter_id": datasets.Value("string"), "id": datasets.Value("string"), } ``` ### Example Row ``` { 'audio': { 'path': '/home/user/.cache/huggingface/datasets/downloads/extracted/5551a515e85b9e463062524539c2e1cb52ba32affe128dffd866db0205248bdd/LibriTTS/dev-clean/3081/166546/3081_166546_000028_000002.wav', 'array': ..., 'sampling_rate': 24000 }, 'text_normalized': 'How quickly he disappeared!"', 'text_original': 'How quickly he disappeared!"', 'speaker_id': '3081', 'path': '/home/user/.cache/huggingface/datasets/downloads/extracted/5551a515e85b9e463062524539c2e1cb52ba32affe128dffd866db0205248bdd/LibriTTS/dev-clean/3081/166546/3081_166546_000028_000002.wav', 'chapter_id': '166546', 'id': '3081_166546_000028_000002' } ``` ## Dataset Details ### Dataset Description - **License:** CC BY 4.0 ### Dataset Sources [optional] <!-- Provide the basic links for the dataset. --> - **Homepage:** https://www.openslr.org/60/ - **Paper:** https://arxiv.org/abs/1904.02882 ## Citation <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. --> ``` @ARTICLE{Zen2019-kz, title = "{LibriTTS}: A corpus derived from {LibriSpeech} for text-to-speech", author = "Zen, Heiga and Dang, Viet and Clark, Rob and Zhang, Yu and Weiss, Ron J and Jia, Ye and Chen, Zhifeng and Wu, Yonghui", abstract = "This paper introduces a new speech corpus called ``LibriTTS'' designed for text-to-speech use. It is derived from the original audio and text materials of the LibriSpeech corpus, which has been used for training and evaluating automatic speech recognition systems. 
The new corpus inherits desired properties of the LibriSpeech corpus while addressing a number of issues which make LibriSpeech less than ideal for text-to-speech work. The released corpus consists of 585 hours of speech data at 24kHz sampling rate from 2,456 speakers and the corresponding texts. Experimental results show that neural end-to-end TTS models trained from the LibriTTS corpus achieved above 4.0 in mean opinion scores in naturalness in five out of six evaluation speakers. The corpus is freely available for download from http://www.openslr.org/60/.", month = apr, year = 2019, copyright = "http://arxiv.org/licenses/nonexclusive-distrib/1.0/", archivePrefix = "arXiv", primaryClass = "cs.SD", eprint = "1904.02882" } ```
blabble-io/libritts
[ "task_categories:text-to-speech", "size_categories:10K<n<100K", "language:en", "license:cc-by-4.0", "arxiv:1904.02882", "region:us" ]
2024-02-08T02:07:23+00:00
{"language": ["en"], "license": "cc-by-4.0", "size_categories": ["10K<n<100K"], "task_categories": ["text-to-speech"], "configs": [{"config_name": "dev", "data_files": [{"split": "dev.clean", "path": "data/dev.clean/dev.clean*.parquet"}]}, {"config_name": "clean", "data_files": [{"split": "dev.clean", "path": "data/dev.clean/dev.clean*.parquet"}, {"split": "test.clean", "path": "data/test.clean/test.clean*.parquet"}, {"split": "train.clean.100", "path": "data/train.clean.100/train.clean.100*.parquet"}, {"split": "train.clean.360", "path": "data/train.clean.360/train.clean.360*.parquet"}]}, {"config_name": "other", "data_files": [{"split": "dev.other", "path": "data/dev.other/dev.other*.parquet"}, {"split": "test.other", "path": "data/test.other/test.other*.parquet"}, {"split": "train.other.500", "path": "data/train.other.500/train.other.500*.parquet"}]}, {"config_name": "all", "data_files": [{"split": "dev.clean", "path": "data/dev.clean/dev.clean*.parquet"}, {"split": "dev.other", "path": "data/dev.other/dev.other*.parquet"}, {"split": "test.clean", "path": "data/test.clean/test.clean*.parquet"}, {"split": "test.other", "path": "data/test.other/test.other*.parquet"}, {"split": "train.clean.100", "path": "data/train.clean.100/train.clean.100*.parquet"}, {"split": "train.clean.360", "path": "data/train.clean.360/train.clean.360*.parquet"}, {"split": "train.other.500", "path": "data/train.other.500/train.other.500*.parquet"}]}]}
2024-02-09T21:19:32+00:00
[ "1904.02882" ]
[ "en" ]
TAGS #task_categories-text-to-speech #size_categories-10K<n<100K #language-English #license-cc-by-4.0 #arxiv-1904.02882 #region-us
# Dataset Card for LibriTTS LibriTTS is a multi-speaker English corpus of approximately 585 hours of read English speech at 24kHz sampling rate, prepared by Heiga Zen with the assistance of Google Speech and Google Brain team members. The LibriTTS corpus is designed for TTS research. It is derived from the original materials (mp3 audio files from LibriVox and text files from Project Gutenberg) of the LibriSpeech corpus. ## Overview This is the LibriTTS dataset, adapted for the 'datasets' library. ## Usage ### Splits There are 7 splits (dots replace dashes from the original dataset, to comply with hf naming requirements): - URL - URL - URL - URL - URL.100 - URL.360 - URL.500 ### Configurations There are 4 configurations, each of which limits the splits the 'load_dataset()' function will download. The default configuration is "all". - "dev": only the "URL" split (good for testing the dataset quickly) - "clean": contains only "clean" splits - "other": contains only "other" splits - "all": contains all splits ### Example Loading the 'clean' config with only the 'URL.360' split. Streaming is also supported. ### Columns ### Example Row ## Dataset Details ### Dataset Description - License: CC BY 4.0 ### Dataset Sources [optional] - Homepage: URL - Paper: URL
[ "# Dataset Card for LibriTTS\n\n\n\nLibriTTS is a multi-speaker English corpus of approximately 585 hours of read English speech at 24kHz sampling rate, \nprepared by Heiga Zen with the assistance of Google Speech and Google Brain team members. The LibriTTS corpus is \ndesigned for TTS research. It is derived from the original materials (mp3 audio files from LibriVox and text files \nfrom Project Gutenberg) of the LibriSpeech corpus.", "## Overview\n\nThis is the LibriTTS dataset, adapted for the 'datasets' library.", "## Usage", "### Splits\n\nThere are 7 splits (dots replace dashes from the original dataset, to comply with hf naming requirements):\n\n- URL\n- URL\n- URL\n- URL\n- URL.100\n- URL.360\n- URL.500", "### Configurations\n\nThere are 3 configurations, each which limits the splits the 'load_dataset()' function will download.\n\nThe default configuration is \"all\".\n\n- \"dev\": only the \"URL\" split (good for testing the dataset quickly)\n- \"clean\": contains only \"clean\" splits\n- \"other\": contains only \"other\" splits\n- \"all\": contains only \"all\" splits", "### Example\n\nLoading the 'clean' config with only the 'URL.360' split.\n\n\nStreaming is also supported.", "### Columns", "### Example Row", "## Dataset Details", "### Dataset Description\n\n- License: CC BY 4.0", "### Dataset Sources [optional]\n\n\n\n- Homepage: URL\n- Paper: URL" ]
[ "TAGS\n#task_categories-text-to-speech #size_categories-10K<n<100K #language-English #license-cc-by-4.0 #arxiv-1904.02882 #region-us \n", "# Dataset Card for LibriTTS\n\n\n\nLibriTTS is a multi-speaker English corpus of approximately 585 hours of read English speech at 24kHz sampling rate, \nprepared by Heiga Zen with the assistance of Google Speech and Google Brain team members. The LibriTTS corpus is \ndesigned for TTS research. It is derived from the original materials (mp3 audio files from LibriVox and text files \nfrom Project Gutenberg) of the LibriSpeech corpus.", "## Overview\n\nThis is the LibriTTS dataset, adapted for the 'datasets' library.", "## Usage", "### Splits\n\nThere are 7 splits (dots replace dashes from the original dataset, to comply with hf naming requirements):\n\n- URL\n- URL\n- URL\n- URL\n- URL.100\n- URL.360\n- URL.500", "### Configurations\n\nThere are 3 configurations, each which limits the splits the 'load_dataset()' function will download.\n\nThe default configuration is \"all\".\n\n- \"dev\": only the \"URL\" split (good for testing the dataset quickly)\n- \"clean\": contains only \"clean\" splits\n- \"other\": contains only \"other\" splits\n- \"all\": contains only \"all\" splits", "### Example\n\nLoading the 'clean' config with only the 'URL.360' split.\n\n\nStreaming is also supported.", "### Columns", "### Example Row", "## Dataset Details", "### Dataset Description\n\n- License: CC BY 4.0", "### Dataset Sources [optional]\n\n\n\n- Homepage: URL\n- Paper: URL" ]
44df6e535b84202d7c955373ececc74d9ffff822
# Canadian Case Law Summaries A growing database of more than 600 case law summaries generated by GPT-4 for random case law in Ontario or Canada
simmo/CanlIICaseSummaries
[ "task_categories:summarization", "task_categories:text-generation", "size_categories:n<1K", "language:en", "license:apache-2.0", "legal", "region:us" ]
2024-02-08T02:32:08+00:00
{"language": ["en"], "license": "apache-2.0", "size_categories": ["n<1K"], "task_categories": ["summarization", "text-generation"], "tags": ["legal"]}
2024-02-08T02:37:06+00:00
[]
[ "en" ]
TAGS #task_categories-summarization #task_categories-text-generation #size_categories-n<1K #language-English #license-apache-2.0 #legal #region-us
# Canadian Case Law Summaries A growing database of more than 600 case law summaries generated by GPT-4 for random case law in Ontario or Canada
[ "# Canadian Case Law Summaries\nA database of (currently, still growing) >600 case law summaries generated by GPT 4 for random case law in Ontario or Canada" ]
[ "TAGS\n#task_categories-summarization #task_categories-text-generation #size_categories-n<1K #language-English #license-apache-2.0 #legal #region-us \n", "# Canadian Case Law Summaries\nA database of (currently, still growing) >600 case law summaries generated by GPT 4 for random case law in Ontario or Canada" ]
2e2a9916c2cdd01f3d9360bbdea0ff0fe878e6f0
# Dataset Card for Dataset Name a copy of huggingface samsum This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1). ## Dataset Details ### Dataset Description <!-- Provide a longer summary of what this dataset is. --> - **Curated by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] ### Dataset Sources [optional] <!-- Provide the basic links for the dataset. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the dataset is intended to be used. --> ### Direct Use <!-- This section describes suitable use cases for the dataset. --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. --> [More Information Needed] ## Dataset Structure <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> [More Information Needed] ## Dataset Creation ### Curation Rationale <!-- Motivation for the creation of this dataset. --> [More Information Needed] ### Source Data <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). --> #### Data Collection and Processing <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. 
--> [More Information Needed] #### Who are the source data producers? <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. --> [More Information Needed] ### Annotations [optional] <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. --> #### Annotation process <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. --> [More Information Needed] #### Who are the annotators? <!-- This section describes the people or systems who created the annotations. --> [More Information Needed] #### Personal and Sensitive Information <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. ## Citation [optional] <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. 
--> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Dataset Card Authors [optional] Le Minh Long Nguyen ## Dataset Card Contact [More Information Needed]
longAtSJSU/FirstData
[ "size_categories:1K<n<10K", "license:llama2", "region:us" ]
2024-02-08T02:40:49+00:00
{"license": "llama2", "size_categories": ["1K<n<10K"]}
2024-02-14T02:24:41+00:00
[]
[]
TAGS #size_categories-1K<n<10K #license-llama2 #region-us
# Dataset Card for Dataset Name a copy of huggingface samsum This dataset card aims to be a base template for new datasets. It has been generated using this raw template. ## Dataset Details ### Dataset Description - Curated by: - Funded by [optional]: - Shared by [optional]: - Language(s) (NLP): - License: ### Dataset Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Out-of-Scope Use ## Dataset Structure ## Dataset Creation ### Curation Rationale ### Source Data #### Data Collection and Processing #### Who are the source data producers? ### Annotations [optional] #### Annotation process #### Who are the annotators? #### Personal and Sensitive Information ## Bias, Risks, and Limitations ### Recommendations Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Dataset Card Authors [optional] Le Minh Long Nguyen ## Dataset Card Contact
[ "# Dataset Card for Dataset Name\n\na copy of huggingface samsum\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]\n\nLe Minh Long Nguyen", "## Dataset Card Contact" ]
[ "TAGS\n#size_categories-1K<n<10K #license-llama2 #region-us \n", "# Dataset Card for Dataset Name\n\na copy of huggingface samsum\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]\n\nLe Minh Long Nguyen", "## Dataset Card Contact" ]
64ec7c3c48193105fd73ddf1d59ca8e92c97b727
# Dataset Card for "original_train" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
YY-SemEval2024-task7/original_train
[ "region:us" ]
2024-02-08T04:12:49+00:00
{"dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "int64"}, {"name": "context", "dtype": "string"}], "splits": [{"name": "QQA", "num_bytes": 124939, "num_examples": 564}, {"name": "AWPNLI", "num_bytes": 119443.79501385041, "num_examples": 577}, {"name": "NewsNLI", "num_bytes": 212995.52479338844, "num_examples": 774}, {"name": "RedditNLI", "num_bytes": 44651.2, "num_examples": 200}, {"name": "RTE_Quant", "num_bytes": 43032.79518072289, "num_examples": 132}, {"name": "QNLI_Stress", "num_bytes": 1772665, "num_examples": 6475}], "download_size": 0, "dataset_size": 2317727.3149879617}}
2024-02-08T09:01:13+00:00
[]
[]
TAGS #region-us
# Dataset Card for "original_train" More Information needed
[ "# Dataset Card for \"original_train\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"original_train\"\n\nMore Information needed" ]
e6698c24dc46fcedaf9893589d4fb82c202cb372
# Dataset Card for "original_test" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
YY-SemEval2024-task7/original_test
[ "region:us" ]
2024-02-08T04:15:00+00:00
{"dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "int64"}, {"name": "context", "dtype": "string"}], "splits": [{"name": "QQA", "num_bytes": 34581, "num_examples": 162}, {"name": "AWPNLI", "num_bytes": 30016.204986149583, "num_examples": 145}, {"name": "NewsNLI", "num_bytes": 53386.47520661157, "num_examples": 194}, {"name": "RedditNLI", "num_bytes": 11162.8, "num_examples": 50}, {"name": "RTE_Quant", "num_bytes": 11084.204819277109, "num_examples": 34}, {"name": "QNLI_Stress", "num_bytes": 478574, "num_examples": 1691}], "download_size": 0, "dataset_size": 618804.6850120383}}
2024-02-08T09:01:21+00:00
[]
[]
TAGS #region-us
# Dataset Card for "original_test" More Information needed
[ "# Dataset Card for \"original_test\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"original_test\"\n\nMore Information needed" ]
f8ba2c1956b8aeac759f7fed373bf095ae68ac46
# danbooru-tags-2023 A dataset of danbooru tags. ## Dataset information Generated using [danbooru](https://danbooru.donmai.us/) and [safebooru](https://safebooru.donmai.us/) API. The dataset was created with the following conditions: |Subset name|`all`|`safe`| |-|-|-| |API Endpoint|https://danbooru.donmai.us|https://safebooru.donmai.us| |Date|`2005-01-01..2023-12-31`|`2005-01-01..2023-12-31`| |Score|`>0`|`>0`| |Rating|`g,s,q,e`|`g`| |Filetype|`png,jpg,webp`|`png,jpg,webp`| |Size (number of rows)|6,574,149|1,387,371| ## Usage ``` pip install datasets ``` ```py from datasets import load_dataset dataset = load_dataset( "isek-ai/danbooru-tags-2023", "safe", # or "all" split="train", ) print(dataset) print(dataset[0]) # Dataset({ # features: ['id', 'copyright', 'character', 'artist', 'general', 'meta', 'rating', 'score', 'created_at'], # num_rows: 1387371 # }) # {'id': 12219, # 'copyright': 'fate/stay night, fate/unlimited blade works, fate (series)', # 'character': 'emiya shirou, gilgamesh (fate), gilgamesh (immoral biker jacket) (fate)', # 'artist': 'takeuchi takashi', # 'general': '2boys, alternate costume, alternate hairstyle, battle, blonde hair, brown hair, clenched teeth, duel, dutch angle, field of blades, jacket, long sleeves, male focus, multiple boys, official alternate costume, open clothes, open jacket, open mouth, orange hair, pants, planted, planted sword, raglan sleeves, red eyes, sky, slashing, sword, teeth, unlimited blade works (fate), wasteland, weapon', # 'meta': 'game cg', # 'rating': 'g', # 'score': 33, # 'created_at': '2005-10-15T08:50:32.000+09:00'} ```
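Each tag field in a record is a single comma-separated string rather than a list. A minimal sketch of splitting such a field into individual tags (the record below is a shortened excerpt of the example record above):

```python
# Split a comma-separated danbooru tag field into a clean list of tags.
record = {
    "general": "2boys, alternate costume, blonde hair, male focus",
    "rating": "g",
}

def split_tags(tag_string: str) -> list[str]:
    """Split a comma-separated tag field, trimming whitespace and empty items."""
    return [tag.strip() for tag in tag_string.split(",") if tag.strip()]

tags = split_tags(record["general"])
print(tags)
```

The same helper applies to the `copyright`, `character`, `artist`, and `meta` fields, which use the same comma-separated format.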
isek-ai/danbooru-tags-2023
[ "task_categories:text-classification", "task_categories:text-generation", "task_categories:text2text-generation", "size_categories:1M<n<10M", "language:en", "license:cc0-1.0", "region:us" ]
2024-02-08T04:59:05+00:00
{"language": ["en"], "license": "cc0-1.0", "size_categories": ["1M<n<10M"], "task_categories": ["text-classification", "text-generation", "text2text-generation"], "dataset_info": [{"config_name": "all", "features": [{"name": "id", "dtype": "int64"}, {"name": "copyright", "dtype": "string"}, {"name": "character", "dtype": "string"}, {"name": "artist", "dtype": "string"}, {"name": "general", "dtype": "string"}, {"name": "meta", "dtype": "string"}, {"name": "rating", "dtype": "string"}, {"name": "score", "dtype": "int64"}, {"name": "created_at", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3265428405, "num_examples": 6574149}], "download_size": 1289260187, "dataset_size": 3265428405}, {"config_name": "safe", "features": [{"name": "id", "dtype": "int64"}, {"name": "copyright", "dtype": "string"}, {"name": "character", "dtype": "string"}, {"name": "artist", "dtype": "string"}, {"name": "general", "dtype": "string"}, {"name": "meta", "dtype": "string"}, {"name": "rating", "dtype": "string"}, {"name": "score", "dtype": "int64"}, {"name": "created_at", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 689117431.2710671, "num_examples": 1387371}], "download_size": 276644226, "dataset_size": 689117431.2710671}], "configs": [{"config_name": "all", "data_files": [{"split": "train", "path": "all/train-*"}]}, {"config_name": "safe", "data_files": [{"split": "train", "path": "safe/train-*"}]}]}
2024-02-08T05:04:02+00:00
[]
[ "en" ]
TAGS #task_categories-text-classification #task_categories-text-generation #task_categories-text2text-generation #size_categories-1M<n<10M #language-English #license-cc0-1.0 #region-us
danbooru-tags-2023 ================== A dataset of danbooru tags. Dataset information ------------------- Generated using danbooru and safebooru API. The dataset was created with the following conditions: Subset name: API Endpoint, 'all': URL, 'safe': URL Subset name: Date, 'all': '2005-01-01..2023-12-31', 'safe': '2005-01-01..2023-12-31' Subset name: Score, 'all': '>0', 'safe': '>0' Subset name: Rating, 'all': 'g,s,q,e', 'safe': 'g' Subset name: Filetype, 'all': 'png,jpg,webp', 'safe': 'png,jpg,webp' Subset name: Size (number of rows), 'all': 6,574,149, 'safe': 1,387,371 Usage -----
[]
[ "TAGS\n#task_categories-text-classification #task_categories-text-generation #task_categories-text2text-generation #size_categories-1M<n<10M #language-English #license-cc0-1.0 #region-us \n" ]
9a0300a6fa1a8d1eb7aae084b6b7f59784c02668
Only for research usage. The papers are downloaded from https://sbert.net/datasets/emnlp2016-2018.json
aisuko/emnlp2016_2018
[ "language:en", "license:apache-2.0", "region:us" ]
2024-02-08T05:00:30+00:00
{"language": ["en"], "license": "apache-2.0"}
2024-02-08T05:04:17+00:00
[]
[ "en" ]
TAGS #language-English #license-apache-2.0 #region-us
Only for research usage. The papers are downloaded from the URL
[]
[ "TAGS\n#language-English #license-apache-2.0 #region-us \n" ]
a12de89b8aeb4f0c4f9302383c6edc82238ca72b
# Overview

This is a new curated subset of the SlimOpenOrca data.

# Citation

```bibtex
@misc{TinyOrca,
  title = {TinyOrca: An Open Dataset of GPT-4 Augmented FLAN Reasoning Traces, with Verification},
  author = {Prince Canuma},
  year = {2024},
  publisher = {HuggingFace},
  url = {https://huggingface.co/prince-canuma/TinyOrca}
}
```

```bibtex
@misc{SlimOrca,
  title = {SlimOrca: An Open Dataset of GPT-4 Augmented FLAN Reasoning Traces, with Verification},
  author = {Wing Lian and Guan Wang and Bleys Goodson and Eugene Pentland and Austin Cook and Chanvichet Vong and "Teknium"},
  year = {2023},
  publisher = {HuggingFace},
  url = {https://huggingface.co/Open-Orca/SlimOrca}
}
```

```bibtex
@misc{mukherjee2023orca,
  title={Orca: Progressive Learning from Complex Explanation Traces of GPT-4},
  author={Subhabrata Mukherjee and Arindam Mitra and Ganesh Jawahar and Sahaj Agarwal and Hamid Palangi and Ahmed Awadallah},
  year={2023},
  eprint={2306.02707},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```

```bibtex
@misc{longpre2023flan,
  title={The Flan Collection: Designing Data and Methods for Effective Instruction Tuning},
  author={Shayne Longpre and Le Hou and Tu Vu and Albert Webson and Hyung Won Chung and Yi Tay and Denny Zhou and Quoc V. Le and Barret Zoph and Jason Wei and Adam Roberts},
  year={2023},
  eprint={2301.13688},
  archivePrefix={arXiv},
  primaryClass={cs.AI}
}
```
prince-canuma/TinyOrca
[ "task_categories:conversational", "task_categories:text-classification", "task_categories:token-classification", "task_categories:table-question-answering", "task_categories:question-answering", "task_categories:zero-shot-classification", "task_categories:summarization", "task_categories:feature-extraction", "task_categories:text-generation", "task_categories:text2text-generation", "size_categories:n<1K", "language:en", "license:mit", "arxiv:2306.02707", "arxiv:2301.13688", "region:us" ]
2024-02-08T06:21:35+00:00
{"language": ["en"], "license": "mit", "size_categories": ["n<1K"], "task_categories": ["conversational", "text-classification", "token-classification", "table-question-answering", "question-answering", "zero-shot-classification", "summarization", "feature-extraction", "text-generation", "text2text-generation"], "pretty_name": "TinyOrca", "dataset_info": {"features": [{"name": "messages", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 1637511.0745119185, "num_examples": 1000}], "download_size": 838785, "dataset_size": 1637511.0745119185}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2024-02-08T06:33:16+00:00
[ "2306.02707", "2301.13688" ]
[ "en" ]
TAGS #task_categories-conversational #task_categories-text-classification #task_categories-token-classification #task_categories-table-question-answering #task_categories-question-answering #task_categories-zero-shot-classification #task_categories-summarization #task_categories-feature-extraction #task_categories-text-generation #task_categories-text2text-generation #size_categories-n<1K #language-English #license-mit #arxiv-2306.02707 #arxiv-2301.13688 #region-us
# Overview This is a new curated subset of the SlimOpenOrca data.
[ "# Overview\n\nThis is a new curated subset of the SlimOpenOrca data." ]
[ "TAGS\n#task_categories-conversational #task_categories-text-classification #task_categories-token-classification #task_categories-table-question-answering #task_categories-question-answering #task_categories-zero-shot-classification #task_categories-summarization #task_categories-feature-extraction #task_categories-text-generation #task_categories-text2text-generation #size_categories-n<1K #language-English #license-mit #arxiv-2306.02707 #arxiv-2301.13688 #region-us \n", "# Overview\n\nThis is a new curated subset of the SlimOpenOrca data." ]
7a2caec553658f47088adbbd87485d0cc6477bbd
hi
tworimpa/testdataset
[ "region:us" ]
2024-02-08T06:30:06+00:00
{}
2024-02-08T06:32:28+00:00
[]
[]
TAGS #region-us
hi
[]
[ "TAGS\n#region-us \n" ]
d68b8dce76680311b391f8c6e8968228e0ac2ab7
This dataset was created semi-synthetically using a RAG system containing Composting and Regenerative Agriculture texts sourced from domain experts and public extension office data, connected to a ChatGPT4 API, put together by Copyleft Cultivars Nonprofit, then cleaned lightly by Caleb DeLeeuw (@Solshine on Hugging Face). The dataset is in JSON.
CopyleftCultivars/SemiSynthetic_Composting_Knowledge_For_Agriculture
[ "license:other", "region:us" ]
2024-02-08T06:37:24+00:00
{"license": "other", "license_name": "hl3-cl-eco-extr", "license_link": "https://firstdonoharm.dev/version/3/0/cl-eco-extr.html"}
2024-02-08T07:14:28+00:00
[]
[]
TAGS #license-other #region-us
This dataset was created semi-synthetically using a RAG system containing Composting and Regenerative Agriculture texts sourced from domain experts and public extension office data, connected to a ChatGPT4 API, put together by Copyleft Cultivars Nonprofit, then cleaned lightly by Caleb DeLeeuw (@Solshine on Hugging Face). The dataset is in JSON.
[]
[ "TAGS\n#license-other #region-us \n" ]
7c582ff1b8f19405013de36e8560336aedcc3853
Dataset for Agricultural/Farming methods which increase fertility. The dataset contains scenarios and action suggestions, with intended outcomes. The scenarios are puzzling conundrums on a farm or garden, and the actions are informed by Regenerative Agriculture and Natural Farming principles and practices. Regarding Regenerative Farming practices and Regenerative Agriculture: "What is Regenerative Agriculture? Regenerative agriculture takes a systems-based, holistic look at the land being stewarded and applies various principles with the goal of making the land more productive and biodiverse over time. In most situations, improving soil health and function is the key to improving productivity and biodiversity. One of the key components of healthy soil is organic matter, which is anything that is alive or was once living, such as a plant root, an earthworm, or a microbe." -Kiss The Ground Documentary This curated dataset was created semi-synthetically using a RAG system containing regenerative agriculture data for various plants, sourced from agricultural college public data and extension offices' public data, along with open nutrient projects data, connected to a ChatGPT4 API, put together by Copyleft Cultivars Nonprofit, then cleaned lightly by Caleb DeLeeuw (@Solshine on Hugging Face). This dataset was created and curated in coordination with domain experts in Regenerative Farming and Natural Farming. The dataset is in JSON.
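As a rough illustration of how such records might be consumed, here is a minimal sketch that parses a JSON list of scenario/action entries with the standard library. The field names (`scenario`, `action`, `intended_outcome`) and the sample record are assumptions for illustration only, not the dataset's actual schema.

```python
import json

# Hypothetical record shape -- the field names below are assumed, not taken
# from the dataset itself, which documents scenarios, actions, and outcomes.
raw = """
[
  {
    "scenario": "Tomato leaves are yellowing despite regular watering.",
    "action": "Top-dress with finished compost and mulch the bed.",
    "intended_outcome": "Improved nutrient cycling and moisture retention."
  }
]
"""
entries = json.loads(raw)
for entry in entries:
    print(f"{entry['scenario']} -> {entry['action']}")
```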
Solshine/SemiSynthetic_Data_For_Regenerative_Farming_Agriculture
[ "license:mit", "region:us" ]
2024-02-08T07:25:12+00:00
{"license": "mit"}
2024-02-09T06:01:28+00:00
[]
[]
TAGS #license-mit #region-us
Dataset for Agricultural/Farming methods which increase fertility. The dataset contains scenarios and action suggestions, with intended outcomes. The scenarios are puzzling conundrums on a farm or garden, and the actions are informed by Regenerative Agriculture and Natural Farming principles and practices. Regarding Regenerative Farming practices and Regenerative Agriculture: "What is Regenerative Agriculture? Regenerative agriculture takes a systems-based, holistic look at the land being stewarded and applies various principles with the goal of making the land more productive and biodiverse over time. In most situations, improving soil health and function is the key to improving productivity and biodiversity. One of the key components of healthy soil is organic matter, which is anything that is alive or was once living, such as a plant root, an earthworm, or a microbe." -Kiss The Ground Documentary This curated dataset was created semi-synthetically using a RAG system containing regenerative agriculture data for various plants, sourced from agricultural college public data and extension offices' public data, along with open nutrient projects data, connected to a ChatGPT4 API, put together by Copyleft Cultivars Nonprofit, then cleaned lightly by Caleb DeLeeuw (@Solshine on Hugging Face). This dataset was created and curated in coordination with domain experts in Regenerative Farming and Natural Farming. The dataset is in JSON.
[]
[ "TAGS\n#license-mit #region-us \n" ]
95a8a1e97be1711bdee4b0bf555d6c106b505d9a
<div align="center"> <h1 style="margin-bottom: 0.5em;">WebLINX: Real-World Website Navigation with Multi-Turn Dialogue</h1> <em>Xing Han Lù*, Zdeněk Kasner*, Siva Reddy</em> </div> <div style="margin-bottom: 2em"></div> <div style="display: flex; justify-content: space-around; align-items: center; font-size: 120%;"> <div><a href="https://arxiv.org/abs/2402.05930">📄Paper</a></div> <div><a href="https://mcgill-nlp.github.io/weblinx">🌐Website</a></div> <div><a href="https://huggingface.co/spaces/McGill-NLP/weblinx-explorer">💻Explorer</a></div> <div><a href="https://github.com/McGill-NLP/WebLINX">💾Code</a></div> <div><a href="https://twitter.com/sivareddyg/status/1755799365031965140">🐦Tweets</a></div> <div><a href="https://huggingface.co/collections/McGill-NLP/weblinx-models-65c57d4afeeb282d1dcf8434">🤖Models</a></div> </div> <video width="100%" controls autoplay muted loop> <source src="https://huggingface.co/datasets/McGill-NLP/WebLINX/resolve/main/WeblinxWebsiteDemo.mp4?download=false" type="video/mp4"> Your browser does not support the video tag. </video> ## Quickstart To get started, simply install `datasets` with `pip install datasets` and load the chat data splits: ```python from datasets import load_dataset from huggingface_hub import snapshot_download valid = load_dataset("McGill-NLP/weblinx", split="validation") snapshot_download( "McGill-NLP/WebLINX", repo_type="dataset", allow_patterns="templates/llama.txt", local_dir="./" ) with open('templates/llama.txt') as f: template = f.read() turn = valid[0] turn_text = template.format(**turn) ``` You can now use `turn_text` as an input to LLaMA-style models. 
For example, you can use Sheared-LLaMA: ```python from transformers import pipeline action_model = pipeline( model="McGill-NLP/Sheared-LLaMA-2.7B-weblinx", device=0, torch_dtype='auto' ) out = action_model(turn_text, return_full_text=False, max_new_tokens=64, truncation=True) pred = out[0]['generated_text'] print("Ref:", turn["action"]) print("Pred:", pred) ``` ## Raw Data To use the raw data, you will need to use the `huggingface_hub`: ```python from huggingface_hub import snapshot_download snapshot_download(repo_id="McGill-NLP/WebLINX-full", repo_type="dataset", local_dir="./data/weblinx") ``` For more information on how to use this data using our [official library](https://github.com/McGill-NLP/WebLINX), please refer to the [WebLINX documentation](https://mcgill-nlp.github.io/weblinx/docs). ## Citation If you use our dataset, please cite our work as follows: ```bibtex @misc{lù2024weblinx, title={WebLINX: Real-World Website Navigation with Multi-Turn Dialogue}, author={Xing Han Lù and Zdeněk Kasner and Siva Reddy}, year={2024}, eprint={2402.05930}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
McGill-NLP/WebLINX
[ "task_categories:image-to-text", "task_categories:text-generation", "task_categories:conversational", "task_categories:text2text-generation", "task_categories:sentence-similarity", "size_categories:10K<n<100K", "language:en", "conversational", "image-to-text", "vision", "convAI", "arxiv:2402.05930", "region:us" ]
2024-02-08T08:04:29+00:00
{"language": ["en"], "size_categories": ["10K<n<100K"], "task_categories": ["image-to-text", "text-generation", "conversational", "text2text-generation", "sentence-similarity"], "pretty_name": "weblinx", "config_names": ["chat"], "configs": [{"config_name": "chat", "default": true, "data_files": [{"split": "train", "path": "data/train.csv"}, {"split": "validation", "path": "data/valid.csv"}, {"split": "test", "path": "data/test_iid.csv"}, {"split": "test_geo", "path": "data/test_geo.csv"}, {"split": "test_vis", "path": "data/test_vis.csv"}, {"split": "test_cat", "path": "data/test_cat.csv"}, {"split": "test_web", "path": "data/test_web.csv"}]}], "tags": ["conversational", "image-to-text", "vision", "convAI"]}
2024-02-16T20:44:04+00:00
[ "2402.05930" ]
[ "en" ]
TAGS #task_categories-image-to-text #task_categories-text-generation #task_categories-conversational #task_categories-text2text-generation #task_categories-sentence-similarity #size_categories-10K<n<100K #language-English #conversational #image-to-text #vision #convAI #arxiv-2402.05930 #region-us
<div align="center"> <h1 style="margin-bottom: 0.5em;">WebLINX: Real-World Website Navigation with Multi-Turn Dialogue</h1> <em>Xing Han Lù*, Zdeněk Kasner*, Siva Reddy</em> </div> <div style="margin-bottom: 2em"></div> <div style="display: flex; justify-content: space-around; align-items: center; font-size: 120%;"> <div><a href="URL <div><a href="URL <div><a href="URL <div><a href="URL <div><a href="URL <div><a href="URL </div> <video width="100%" controls autoplay muted loop> <source src="URL type="video/mp4"> Your browser does not support the video tag. </video> ## Quickstart To get started, simply install 'datasets' with 'pip install datasets' and load the chat data splits: You can now use 'turn_text' as an input to LLaMA-style models. For example, you can use Sheared-LLaMA: ## Raw Data To use the raw data, you will need to use the 'huggingface_hub': For more information on how to use this data using our official library, please refer to the WebLINX documentation. If you use our dataset, please cite our work as follows:
[ "## Quickstart\n\nTo get started, simply install 'datasets' with 'pip install datasets' and load the chat data splits:\n\n\n\nYou can now use 'turn_text' as an input to LLaMA-style models. For example, you can use Sheared-LLaMA:", "## Raw Data\n\nTo use the raw data, you will need to use the 'huggingface_hub':\n\n\n\nFor more information on how to use this data using our official library, please refer to the WebLINX documentation.\n\nIf you use our dataset, please cite our work as follows:" ]
[ "TAGS\n#task_categories-image-to-text #task_categories-text-generation #task_categories-conversational #task_categories-text2text-generation #task_categories-sentence-similarity #size_categories-10K<n<100K #language-English #conversational #image-to-text #vision #convAI #arxiv-2402.05930 #region-us \n", "## Quickstart\n\nTo get started, simply install 'datasets' with 'pip install datasets' and load the chat data splits:\n\n\n\nYou can now use 'turn_text' as an input to LLaMA-style models. For example, you can use Sheared-LLaMA:", "## Raw Data\n\nTo use the raw data, you will need to use the 'huggingface_hub':\n\n\n\nFor more information on how to use this data using our official library, please refer to the WebLINX documentation.\n\nIf you use our dataset, please cite our work as follows:" ]
ff658bd927617f848cea994ec59f29e4180936e5
This dataset is fully made with AI. The AI gets information from a PDF, and then creates questions and answers in lists.
```json
[
  {
    "question": "What is 1+2",
    "answer": "1+2 is equals to 3."
  },
  ...
]
```
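A minimal sketch of parsing data in this format with the standard library; the list here inlines the card's own example pair so the snippet is self-contained:

```python
import json

# Parse QA pairs in the format shown above: a list of question/answer objects.
raw = '[{"question": "What is 1+2", "answer": "1+2 is equals to 3."}]'
qa_pairs = json.loads(raw)

for pair in qa_pairs:
    print(f"Q: {pair['question']}")
    print(f"A: {pair['answer']}")
```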
streamerbtw1002/stringtheory-163KB
[ "size_categories:10K<n<100K", "language:en", "license:apache-2.0", "region:us" ]
2024-02-08T09:06:03+00:00
{"language": ["en"], "license": "apache-2.0", "size_categories": ["10K<n<100K"]}
2024-02-08T09:12:03+00:00
[]
[ "en" ]
TAGS #size_categories-10K<n<100K #language-English #license-apache-2.0 #region-us
This dataset is fully made with AI. The AI gets information from a PDF, and then creates questions and answers in lists.
[]
[ "TAGS\n#size_categories-10K<n<100K #language-English #license-apache-2.0 #region-us \n" ]
7752c206cfc165d5077fcc6fcfff63cb66c0fffc
This dataset is scraped from "ordsprogogtalemaader.dk" (https://ordsprogogtalemaader.dk/talemaader-med-betydning/)
Juunge/danske-talemaader
[ "region:us" ]
2024-02-08T09:06:26+00:00
{"dataset_info": {"features": [{"name": "idiom", "dtype": "string"}, {"name": "meaning", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 81378, "num_examples": 1588}], "download_size": 51021, "dataset_size": 81378}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2024-02-10T09:30:28+00:00
[]
[]
TAGS #region-us
This dataset is scraped from "URL" (URL
[]
[ "TAGS\n#region-us \n" ]
18600b9c9c7191906057abbbe5d10fac8341d6cc
# CroissantLLM: A Truly Bilingual French-English Language Model ## Dataset https://arxiv.org/abs/2402.00786 ## Licenses Data redistributed here is subject to the original license under which it was collected. All license information is detailed in the `Data` section of the Technical report. ## Citation ``` @misc{faysse2024croissantllm, title={CroissantLLM: A Truly Bilingual French-English Language Model}, author={Manuel Faysse and Patrick Fernandes and Nuno M. Guerreiro and António Loison and Duarte M. Alves and Caio Corro and Nicolas Boizard and João Alves and Ricardo Rei and Pedro H. Martins and Antoni Bigata Casademunt and François Yvon and André F. T. Martins and Gautier Viaud and Céline Hudelot and Pierre Colombo}, year={2024}, eprint={2402.00786}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ## Note Only the `english_660B_11` split is kept hidden for the moment (until release of the Canary paper) but is available upon request !
croissantllm/croissant_dataset
[ "task_categories:translation", "task_categories:text-generation", "task_categories:text2text-generation", "task_categories:fill-mask", "size_categories:100B<n<1T", "language:fr", "language:en", "arxiv:2402.00786", "region:us" ]
2024-02-08T09:27:30+00:00
{"language": ["fr", "en"], "size_categories": ["100B<n<1T"], "task_categories": ["translation", "text-generation", "text2text-generation", "fill-mask"]}
2024-02-15T08:45:32+00:00
[ "2402.00786" ]
[ "fr", "en" ]
TAGS #task_categories-translation #task_categories-text-generation #task_categories-text2text-generation #task_categories-fill-mask #size_categories-100B<n<1T #language-French #language-English #arxiv-2402.00786 #region-us
# CroissantLLM: A Truly Bilingual French-English Language Model ## Dataset URL ## Licenses Data redistributed here is subject to the original license under which it was collected. All license information is detailed in the 'Data' section of the Technical report. ## Note Only the 'english_660B_11' split is kept hidden for the moment (until release of the Canary paper) but is available upon request !
[ "# CroissantLLM: A Truly Bilingual French-English Language Model", "## Dataset\n\nURL", "## Licenses\n\nData redistributed here is subject to the original license under which it was collected. All license information is detailed in the 'Data' section of the Technical report.", "## Note\n\nOnly the 'english_660B_11' split is kept hidden for the moment (until release of the Canary paper) but is available upon request !" ]
[ "TAGS\n#task_categories-translation #task_categories-text-generation #task_categories-text2text-generation #task_categories-fill-mask #size_categories-100B<n<1T #language-French #language-English #arxiv-2402.00786 #region-us \n", "# CroissantLLM: A Truly Bilingual French-English Language Model", "## Dataset\n\nURL", "## Licenses\n\nData redistributed here is subject to the original license under which it was collected. All license information is detailed in the 'Data' section of the Technical report.", "## Note\n\nOnly the 'english_660B_11' split is kept hidden for the moment (until release of the Canary paper) but is available upon request !" ]
bf7db0d9ef7809b4b04d165e83eaa66c41e7ed19
# Dataset Card for Dataset Name <!-- Provide a quick summary of the dataset. --> This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1). ## Dataset Details ### Dataset Description <!-- Provide a longer summary of what this dataset is. --> - **Curated by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] ### Dataset Sources [optional] <!-- Provide the basic links for the dataset. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the dataset is intended to be used. --> ### Direct Use <!-- This section describes suitable use cases for the dataset. --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. --> [More Information Needed] ## Dataset Structure <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> [More Information Needed] ## Dataset Creation ### Curation Rationale <!-- Motivation for the creation of this dataset. --> [More Information Needed] ### Source Data <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). --> #### Data Collection and Processing <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. 
--> [More Information Needed] #### Who are the source data producers? <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. --> [More Information Needed] ### Annotations [optional] <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. --> #### Annotation process <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. --> [More Information Needed] #### Who are the annotators? <!-- This section describes the people or systems who created the annotations. --> [More Information Needed] #### Personal and Sensitive Information <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. ## Citation [optional] <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. 
--> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Dataset Card Authors [optional] [More Information Needed] ## Dataset Card Contact [More Information Needed]
novaDE/novaDE
[ "license:apache-2.0", "region:us" ]
2024-02-08T09:30:32+00:00
{"license": "apache-2.0"}
2024-02-08T10:20:38+00:00
[]
[]
TAGS #license-apache-2.0 #region-us
# Dataset Card for Dataset Name This dataset card aims to be a base template for new datasets. It has been generated using this raw template. ## Dataset Details ### Dataset Description - Curated by: - Funded by [optional]: - Shared by [optional]: - Language(s) (NLP): - License: ### Dataset Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Out-of-Scope Use ## Dataset Structure ## Dataset Creation ### Curation Rationale ### Source Data #### Data Collection and Processing #### Who are the source data producers? ### Annotations [optional] #### Annotation process #### Who are the annotators? #### Personal and Sensitive Information ## Bias, Risks, and Limitations ### Recommendations Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Dataset Card Authors [optional] ## Dataset Card Contact
[ "# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.", "## Dataset Details", "### Dataset Description\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ "TAGS\n#license-apache-2.0 #region-us \n", "# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.", "## Dataset Details", "### Dataset Description\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
3aa1f3b4ce494cedab20891e363549ad2897d66e
# Rajasthani Hindi Speech Dataset <!-- Provide a quick summary of the dataset. --> This dataset consists of audio recordings of participants reading out stories in Rajasthani Hindi, one sentence at a time. We had 98 participants from Soda, Rajasthan. Each participant read 30 stories. In total, we have 426873 recordings in this dataset. We had roughly 58 male participants and 40 female participants. > **Point to Note:** > While random sampling suggests that most users have tried their best to accurately read out the sentences, we have not performed any quality analysis on the data. There could be errors in some of the recordings. <!-- Provide a longer summary of what this dataset is. --> ### Dataset Sources <!-- Provide the basic links for the dataset. --> - **Link:** [Download](https://www.microsoft.com/en-gb/download/details.aspx?id=105385) - **Curated By:** [Kalika Bali](https://www.microsoft.com/en-us/research/people/kalikab/downloads/) ## Dataset Structure <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> Contains two columns: audio and sentence, containing the audio file and the sentence respectively.
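The two-column structure above can be sketched with a minimal helper. The rows below are stubs that mimic how the `datasets` library decodes the `audio` column into a dict; they are illustrative placeholders, not real recordings from the dataset.

```python
def transcripts(rows):
    """Yield the transcript string for each row of an audio/sentence dataset."""
    for row in rows:
        yield row["sentence"]

# Stub rows mimicking the card's two fields; real rows would come from the
# severo/speech-rj-hi repo, where `audio` decodes to a dict with an array
# and a sampling rate.
stub = [
    {"audio": {"sampling_rate": 16000, "array": []}, "sentence": "पहला वाक्य"},
    {"audio": {"sampling_rate": 16000, "array": []}, "sentence": "दूसरा वाक्य"},
]
print(list(transcripts(stub)))
```

In practice the rows would come from loading the repo with the `datasets` library, which yields the same two fields per example.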
severo/speech-rj-hi
[ "task_categories:text-to-speech", "task_categories:automatic-speech-recognition", "size_categories:100K<n<1M", "language:hi", "license:mit", "region:us" ]
2024-02-08T09:58:40+00:00
{"language": ["hi"], "license": "mit", "size_categories": ["100K<n<1M"], "task_categories": ["text-to-speech", "automatic-speech-recognition"], "pretty_name": "Rajasthani Speech Dataset", "dataset_info": {"features": [{"name": "audio", "dtype": "audio"}, {"name": "sentence", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3672926800.4989805, "num_examples": 422603}, {"name": "test", "num_bytes": 36510981.394019544, "num_examples": 4269}], "download_size": 2808288472, "dataset_size": 3709437781.893}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}]}
2024-02-08T09:58:40+00:00
[]
[ "hi" ]
TAGS #task_categories-text-to-speech #task_categories-automatic-speech-recognition #size_categories-100K<n<1M #language-Hindi #license-mit #region-us
# Rajasthani Hindi Speech Dataset This dataset consists of audio recordings of participants reading out stories in Rajasthani Hindi, one sentence at a time. We had 98 participants from Soda, Rajasthan. Each participant read 30 stories. In total, we have 426873 recordings in this dataset. We had roughly 58 male participants and 40 female participants. > Point to Note: > While random sampling suggests that most users have tried their best to accurately read out the sentences, we have not performed any quality analysis on the data. There could be errors in some of the recordings. ### Dataset Sources - Link: Download - Curated By: Kalika Bali ## Dataset Structure Contains two columns: audio and sentence, containing the audio file and the sentence respectively.
[ "# Rajasthani Hindi Speech Dataset\n\n\nThis dataset consists of audio recordings of participants reading out stories in Rajasthani Hindi, one sentence at a time. We had 98 participants from Soda, Rajasthan. Each participant read 30 stories. In total, we have 426873 recordings in this dataset. We had roughly 58 male participants and 40 female participants.\n\n> Point to Note:\n> While random sampling suggests that most users have to their best effort tried to accurately read out the sentences, we have not performed any quality analysis on the data. There could be errors in some of the recordings.", "### Dataset Sources\n\n\n\n- Link: Download\n- Curated By: Kalika Bali", "## Dataset Structure\n\n\nContains two headers: audio and sentence containing the Audio file and sentence respectively." ]
[ "TAGS\n#task_categories-text-to-speech #task_categories-automatic-speech-recognition #size_categories-100K<n<1M #language-Hindi #license-mit #region-us \n", "# Rajasthani Hindi Speech Dataset\n\n\nThis dataset consists of audio recordings of participants reading out stories in Rajasthani Hindi, one sentence at a time. We had 98 participants from Soda, Rajasthan. Each participant read 30 stories. In total, we have 426873 recordings in this dataset. We had roughly 58 male participants and 40 female participants.\n\n> Point to Note:\n> While random sampling suggests that most users have to their best effort tried to accurately read out the sentences, we have not performed any quality analysis on the data. There could be errors in some of the recordings.", "### Dataset Sources\n\n\n\n- Link: Download\n- Curated By: Kalika Bali", "## Dataset Structure\n\n\nContains two headers: audio and sentence containing the Audio file and sentence respectively." ]
27cca5d196604c219c3f73ad4ff40b6d42f02694
# Dataset Card for "cowsformer" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Niche-Squad/cowsformer
[ "region:us" ]
2024-02-08T10:15:28+00:00
{"dataset_info": [{"config_name": "1a_angle_t2s", "features": [{"name": "image", "dtype": "image"}, {"name": "image_id", "dtype": "int64"}, {"name": "filename", "dtype": "string"}, {"name": "annotations", "sequence": [{"name": "id", "dtype": "int64"}, {"name": "image_id", "dtype": "int64"}, {"name": "category_id", "dtype": "int64"}, {"name": "iscrowd", "dtype": "int64"}, {"name": "area", "dtype": "float64"}, {"name": "bbox", "sequence": "float64", "length": 4}, {"name": "segmentation", "sequence": {"sequence": "int64"}}]}], "splits": [{"name": "train", "num_bytes": 149882564.0, "num_examples": 504}, {"name": "test", "num_bytes": 8802536.0, "num_examples": 50}], "download_size": 158410499, "dataset_size": 158685100.0}, {"config_name": "1b_angle_s2t", "features": [{"name": "image", "dtype": "image"}, {"name": "image_id", "dtype": "int64"}, {"name": "filename", "dtype": "string"}, {"name": "annotations", "sequence": [{"name": "id", "dtype": "int64"}, {"name": "image_id", "dtype": "int64"}, {"name": "category_id", "dtype": "int64"}, {"name": "iscrowd", "dtype": "int64"}, {"name": "area", "dtype": "float64"}, {"name": "bbox", "sequence": "float64", "length": 4}, {"name": "segmentation", "sequence": {"sequence": "int64"}}]}], "splits": [{"name": "train", "num_bytes": 87617845.0, "num_examples": 500}, {"name": "test", "num_bytes": 14190352.0, "num_examples": 50}], "download_size": 101456370, "dataset_size": 101808197.0}, {"config_name": "2_light", "features": [{"name": "image", "dtype": "image"}, {"name": "image_id", "dtype": "int64"}, {"name": "filename", "dtype": "string"}, {"name": "annotations", "sequence": [{"name": "id", "dtype": "int64"}, {"name": "image_id", "dtype": "int64"}, {"name": "category_id", "dtype": "int64"}, {"name": "iscrowd", "dtype": "int64"}, {"name": "area", "dtype": "float64"}, {"name": "bbox", "sequence": "float64", "length": 4}, {"name": "segmentation", "sequence": {"sequence": "int64"}}]}], "splits": [{"name": "train", "num_bytes": 126559067.0, 
"num_examples": 500}, {"name": "test", "num_bytes": 10180247.0, "num_examples": 50}], "download_size": 136382890, "dataset_size": 136739314.0}, {"config_name": "3_breed", "features": [{"name": "image", "dtype": "image"}, {"name": "image_id", "dtype": "int64"}, {"name": "filename", "dtype": "string"}, {"name": "annotations", "sequence": [{"name": "id", "dtype": "int64"}, {"name": "image_id", "dtype": "int64"}, {"name": "category_id", "dtype": "int64"}, {"name": "iscrowd", "dtype": "int64"}, {"name": "area", "dtype": "float64"}, {"name": "bbox", "sequence": "float64", "length": 4}, {"name": "segmentation", "sequence": {"sequence": "int64"}}]}], "splits": [{"name": "train", "num_bytes": 61051159.0, "num_examples": 250}, {"name": "test", "num_bytes": 12590846.0, "num_examples": 50}], "download_size": 73380618, "dataset_size": 73642005.0}, {"config_name": "4_all", "features": [{"name": "image", "dtype": "image"}, {"name": "image_id", "dtype": "int64"}, {"name": "filename", "dtype": "string"}, {"name": "annotations", "sequence": [{"name": "id", "dtype": "int64"}, {"name": "image_id", "dtype": "int64"}, {"name": "category_id", "dtype": "int64"}, {"name": "iscrowd", "dtype": "int64"}, {"name": "area", "dtype": "float64"}, {"name": "bbox", "sequence": "float64", "length": 4}, {"name": "segmentation", "sequence": {"sequence": "int64"}}]}], "splits": [{"name": "train", "num_bytes": 237234622.1, "num_examples": 1004}, {"name": "test", "num_bytes": 22992887.0, "num_examples": 100}], "download_size": 259854098, "dataset_size": 260227509.1}], "configs": [{"config_name": "1a_angle_t2s", "data_files": [{"split": "train", "path": "1a_angle_t2s/train-*"}, {"split": "test", "path": "1a_angle_t2s/test-*"}]}, {"config_name": "1b_angle_s2t", "data_files": [{"split": "train", "path": "1b_angle_s2t/train-*"}, {"split": "test", "path": "1b_angle_s2t/test-*"}]}, {"config_name": "2_light", "data_files": [{"split": "train", "path": "2_light/train-*"}, {"split": "test", "path": 
"2_light/test-*"}]}, {"config_name": "3_breed", "data_files": [{"split": "train", "path": "3_breed/train-*"}, {"split": "test", "path": "3_breed/test-*"}]}, {"config_name": "4_all", "data_files": [{"split": "train", "path": "4_all/train-*"}, {"split": "test", "path": "4_all/test-*"}]}]}
2024-02-08T22:34:27+00:00
[]
[]
TAGS #region-us
# Dataset Card for "cowsformer" More Information needed
[ "# Dataset Card for \"cowsformer\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"cowsformer\"\n\nMore Information needed" ]
53257fe32a6eb5138d1705942c35e46d2853287a
First, I merged the instruction and context columns because it's weird to have instructions saying "summarize this" without the passage itself. Then I used Senku-70B Q2 GGUF to rate each example out of 10 using a custom-made prompt based on clarity, completeness, correctness, relevance and formatting. Here are a few examples of below-5 pairs: ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6324eabf05bd8a54c6eb1650/kTWL4qWKpadViUgtuu_mp.png) Observations & Thoughts: - There are 1734 examples with a <6.5 score and 562 examples with a <5 score. Around 10% of the dataset looks low quality and/or confusing. - There may be improvement potential with a better meta prompt and quantized models. - It took ~15 hours to process the whole dataset using an RTX 3090. - Miqu's Fine-Tune Senku did a good job at providing only numeric answers in instruction mode. Ratings with alphabetic characters were removed; there were fewer than a hundred of these. Most of the removed content was a bit confusing. Leave a like if human disappear like dinosaur. Also keep in mind that I have no idea what I am doing actually.
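The below-threshold counts above suggest a simple filtering pass over the rated examples. This is a toy sketch on in-memory rows; the `rate` column name and the sample scores are hypothetical, introduced only for illustration.

```python
# Toy rows standing in for rated Dolly examples; the "rate" key is a
# hypothetical column name and the scores are made up for illustration.
rows = [
    {"instruction": "Summarize the passage below ...", "rate": 8.5},
    {"instruction": "Confusing half-prompt", "rate": 4.0},
    {"instruction": "Translate to French ...", "rate": 6.0},
]

def keep_quality(rows, threshold=6.5):
    """Drop examples the judge scored below the threshold."""
    return [r for r in rows if r["rate"] >= threshold]

print(len(keep_quality(rows)))  # 1
```

With the thresholds mentioned above (<6.5 and <5), the same one-liner gives both low-quality subsets by changing the `threshold` argument.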
Ba2han/databricks-dolly_rated
[ "language:en", "region:us" ]
2024-02-08T10:37:13+00:00
{"language": ["en"]}
2024-02-10T10:12:34+00:00
[]
[ "en" ]
TAGS #language-English #region-us
First, I merged the instruction and context columns because it's weird to have instructions saying "summarize this" without the passage itself. Then I used Senku-70B Q2 GGUF to rate each example out of 10 using a custom-made prompt based on clarity, completeness, correctness, relevance and formatting. Here are a few examples of below-5 pairs: !image/png Observations & Thoughts: - There are 1734 examples with a <6.5 score and 562 examples with a <5 score. Around 10% of the dataset looks low quality and/or confusing. - There may be improvement potential with a better meta prompt and quantized models. - It took ~15 hours to process the whole dataset using an RTX 3090. - Miqu's Fine-Tune Senku did a good job at providing only numeric answers in instruction mode. Ratings with alphabetic characters were removed; there were fewer than a hundred of these. Most of the removed content was a bit confusing. Leave a like if human disappear like dinosaur. Also keep in mind that I have no idea what I am doing actually.
[]
[ "TAGS\n#language-English #region-us \n" ]
0f3ed5a51f2361fe81d1de3df09d6c095a5138bb
# 🚀 AI Project Data Card ## Overview 📊 - Value of State and Country NPI Dataset 🌎 - Provides comprehensive insights into healthcare providers across various regions. - Enhances data accuracy and availability for medical research and policy making. - NUCC Specialty File 🔍 - Delivers detailed descriptions of physician specialties. - Includes unique codes for over 100 specialties, facilitating standardized data analysis. # This study uses AI to discover cross-references in provider data and proposes search technology to make it easier to build personalized information on physicians, so AI scientists and patients can begin to understand how we can assist in building relationships. There are a number of assets here: 1. app.py and requirements.txt for the pre-processor, which curates state and country files with high fidelity and distilled knowledge using the NPI registry and NUCC taxonomy at scale, compiling over 9 GB of data into a curated data strategy. 2. MN.csv sample state - This provides a snapshot of just Minnesota, which is used to test AI. 3. Zip file with all. To reuse the state and country datasets, a zip file is provided to get all the curated data in one file. My thought is that this later becomes the standard for contextual 'brain' files, which represent data context around a subject. 4. NUCC specialty file, which is the guide to medical and various provider specialties. The descriptions here are a good keyword prompt data source for understanding specialties. # Pain / Joy / Superpower 1. Pain - When looking for a medical procedure provider, it is not easy to know what to look for or who in your area may match. 2. Joy - A new user might be able to consider the search needs of a patient in keywords, then find personalized physician relationships available in an area. 3. 
Superpower - This will help patients and referral specialists find personalized, optimal physician sets for condition sets: - an obstetrician, surgeon, or physician that might be optimal for a given patient, given that they align to multiple patient requirements. - a test case for this is analyzing the treatments available for cancer, which might require multi-specialty PCPs. - like a dermatologist that can also perform surgery cheaply, since one physician might match the two specialties. # Sources: 1. NUCC Taxonomy: https://www.nucc.org/index.php/code-sets-mainmenu-41/provider-taxonomy-mainmenu-40 2. NPI Registry updated monthly: https://download.cms.gov/nppes/NPI_Files.html ## Physician Lists Library 📚 - Features over 100 state and country-based lists. - Offers contact details and provider specialty references, aiding in network expansion and collaboration. ## Licensing Boards and Certification Exams 📝 - Covers all types of physicians and healthcare professionals. - Medical (USMLE) 🏥 - Dental (NBDE) 🦷 - Nursing (NCLEX) 🩺 - Pharmacy (NAPLEX) 💊 - Plus more for comprehensive coverage. ## Applications and Use Cases 💡 - Building Libraries of Specialists for MoE Information 🏛️ - Supports the creation of specialized knowledge bases. - Facilitates targeted outreach and collaboration among healthcare professionals. - Enhancing Healthcare Directories and Networks 🌐 - Improves the accuracy and reach of healthcare directories. - Enables patients and providers to connect more efficiently. ## Conclusion and Next Steps 🛣️ - This dataset and its associated libraries offer vast potential for healthcare improvement. - Encourages further exploration and development of AI-driven applications in healthcare. _For more details on accessing and utilizing these resources, please refer to the project documentation and code repositories._
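The per-state curation described above (state files such as MN.csv rolled up by specialty) can be pictured with a small stdlib-only sketch. The column names below (`npi`, `state`, `taxonomy_desc`) are assumptions for illustration, not the actual schema of the curated files.

```python
import csv
import io

# Toy CSV standing in for a curated state file such as MN.csv;
# the header names here are hypothetical.
sample = io.StringIO(
    "npi,state,taxonomy_desc\n"
    "1000000001,MN,Dermatology\n"
    "1000000002,MN,Surgery\n"
    "1000000003,MN,Dermatology\n"
)

# Count providers per specialty, the kind of rollup a referral search needs
# before matching a patient's keywords to nearby physicians.
counts = {}
for row in csv.DictReader(sample):
    counts[row["taxonomy_desc"]] = counts.get(row["taxonomy_desc"], 0) + 1

print(counts)  # {'Dermatology': 2, 'Surgery': 1}
```

The same pass over a real state file would surface multi-specialty matches, e.g. an NPI appearing under both dermatology and surgery taxonomies.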
awacke1/AllUSPhysiciansNPIbyStateandCountry
[ "license:mit", "region:us" ]
2024-02-08T10:43:32+00:00
{"license": "mit"}
2024-02-08T11:15:52+00:00
[]
[]
TAGS #license-mit #region-us
# AI Project Data Card ## Overview - Value of State and Country NPI Dataset - Provides comprehensive insights into healthcare providers across various regions. - Enhances data accuracy and availability for medical research and policy making. - NUCC Specialty File - Delivers detailed descriptions of physician specialties. - Includes unique codes for over 100 specialties, facilitating standardized data analysis. # This study uses AI to discover cross-references in provider data and proposes search technology to make it easier to build personalized information on physicians, so AI scientists and patients can begin to understand how we can assist in building relationships. There are a number of assets here: 1. URL and URL for the pre-processor, which curates state and country files with high fidelity and distilled knowledge using the NPI registry and NUCC taxonomy at scale, compiling over 9 GB of data into a curated data strategy. 2. URL sample state - This provides a snapshot of just Minnesota, which is used to test AI. 3. Zip file with all. To reuse the state and country datasets, a zip file is provided to get all the curated data in one file. My thought is that this later becomes the standard for contextual 'brain' files, which represent data context around a subject. 4. NUCC specialty file, which is the guide to medical and various provider specialties. The descriptions here are a good keyword prompt data source for understanding specialties. # Pain / Joy / Superpower 1. Pain - When looking for a medical procedure provider, it is not easy to know what to look for or who in your area may match. 2. Joy - A new user might be able to consider the search needs of a patient in keywords, then find personalized physician relationships available in an area. 3. 
Superpower - This will help patients and referral specialists find personalized, optimal physician sets for condition sets: - an obstetrician, surgeon, or physician that might be optimal for a given patient, given that they align to multiple patient requirements. - a test case for this is analyzing the treatments available for cancer, which might require multi-specialty PCPs. - like a dermatologist that can also perform surgery cheaply, since one physician might match the two specialties. # Sources: 1. NUCC Taxonomy: URL 2. NPI Registry updated monthly: URL ## Physician Lists Library - Features over 100 state and country-based lists. - Offers contact details and provider specialty references, aiding in network expansion and collaboration. ## Licensing Boards and Certification Exams - Covers all types of physicians and healthcare professionals. - Medical (USMLE) - Dental (NBDE) - Nursing (NCLEX) - Pharmacy (NAPLEX) - Plus more for comprehensive coverage. ## Applications and Use Cases - Building Libraries of Specialists for MoE Information ️ - Supports the creation of specialized knowledge bases. - Facilitates targeted outreach and collaboration among healthcare professionals. - Enhancing Healthcare Directories and Networks - Improves the accuracy and reach of healthcare directories. - Enables patients and providers to connect more efficiently. ## Conclusion and Next Steps ️ - This dataset and its associated libraries offer vast potential for healthcare improvement. - Encourages further exploration and development of AI-driven applications in healthcare. _For more details on accessing and utilizing these resources, please refer to the project documentation and code repositories._
[ "# AI Project Data Card\n ## Overview \n - Value of State and Country NPI Dataset \n - Provides comprehensive insights into healthcare providers across various regions.\n - Enhances data accuracy and availability for medical research and policy making.\n - NUCC Specialty File \n - Delivers detailed descriptions of physician specialties.\n - Includes unique codes for over 100 specialties, facilitating standardized data analysis.\n\n # This study into AI discovers cross references from provider data and poses search technology to make it easier to build perrsonalized information on physicians for AI scientists and patients to begin to undderstand how we can assist in building relationships.\n\n There are a number of assets here:\n 1. URL and URL for pre processor which curates state and country files with high fidelity and distilledd knowledge using the NPI registry and NUCC taxonomy at scale compiling over 9GB data into a curated data strategy.\n 2. URL sample state - This provides a snapshot of just Minnesota which is used to Test AI.\n 3. Zip file with all. To reuse the state and country datasets a zip file is provided to get all the ccurated data using one file. My thoughts are this later becauses the standard for contextual 'brain' files which represent data context around a subject.\n 4. NUCC specialty file which is the guide to medical and various provider specialties. The description of these is a good keyword prompt data source for understanding specialties.\n\n # Pain / Joy / Superpower\n 1. Pain - When looking for a medical procedure provider it is not easy to know what to look for or who in your area may match.\n 2. Joy - A new use might be able to consider search needs of patient in keywords, then find personalized physician relationships available in an area.\n 3. 
Superpower - This will help patients and referral specialists find personalized optimal physician sets for condition sets '\n - an obstetrician, surgeon, or physician that might be optimal for a given patient given that they align to multiple patient requirements. \n - a test case for this is analyzing the treatments available for cancer which might require multi specialty PCPs. \n - like a dermatologist that also can perform surgery for cheap since one physician might match the two specialties.\n\n # Sources:\n 1. NUCC Taxonomy: URL\n 2. NPI Registry updated monthly: URL\n \n ## Physician Lists Library \n - Features over 100 state and country-based lists.\n - Offers contact details and provider specialty references, aiding in network expansion and collaboration.\n ## Licensing Boards and Certification Exams \n - Covers all types of physicians and healthcare professionals.\n - Medical (USMLE) \n - Dental (NBDE) \n - Nursing (NCLEX) \n - Pharmacy (NAPLEX) \n - Plus more for comprehensive coverage.\n ## Applications and Use Cases \n - Building Libraries of Specialists for MoE Information ️\n - Supports the creation of specialized knowledge bases.\n - Facilitates targeted outreach and collaboration among healthcare professionals.\n - Enhancing Healthcare Directories and Networks \n - Improves the accuracy and reach of healthcare directories.\n - Enables patients and providers to connect more efficiently.\n ## Conclusion and Next Steps ️\n - This dataset and its associated libraries offer vast potential for healthcare improvement.\n - Encourages further exploration and development of AI-driven applications in healthcare.\n\n _For more details on accessing and utilizing these resources, please refer to the project documentation and code repositories._" ]
[ "TAGS\n#license-mit #region-us \n", "# AI Project Data Card\n ## Overview \n - Value of State and Country NPI Dataset \n - Provides comprehensive insights into healthcare providers across various regions.\n - Enhances data accuracy and availability for medical research and policy making.\n - NUCC Specialty File \n - Delivers detailed descriptions of physician specialties.\n - Includes unique codes for over 100 specialties, facilitating standardized data analysis.\n\n # This study into AI discovers cross references from provider data and poses search technology to make it easier to build perrsonalized information on physicians for AI scientists and patients to begin to undderstand how we can assist in building relationships.\n\n There are a number of assets here:\n 1. URL and URL for pre processor which curates state and country files with high fidelity and distilledd knowledge using the NPI registry and NUCC taxonomy at scale compiling over 9GB data into a curated data strategy.\n 2. URL sample state - This provides a snapshot of just Minnesota which is used to Test AI.\n 3. Zip file with all. To reuse the state and country datasets a zip file is provided to get all the ccurated data using one file. My thoughts are this later becauses the standard for contextual 'brain' files which represent data context around a subject.\n 4. NUCC specialty file which is the guide to medical and various provider specialties. The description of these is a good keyword prompt data source for understanding specialties.\n\n # Pain / Joy / Superpower\n 1. Pain - When looking for a medical procedure provider it is not easy to know what to look for or who in your area may match.\n 2. Joy - A new use might be able to consider search needs of patient in keywords, then find personalized physician relationships available in an area.\n 3. 
Superpower - This will help patients and referral specialists find personalized optimal physician sets for condition sets '\n - an obstetrician, surgeon, or physician that might be optimal for a given patient given that they align to multiple patient requirements. \n - a test case for this is analyzing the treatments available for cancer which might require multi specialty PCPs. \n - like a dermatologist that also can perform surgery for cheap since one physician might match the two specialties.\n\n # Sources:\n 1. NUCC Taxonomy: URL\n 2. NPI Registry updated monthly: URL\n \n ## Physician Lists Library \n - Features over 100 state and country-based lists.\n - Offers contact details and provider specialty references, aiding in network expansion and collaboration.\n ## Licensing Boards and Certification Exams \n - Covers all types of physicians and healthcare professionals.\n - Medical (USMLE) \n - Dental (NBDE) \n - Nursing (NCLEX) \n - Pharmacy (NAPLEX) \n - Plus more for comprehensive coverage.\n ## Applications and Use Cases \n - Building Libraries of Specialists for MoE Information ️\n - Supports the creation of specialized knowledge bases.\n - Facilitates targeted outreach and collaboration among healthcare professionals.\n - Enhancing Healthcare Directories and Networks \n - Improves the accuracy and reach of healthcare directories.\n - Enables patients and providers to connect more efficiently.\n ## Conclusion and Next Steps ️\n - This dataset and its associated libraries offer vast potential for healthcare improvement.\n - Encourages further exploration and development of AI-driven applications in healthcare.\n\n _For more details on accessing and utilizing these resources, please refer to the project documentation and code repositories._" ]
2246aeb54b54d3a44c1686800dd668a5a4ccf2f2
# Dataset Card for "cryptonite" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
boda/cryptonite
[ "region:us" ]
2024-02-08T11:14:35+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "val", "path": "data/val-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "publisher", "dtype": "string"}, {"name": "date", "dtype": "timestamp[ns]"}, {"name": "author", "dtype": "string"}, {"name": "orientation", "dtype": "string"}, {"name": "clue", "dtype": "string"}, {"name": "answer", "dtype": "string"}, {"name": "enumeration", "dtype": "string"}, {"name": "quick", "dtype": "bool"}, {"name": "sub_publisher", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 51949570, "num_examples": 470804}, {"name": "val", "num_bytes": 2886129, "num_examples": 26156}, {"name": "test", "num_bytes": 2891443, "num_examples": 26157}], "download_size": 26277347, "dataset_size": 57727142}}
2024-02-08T11:14:50+00:00
[]
[]
TAGS #region-us
# Dataset Card for "cryptonite" More Information needed
[ "# Dataset Card for \"cryptonite\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"cryptonite\"\n\nMore Information needed" ]
4dc68c7dae45998f924104f4dbf600f8705215bb
# Dataset Card for "subreddits" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
alhosseini/subreddits
[ "region:us" ]
2024-02-08T11:28:44+00:00
{"dataset_info": {"features": [{"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}, {"name": "subreddit", "dtype": "string"}, {"name": "instruction", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1023110399, "num_examples": 2708763}], "download_size": 529126626, "dataset_size": 1023110399}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2024-02-08T11:29:14+00:00
[]
[]
TAGS #region-us
# Dataset Card for "subreddits" More Information needed
[ "# Dataset Card for \"subreddits\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"subreddits\"\n\nMore Information needed" ]
f89cb09c49de351ebf8c8f9c36090b8bf7392798
## CulturaY: A Large Cleaned Multilingual Dataset of 75 Languages ### Dataset Summary From the team that brought you [CulturaX](https://huggingface.co/datasets/uonlp/CulturaX), we present CulturaY, another substantial multilingual dataset that applies the same dataset cleaning methodology to the [HPLT v1.1](https://hplt-project.org/datasets/v1.1) dataset. Please note that HPLT v1.2 has also been released and is an alternative version with different cleaning methodologies. This data was used in part to train our SOTA Vietnamese model: [Vistral-7B-Chat](https://huggingface.co/Viet-Mistral/Vistral-7B-Chat). Our annotations and arrangements are licensed under CC-BY-4.0, and we make the data available for fair use machine learning research. But we make no claims as to the underlying copyrights of the work. This data was copied from the HPLT project, which in turn used the data from Common Crawl and the Internet Archive. ### Acknowledgement We thank our collaborators at UONLP - The Natural Language Processing Group at the University of Oregon, and the computing resources of the managers of the Karolina Supercomputers. We also thank our friends at [TurkuNLP](https://turkunlp.org) and [LAION.ai](https://laion.ai) for their support. 
### Data Breakdown: There are 75 langauges, with the following breakdown: | | Code | Language | # Documents | # Documents (%) | Size (GB) | |----:|:------|:-------------------|:-------------|:-------|:---------| | 0 | en | English | 523,235,685 | 43.84 | 1244.39 | | 1 | zh | Chinese | 172,023,436 | 14.41 | 290.91 | | 2 | ru | Russian | 59,185,035 | 4.96 | 424.55 | | 3 | es | Spanish | 49,193,764 | 4.12 | 116.20 | | 4 | de | German | 35,204,652 | 2.95 | 78.32 | | 5 | fr | French | 33,063,792 | 2.77 | 69.66 | | 6 | ja | Japanese | 27,641,765 | 2.32 | 74.71 | | 7 | ko | Korean | 26,925,013 | 2.26 | 25.50 | | 8 | it | Italian | 22,396,067 | 1.88 | 48.30 | | 9 | pt | Portuguese | 18,367,640 | 1.54 | 39.09 | | 10 | th | Thai | 16,330,227 | 1.37 | 32.09 | | 11 | da | Danish | 13,547,169 | 1.13 | 18.40 | | 12 | sv | Swedish | 13,049,359 | 1.09 | 19.29 | | 13 | tr | Turkish | 12,659,104 | 1.06 | 29.14 | | 14 | nl | Dutch | 12,454,669 | 1.04 | 22.58 | | 15 | pl | Polish | 12,054,997 | 1.01 | 27.09 | | 16 | hu | Hungarian | 11,939,984 | 1.00 | 17.63 | | 17 | ro | Romanian | 11,578,945 | 0.97 | 18.57 | | 18 | hbs | Serbo-Croatian | 8,880,450 | 0.74 | 14.65 | | 19 | id | Indonesian | 8,473,141 | 0.71 | 16.23 | | 20 | bg | Bulgarian | 6,698,866 | 0.56 | 18.63 | | 21 | el | Greek | 6,674,496 | 0.56 | 29.61 | | 22 | ar | Arabic | 6,427,386 | 0.54 | 28.04 | | 23 | nb | Norwegian Bokmål | 5,925,942 | 0.50 | 10.14 | | 24 | fi | Finnish | 5,379,100 | 0.45 | 10.08 | | 25 | he | Hebrew | 5,320,279 | 0.45 | 12.06 | | 26 | uk | Ukrainian | 5,311,749 | 0.45 | 31.55 | | 27 | cs | Czech | 5,248,678 | 0.44 | 12.83 | | 28 | fa | Persian | 5,111,868 | 0.43 | 26.23 | | 29 | ms | Malay | 4,888,894 | 0.41 | 9.09 | | 30 | sk | Slovak | 4,758,917 | 0.40 | 5.50 | | 31 | ca | Catalan | 4,552,579 | 0.38 | 7.96 | | 32 | vi | Vietnamese | 4,493,567 | 0.38 | 16.95 | | 33 | hi | Hindi | 4,200,330 | 0.35 | 11.56 | | 34 | bn | Bangla | 2,785,980 | 0.23 | 4.76 | | 35 | lt | Lithuanian | 2,509,788 | 0.21 | 
3.83 | | 36 | sl | Slovenian | 2,252,359 | 0.19 | 3.21 | | 37 | la | Latin | 2,147,688 | 0.18 | 1.42 | | 38 | et | Estonian | 1,754,719 | 0.15 | 2.88 | | 39 | az | Azerbaijani | 1,554,357 | 0.13 | 1.95 | | 40 | lv | Latvian | 1,469,245 | 0.12 | 2.19 | | 41 | ur | Urdu | 1,251,414 | 0.10 | 2.84 | | 42 | ta | Tamil | 1,128,321 | 0.09 | 7.21 | | 43 | gl | Galician | 1,101,337 | 0.09 | 1.31 | | 44 | sq | Albanian | 1,081,763 | 0.09 | 1.73 | | 45 | ne | Nepali | 860,657 | 0.07 | 1.91 | | 46 | mk | Macedonian | 641,111 | 0.05 | 1.61 | | 47 | af | Afrikaans | 636,976 | 0.05 | 0.77 | | 48 | tl | Filipino | 575,221 | 0.05 | 1.09 | | 49 | sw | Swahili | 571,247 | 0.05 | 0.60 | | 50 | eu | Basque | 559,194 | 0.05 | 0.67 | | 51 | is | Icelandic | 529,777 | 0.04 | 0.81 | | 52 | ka | Georgian | 524,645 | 0.04 | 1.48 | | 53 | hy | Armenian | 519,060 | 0.04 | 1.46 | | 54 | my | Burmese | 513,729 | 0.04 | 1.91 | | 55 | nn | Norwegian Nynorsk | 509,287 | 0.04 | 0.49 | | 56 | ml | Malayalam | 487,912 | 0.04 | 2.02 | | 57 | mn | Mongolian | 448,211 | 0.04 | 1.79 | | 58 | be | Belarusian | 426,194 | 0.04 | 1.48 | | 59 | uz | Uzbek | 423,865 | 0.04 | 1.19 | | 60 | mr | Marathi | 398,138 | 0.03 | 1.28 | | 61 | si | Sinhala | 337,785 | 0.03 | 1.55 | | 62 | te | Telugu | 279,240 | 0.02 | 1.00 | | 63 | kk | Kazakh | 274,770 | 0.02 | 1.07 | | 64 | mt | Maltese | 265,605 | 0.02 | 0.90 | | 65 | so | Somali | 261,100 | 0.02 | 0.24 | | 66 | gu | Gujarati | 242,074 | 0.02 | 0.74 | | 67 | kn | Kannada | 231,260 | 0.02 | 0.71 | | 68 | cy | Welsh | 179,157 | 0.02 | 0.20 | | 69 | ga | Irish | 134,796 | 0.01 | 0.15 | | 70 | tt | Tatar | 131,731 | 0.01 | 0.41 | | 71 | pa | Punjabi | 119,686 | 0.01 | 0.29 | | 72 | eo | Esperanto | 114,598 | 0.01 | 0.17 | | 73 | ps | Pashto | 99,783 | 0.01 | 0.23 | | 74 | ky | Kyrgyz | 86,551 | 0.01 | 0.31 | ### Dataset structure The dataset has a total of 6 columns, including: - 2 columns `text, url` will be the two main columns in this dataset. 
- the remaining columns `id, document_lang, scores, langs` belong to the original document in the HPLT V1.1 dataset, retained for debugging purposes, and will be removed in the future. Therefore, when using, please only utilize the two columns text and url. ### Citation To cite CulturaY, please use: ``` @misc{nguyen2024culturay, title={CulturaY: A Large Cleaned Multilingual Dataset of 75 Languages}, author={Thuat Nguyen and Huu Nguyen and Thien Huu Nguyen}, year={2024}, } ```
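As a small illustration of the column note above (the helper name and the sample record below are hypothetical, not part of the dataset; with the `datasets` library one would typically call `dataset.remove_columns([...])` instead), a record can be reduced to the two supported columns like so:

```python
# Hypothetical sketch: keep only the two supported CulturaY columns,
# "text" and "url", and drop the debug-only fields ("id", "document_lang",
# "scores", "langs") that may be removed in a future release.
SUPPORTED = ("text", "url")

def keep_supported_columns(record: dict) -> dict:
    """Return a copy of a CulturaY record with only the supported columns."""
    return {key: record[key] for key in SUPPORTED}

# Toy record with the six columns described in this card (made-up values).
sample = {
    "text": "Xin chao the gioi",
    "url": "https://example.com/page",
    "id": "doc-0001",
    "document_lang": "vi",
    "scores": [0.98],
    "langs": ["vi"],
}

cleaned = keep_supported_columns(sample)
print(cleaned)  # {'text': 'Xin chao the gioi', 'url': 'https://example.com/page'}
```

The same filtering can be applied per-example with `Dataset.map`, or up front with `remove_columns`, so downstream code never touches the debug fields.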
ontocord/CulturaY
[ "task_categories:text-generation", "task_categories:fill-mask", "task_ids:language-modeling", "task_ids:masked-language-modeling", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:multilingual", "size_categories:n<1K", "size_categories:1K<n<10K", "size_categories:10K<n<100K", "size_categories:100K<n<1M", "size_categories:1M<n<10M", "size_categories:10M<n<100M", "size_categories:100M<n<1B", "size_categories:1B<n<10B", "source_datasets:original", "language:af", "language:ar", "language:az", "language:be", "language:bg", "language:bn", "language:ca", "language:cs", "language:cy", "language:da", "language:de", "language:el", "language:en", "language:eo", "language:es", "language:et", "language:eu", "language:fa", "language:fi", "language:fr", "language:ga", "language:gl", "language:gu", "language:hbs", "language:he", "language:hi", "language:hu", "language:hy", "language:id", "language:is", "language:it", "language:ja", "language:ka", "language:kk", "language:kn", "language:ko", "language:ky", "language:la", "language:lt", "language:lv", "language:mk", "language:ml", "language:mn", "language:mr", "language:ms", "language:mt", "language:my", "language:nb", "language:ne", "language:nl", "language:nn", "language:pa", "language:pl", "language:ps", "language:pt", "language:ro", "language:ru", "language:si", "language:sk", "language:sl", "language:so", "language:sq", "language:sv", "language:sw", "language:ta", "language:te", "language:th", "language:tl", "language:tr", "language:tt", "language:uk", "language:ur", "language:uz", "language:vi", "language:zh", "license:cc-by-4.0", "region:us" ]
2024-02-08T12:10:31+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["af", "ar", "az", "be", "bg", "bn", "ca", "cs", "cy", "da", "de", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fr", "ga", "gl", "gu", "hbs", "he", "hi", "hu", "hy", "id", "is", "it", "ja", "ka", "kk", "kn", "ko", "ky", "la", "lt", "lv", "mk", "ml", "mn", "mr", "ms", "mt", "my", "nb", "ne", "nl", "nn", "pa", "pl", "ps", "pt", "ro", "ru", "si", "sk", "sl", "so", "sq", "sv", "sw", "ta", "te", "th", "tl", "tr", "tt", "uk", "ur", "uz", "vi", "zh"], "license": "cc-by-4.0", "multilinguality": ["multilingual"], "size_categories": ["n<1K", "1K<n<10K", "10K<n<100K", "100K<n<1M", "1M<n<10M", "10M<n<100M", "100M<n<1B", "1B<n<10B"], "source_datasets": ["original"], "task_categories": ["text-generation", "fill-mask"], "task_ids": ["language-modeling", "masked-language-modeling"], "pretty_name": "CulturaY", "configs": [{"config_name": "af", "data_files": "af/*.jsonl.zst"}, {"config_name": "ar", "data_files": "ar/*.jsonl.zst"}, {"config_name": "az", "data_files": "az/*.jsonl.zst"}, {"config_name": "be", "data_files": "be/*.jsonl.zst"}, {"config_name": "bg", "data_files": "bg/*.jsonl.zst"}, {"config_name": "bn", "data_files": "bn/*.jsonl.zst"}, {"config_name": "ca", "data_files": "ca/*.jsonl.zst"}, {"config_name": "cs", "data_files": "cs/*.jsonl.zst"}, {"config_name": "cy", "data_files": "cy/*.jsonl.zst"}, {"config_name": "da", "data_files": "da/*.jsonl.zst"}, {"config_name": "de", "data_files": "de/*.jsonl.zst"}, {"config_name": "el", "data_files": "el/*.jsonl.zst"}, {"config_name": "en", "data_files": "en/*.jsonl.zst"}, {"config_name": "eo", "data_files": "eo/*.jsonl.zst"}, {"config_name": "es", "data_files": "es/*.jsonl.zst"}, {"config_name": "et", "data_files": "et/*.jsonl.zst"}, {"config_name": "eu", "data_files": "eu/*.jsonl.zst"}, {"config_name": "fa", "data_files": "fa/*.jsonl.zst"}, {"config_name": "fi", "data_files": "fi/*.jsonl.zst"}, {"config_name": "fr", "data_files": 
"fr/*.jsonl.zst"}, {"config_name": "ga", "data_files": "ga/*.jsonl.zst"}, {"config_name": "gl", "data_files": "gl/*.jsonl.zst"}, {"config_name": "gu", "data_files": "gu/*.jsonl.zst"}, {"config_name": "hbs", "data_files": "hbs/*.jsonl.zst"}, {"config_name": "he", "data_files": "he/*.jsonl.zst"}, {"config_name": "hi", "data_files": "hi/*.jsonl.zst"}, {"config_name": "hu", "data_files": "hu/*.jsonl.zst"}, {"config_name": "hy", "data_files": "hy/*.jsonl.zst"}, {"config_name": "id", "data_files": "id/*.jsonl.zst"}, {"config_name": "is", "data_files": "is/*.jsonl.zst"}, {"config_name": "it", "data_files": "it/*.jsonl.zst"}, {"config_name": "ja", "data_files": "ja/*.jsonl.zst"}, {"config_name": "ka", "data_files": "ka/*.jsonl.zst"}, {"config_name": "kk", "data_files": "kk/*.jsonl.zst"}, {"config_name": "kn", "data_files": "kn/*.jsonl.zst"}, {"config_name": "ko", "data_files": "ko/*.jsonl.zst"}, {"config_name": "ky", "data_files": "ky/*.jsonl.zst"}, {"config_name": "la", "data_files": "la/*.jsonl.zst"}, {"config_name": "lt", "data_files": "lt/*.jsonl.zst"}, {"config_name": "lv", "data_files": "lv/*.jsonl.zst"}, {"config_name": "mk", "data_files": "mk/*.jsonl.zst"}, {"config_name": "ml", "data_files": "ml/*.jsonl.zst"}, {"config_name": "mn", "data_files": "mn/*.jsonl.zst"}, {"config_name": "mr", "data_files": "mr/*.jsonl.zst"}, {"config_name": "ms", "data_files": "ms/*.jsonl.zst"}, {"config_name": "mt", "data_files": "mt/*.jsonl.zst"}, {"config_name": "my", "data_files": "my/*.jsonl.zst"}, {"config_name": "nb", "data_files": "nb/*.jsonl.zst"}, {"config_name": "ne", "data_files": "ne/*.jsonl.zst"}, {"config_name": "nl", "data_files": "nl/*.jsonl.zst"}, {"config_name": "nn", "data_files": "nn/*.jsonl.zst"}, {"config_name": "pa", "data_files": "pa/*.jsonl.zst"}, {"config_name": "pl", "data_files": "pl/*.jsonl.zst"}, {"config_name": "ps", "data_files": "ps/*.jsonl.zst"}, {"config_name": "pt", "data_files": "pt/*.jsonl.zst"}, {"config_name": "ro", "data_files": 
"ro/*.jsonl.zst"}, {"config_name": "ru", "data_files": "ru/*.jsonl.zst"}, {"config_name": "si", "data_files": "si/*.jsonl.zst"}, {"config_name": "sk", "data_files": "sk/*.jsonl.zst"}, {"config_name": "sl", "data_files": "sl/*.jsonl.zst"}, {"config_name": "so", "data_files": "so/*.jsonl.zst"}, {"config_name": "sq", "data_files": "sq/*.jsonl.zst"}, {"config_name": "sv", "data_files": "sv/*.jsonl.zst"}, {"config_name": "sw", "data_files": "sw/*.jsonl.zst"}, {"config_name": "ta", "data_files": "ta/*.jsonl.zst"}, {"config_name": "te", "data_files": "te/*.jsonl.zst"}, {"config_name": "th", "data_files": "th/*.jsonl.zst"}, {"config_name": "tl", "data_files": "tl/*.jsonl.zst"}, {"config_name": "tr", "data_files": "tr/*.jsonl.zst"}, {"config_name": "tt", "data_files": "tt/*.jsonl.zst"}, {"config_name": "uk", "data_files": "uk/*.jsonl.zst"}, {"config_name": "ur", "data_files": "ur/*.jsonl.zst"}, {"config_name": "uz", "data_files": "uz/*.jsonl.zst"}, {"config_name": "vi", "data_files": "vi/*.jsonl.zst"}, {"config_name": "zh", "data_files": "zh/*.jsonl.zst"}], "extra_gated_prompt": "By completing the form below, you acknowledge that the provided data is offered as is. Although we anticipate no problems, you accept full responsibility for any repercussions resulting from the use of this data. Furthermore, you agree that the data must not be utilized for malicious or harmful purposes towards humanity.", "extra_gated_fields": {"Name": "text", "Email": "text", "Affiliation": "text", "Country": "text", "Usecase": "text", "I have explicitly check with my jurisdiction and I confirm that downloading CulturaY is legal in the country/region where I am located right now, and for the use case that I have described above": "checkbox", "You agree to not attempt to determine the identity of individuals in this dataset": "checkbox"}}
2024-02-17T13:58:24+00:00
[]
[ "af", "ar", "az", "be", "bg", "bn", "ca", "cs", "cy", "da", "de", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fr", "ga", "gl", "gu", "hbs", "he", "hi", "hu", "hy", "id", "is", "it", "ja", "ka", "kk", "kn", "ko", "ky", "la", "lt", "lv", "mk", "ml", "mn", "mr", "ms", "mt", "my", "nb", "ne", "nl", "nn", "pa", "pl", "ps", "pt", "ro", "ru", "si", "sk", "sl", "so", "sq", "sv", "sw", "ta", "te", "th", "tl", "tr", "tt", "uk", "ur", "uz", "vi", "zh" ]
TAGS #task_categories-text-generation #task_categories-fill-mask #task_ids-language-modeling #task_ids-masked-language-modeling #annotations_creators-no-annotation #language_creators-found #multilinguality-multilingual #size_categories-n<1K #size_categories-1K<n<10K #size_categories-10K<n<100K #size_categories-100K<n<1M #size_categories-1M<n<10M #size_categories-10M<n<100M #size_categories-100M<n<1B #size_categories-1B<n<10B #source_datasets-original #language-Afrikaans #language-Arabic #language-Azerbaijani #language-Belarusian #language-Bulgarian #language-Bengali #language-Catalan #language-Czech #language-Welsh #language-Danish #language-German #language-Modern Greek (1453-) #language-English #language-Esperanto #language-Spanish #language-Estonian #language-Basque #language-Persian #language-Finnish #language-French #language-Irish #language-Galician #language-Gujarati #language-Serbo-Croatian #language-Hebrew #language-Hindi #language-Hungarian #language-Armenian #language-Indonesian #language-Icelandic #language-Italian #language-Japanese #language-Georgian #language-Kazakh #language-Kannada #language-Korean #language-Kirghiz #language-Latin #language-Lithuanian #language-Latvian #language-Macedonian #language-Malayalam #language-Mongolian #language-Marathi #language-Malay (macrolanguage) #language-Maltese #language-Burmese #language-Norwegian Bokmål #language-Nepali (macrolanguage) #language-Dutch #language-Norwegian Nynorsk #language-Panjabi #language-Polish #language-Pushto #language-Portuguese #language-Romanian #language-Russian #language-Sinhala #language-Slovak #language-Slovenian #language-Somali #language-Albanian #language-Swedish #language-Swahili (macrolanguage) #language-Tamil #language-Telugu #language-Thai #language-Tagalog #language-Turkish #language-Tatar #language-Ukrainian #language-Urdu #language-Uzbek #language-Vietnamese #language-Chinese #license-cc-by-4.0 #region-us
CulturaY: A Large Cleaned Multilingual Dataset of 75 Languages -------------------------------------------------------------- ### Dataset Summary From the team that brought you CulturaX, we present CulturaY, another substantial multilingual dataset that applies the same dataset cleaning methodology to the HPLT v1.1 dataset. Please note that HPLT v1.2 has also been released and is an alternative version with different cleaning methodologies. This data was used in part to train our SOTA Vietnamese model: Vistral-7B-Chat. Our annotations and arrangements are licensed under CC-BY-4.0, and we make the data available for fair use machine learning research. But we make no claims as to the underlying copyrights of the work. This data was copied from the HPLT project, which in turn used the data from Common Crawl and the Internet Archive. ### Acknowledgement We thank our collaborators at UONLP - The Natural Language Processing Group at the University of Oregon, and the computing resources of the managers of the Karolina Supercomputers. We also thank our friends at TurkuNLP and URL for their support. ### Data Breakdown: There are 75 languages, with the following breakdown: ### Dataset structure The dataset has a total of 6 columns, including: * 2 columns 'text, url' will be the two main columns in this dataset. * the remaining columns 'id, document\_lang, scores, langs' belong to the original document in the HPLT V1.1 dataset, retained for debugging purposes, and will be removed in the future. Therefore, when using, please only utilize the two columns text and url. To cite CulturaY, please use:
[ "### Dataset Summary\n\n\nFrom the team that brought you CulutraX, we present CulturaY, another substantial multilingual dataset that applies the same dataset cleaning methodology to the HPLT v1.1 dataset.\nPlease note that HPLT v1.2 has also been released and is an alternative verison with different cleaning methodolgies.\nThis data was used in part to train our SOTA Vietnamese model: Vistral-7B-Chat.\n\n\nOur annotations and arrangements are licensed under CC-BY-4.0, and we make the data available for fair use machine learning research. \n\nBut we make no claims as to the underlying copyrights of the work. This data was copied from the HPLT project, which in turn used the data from Common Crawl and the Internet Archive.", "### Acknowledgement\n\n\nWe thank our collaborators at UONLP - The Natural Language Processing Group at the University of Oregon, and the computing resources of the managers of the Karolina Supercomputers.\nWe also thank our friends at TurkuNLP and URL for their support.", "### Data Breakdown:\n\n\nThere are 75 langauges, with the following breakdown:", "### Dataset structure\n\n\nThe dataset has a total of 6 columns, including:\n\n\n* 2 columns 'text, url' will be the two main columns in this dataset.\n* the remaining columns 'id, document\\_lang, scores, langs' belong to the original document in the HPLT V1.1 dataset, retained for debugging purposes. and will be removed in the future.\n\n\nTherefore, when using, please only utilize the two columns text and url.\n\n\nTo cite CulturaY, please use:" ]
[ "TAGS\n#task_categories-text-generation #task_categories-fill-mask #task_ids-language-modeling #task_ids-masked-language-modeling #annotations_creators-no-annotation #language_creators-found #multilinguality-multilingual #size_categories-n<1K #size_categories-1K<n<10K #size_categories-10K<n<100K #size_categories-100K<n<1M #size_categories-1M<n<10M #size_categories-10M<n<100M #size_categories-100M<n<1B #size_categories-1B<n<10B #source_datasets-original #language-Afrikaans #language-Arabic #language-Azerbaijani #language-Belarusian #language-Bulgarian #language-Bengali #language-Catalan #language-Czech #language-Welsh #language-Danish #language-German #language-Modern Greek (1453-) #language-English #language-Esperanto #language-Spanish #language-Estonian #language-Basque #language-Persian #language-Finnish #language-French #language-Irish #language-Galician #language-Gujarati #language-Serbo-Croatian #language-Hebrew #language-Hindi #language-Hungarian #language-Armenian #language-Indonesian #language-Icelandic #language-Italian #language-Japanese #language-Georgian #language-Kazakh #language-Kannada #language-Korean #language-Kirghiz #language-Latin #language-Lithuanian #language-Latvian #language-Macedonian #language-Malayalam #language-Mongolian #language-Marathi #language-Malay (macrolanguage) #language-Maltese #language-Burmese #language-Norwegian Bokmål #language-Nepali (macrolanguage) #language-Dutch #language-Norwegian Nynorsk #language-Panjabi #language-Polish #language-Pushto #language-Portuguese #language-Romanian #language-Russian #language-Sinhala #language-Slovak #language-Slovenian #language-Somali #language-Albanian #language-Swedish #language-Swahili (macrolanguage) #language-Tamil #language-Telugu #language-Thai #language-Tagalog #language-Turkish #language-Tatar #language-Ukrainian #language-Urdu #language-Uzbek #language-Vietnamese #language-Chinese #license-cc-by-4.0 #region-us \n", "### Dataset Summary\n\n\nFrom the team that brought you 
CulturaX, we present CulturaY, another substantial multilingual dataset that applies the same dataset cleaning methodology to the HPLT v1.1 dataset.\nPlease note that HPLT v1.2 has also been released and is an alternative version with different cleaning methodologies.\nThis data was used in part to train our SOTA Vietnamese model: Vistral-7B-Chat.\n\n\nOur annotations and arrangements are licensed under CC-BY-4.0, and we make the data available for fair use machine learning research. \n\nBut we make no claims as to the underlying copyrights of the work. This data was copied from the HPLT project, which in turn used the data from Common Crawl and the Internet Archive.", "### Acknowledgement\n\n\nWe thank our collaborators at UONLP - The Natural Language Processing Group at the University of Oregon, and the computing resources of the managers of the Karolina Supercomputers.\nWe also thank our friends at TurkuNLP and URL for their support.", "### Data Breakdown:\n\n\nThere are 75 languages, with the following breakdown:", "### Dataset structure\n\n\nThe dataset has a total of 6 columns, including:\n\n\n* 2 columns 'text, url' will be the two main columns in this dataset.\n* the remaining columns 'id, document\\_lang, scores, langs' belong to the original document in the HPLT V1.1 dataset, retained for debugging purposes, and will be removed in the future.\n\n\nTherefore, when using, please only utilize the two columns text and url.\n\n\nTo cite CulturaY, please use:" ]
eb9fbc529f54eff61002cb17ab6eb803692cec82
A dataset prepared for siamese finetuning, to distinguish between text from Legal Contracts (mainly SOW, MSA and other Legal Agreements, Offer Letters, etc.) and text scraped from books, news articles, reviews, etc. --- license: apache-2.0 ---
polestarllp/Siamese_Finetune_MSA_SOW_Contracts
[ "region:us" ]
2024-02-08T12:15:50+00:00
{}
2024-02-08T12:19:21+00:00
[]
[]
TAGS #region-us
A dataset prepared for siamese finetuning, to distinguish between text from Legal Contracts (mainly SOW, MSA and other Legal Agreements, Offer Letters, etc.) and text scraped from books, news articles, reviews, etc. --- license: apache-2.0 ---
[]
[ "TAGS\n#region-us \n" ]
32470f308ea1619f758b9dc0b5596f9cb8cf95e9
# Dataset Card for "ExeBench-Eval-small-gpt3.5-zeroshot-result" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
zhangshuoming/ExeBench-Eval-small-gpt3.5-zeroshot-result
[ "region:us" ]
2024-02-08T12:17:48+00:00
{"dataset_info": {"features": [{"name": "c", "dtype": "string"}, {"name": "asm", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 734546, "num_examples": 925}], "download_size": 298806, "dataset_size": 734546}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2024-02-08T12:17:53+00:00
[]
[]
TAGS #region-us
# Dataset Card for "ExeBench-Eval-small-gpt3.5-zeroshot-result" More Information needed
[ "# Dataset Card for \"ExeBench-Eval-small-gpt3.5-zeroshot-result\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"ExeBench-Eval-small-gpt3.5-zeroshot-result\"\n\nMore Information needed" ]
b2f301648eda4c5dbe659235589c86ffec702581
1M OpenAI Embeddings: text-embedding-3-large 3072 dimensions + ada-002 1536 dimensions — parallel dataset - Created: February 2024. - Text used for Embedding: title (string) + text (string) - Embedding Model: text-embedding-3-large - This dataset was generated from the first 1M entries of https://huggingface.co/datasets/BeIR/dbpedia-entity, extracted by @KShivendu_ [here](https://huggingface.co/datasets/KShivendu/dbpedia-entities-openai-1M)
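To sketch how the two embedding columns are typically used for search (toy 2-dimensional vectors below, not real 3072- or 1536-dimensional embeddings; vectors should only be compared within the same model's column), cosine similarity can be computed in plain Python:

```python
import math

# Illustrative sketch: cosine similarity is the usual way to rank vectors
# from an embedding column such as "text-embedding-3-large-3072-embedding"
# or "text-embedding-ada-002-1536-embedding" against a query vector.
def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))  # 1.0 (identical direction)
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))  # 0.0 (orthogonal)
```

At the 1M scale of this dataset, a vector database (e.g. Qdrant) would perform this comparison with an approximate nearest-neighbor index rather than a linear scan.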
Qdrant/dbpedia-entities-openai3-text-embedding-3-large-3072-1M
[ "task_categories:feature-extraction", "size_categories:1M<n<10M", "language:en", "license:apache-2.0", "region:us" ]
2024-02-08T12:37:57+00:00
{"language": ["en"], "license": "apache-2.0", "size_categories": ["1M<n<10M"], "task_categories": ["feature-extraction"], "pretty_name": "OpenAI v3 Large 1M", "dataset_info": {"features": [{"name": "_id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "text-embedding-ada-002-1536-embedding", "sequence": "float32"}, {"name": "text-embedding-3-large-3072-embedding", "sequence": "float64"}], "splits": [{"name": "train", "num_bytes": 31115725776, "num_examples": 1000000}], "download_size": 24796927580, "dataset_size": 31115725776}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2024-02-09T11:00:59+00:00
[]
[ "en" ]
TAGS #task_categories-feature-extraction #size_categories-1M<n<10M #language-English #license-apache-2.0 #region-us
1M OpenAI Embeddings: text-embedding-3-large 3072 dimensions + ada-002 1536 dimensions — parallel dataset - Created: February 2024. - Text used for Embedding: title (string) + text (string) - Embedding Model: text-embedding-3-large - This dataset was generated from the first 1M entries of URL extracted by @KShivendu_ here
[]
[ "TAGS\n#task_categories-feature-extraction #size_categories-1M<n<10M #language-English #license-apache-2.0 #region-us \n" ]
54ec46522120f06ecf4b82c328c211b48e480a13
# Dataset Card for Dataset Name <!-- Provide a quick summary of the dataset. --> This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1). ## Dataset Details ### Dataset Description <!-- Provide a longer summary of what this dataset is. --> - **Curated by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] ### Dataset Sources [optional] <!-- Provide the basic links for the dataset. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the dataset is intended to be used. --> ### Direct Use <!-- This section describes suitable use cases for the dataset. --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. --> [More Information Needed] ## Dataset Structure <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> [More Information Needed] ## Dataset Creation ### Curation Rationale <!-- Motivation for the creation of this dataset. --> [More Information Needed] ### Source Data <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). --> #### Data Collection and Processing <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. 
--> [More Information Needed] #### Who are the source data producers? <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. --> [More Information Needed] ### Annotations [optional] <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. --> #### Annotation process <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. --> [More Information Needed] #### Who are the annotators? <!-- This section describes the people or systems who created the annotations. --> [More Information Needed] #### Personal and Sensitive Information <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. ## Citation [optional] <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. 
--> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Dataset Card Authors [optional] [More Information Needed] ## Dataset Card Contact [More Information Needed]
metythorn/khmerllm-dataset-alpaca-52k-v1
[ "task_categories:question-answering", "task_categories:text-generation", "size_categories:10K<n<100K", "language:km", "license:mit", "region:us" ]
2024-02-08T13:03:09+00:00
{"language": ["km"], "license": "mit", "size_categories": ["10K<n<100K"], "task_categories": ["question-answering", "text-generation"], "pretty_name": "khmerllm-dataset-alpaca-52k-v1"}
2024-02-08T14:12:47+00:00
[]
[ "km" ]
TAGS #task_categories-question-answering #task_categories-text-generation #size_categories-10K<n<100K #language-Khmer #license-mit #region-us
# Dataset Card for Dataset Name This dataset card aims to be a base template for new datasets. It has been generated using this raw template. ## Dataset Details ### Dataset Description - Curated by: - Funded by [optional]: - Shared by [optional]: - Language(s) (NLP): - License: ### Dataset Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Out-of-Scope Use ## Dataset Structure ## Dataset Creation ### Curation Rationale ### Source Data #### Data Collection and Processing #### Who are the source data producers? ### Annotations [optional] #### Annotation process #### Who are the annotators? #### Personal and Sensitive Information ## Bias, Risks, and Limitations ### Recommendations Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Dataset Card Authors [optional] ## Dataset Card Contact
[ "# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ "TAGS\n#task_categories-question-answering #task_categories-text-generation #size_categories-10K<n<100K #language-Khmer #license-mit #region-us \n", "# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
3ec68adcb8a561d53d18067a8c3f0a0c8e157f96
# Dataset Card for Dataset Name <!-- Provide a quick summary of the dataset. --> Tokenizer: mbert Dataset: TASTEset Unshuffled ratio: 1 Shuffled ratio: 0 Drop duplicates: True Dataset path = /home/pgajo/working/food/data/EW-TASTE_en-it_DEEPL_localized_uom.json ## Dataset Details ### Dataset Description <!-- Provide a longer summary of what this dataset is. --> - **Curated by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] ### Dataset Sources [optional] <!-- Provide the basic links for the dataset. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the dataset is intended to be used. --> ### Direct Use <!-- This section describes suitable use cases for the dataset. --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. --> [More Information Needed] ## Dataset Structure <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> [More Information Needed] ## Dataset Creation ### Curation Rationale <!-- Motivation for the creation of this dataset. --> [More Information Needed] ### Source Data <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). --> #### Data Collection and Processing <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. --> [More Information Needed] #### Who are the source data producers? 
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. --> [More Information Needed] ### Annotations [optional] <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. --> #### Annotation process <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. --> [More Information Needed] #### Who are the annotators? <!-- This section describes the people or systems who created the annotations. --> [More Information Needed] #### Personal and Sensitive Information <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. ## Citation [optional] <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. 
--> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Dataset Card Authors [optional] [More Information Needed] ## Dataset Card Contact [More Information Needed]
pgajo/EW-TT-PE_U1_S0_DROP1_mbert
[ "region:us" ]
2024-02-08T13:23:38+00:00
{}
2024-02-08T13:23:54+00:00
[]
[]
TAGS #region-us
# Dataset Card for Dataset Name Tokenizer: mbert Dataset: TASTEset Unshuffled ratio: 1 Shuffled ratio: 0 Drop duplicates: True Dataset path = /home/pgajo/working/food/data/EW-TASTE_en-it_DEEPL_localized_uom.json ## Dataset Details ### Dataset Description - Curated by: - Funded by [optional]: - Shared by [optional]: - Language(s) (NLP): - License: ### Dataset Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Out-of-Scope Use ## Dataset Structure ## Dataset Creation ### Curation Rationale ### Source Data #### Data Collection and Processing #### Who are the source data producers? ### Annotations [optional] #### Annotation process #### Who are the annotators? #### Personal and Sensitive Information ## Bias, Risks, and Limitations ### Recommendations Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Dataset Card Authors [optional] ## Dataset Card Contact
[ "# Dataset Card for Dataset Name\n\n\n\n\n Tokenizer: mbert\n\n Dataset: TASTEset\n\n Unshuffled ratio: 1\n\n Shuffled ratio: 0\n\n Drop duplicates: True\n\n Dataset path = /home/pgajo/working/food/data/EW-TASTE_en-it_DEEPL_localized_uom.json", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Dataset Name\n\n\n\n\n Tokenizer: mbert\n\n Dataset: TASTEset\n\n Unshuffled ratio: 1\n\n Shuffled ratio: 0\n\n Drop duplicates: True\n\n Dataset path = /home/pgajo/working/food/data/EW-TASTE_en-it_DEEPL_localized_uom.json", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
a941bfdd3d3c1b9163baa67e78617f4b15f4a36c
# Dataset Card for Dataset Name <!-- Provide a quick summary of the dataset. --> Tokenizer: mbert Dataset: TASTEset Unshuffled ratio: 0 Shuffled ratio: 1 Drop duplicates: True Dataset path = /home/pgajo/working/food/data/EW-TASTE_en-it_DEEPL_localized_uom.json ## Dataset Details ### Dataset Description <!-- Provide a longer summary of what this dataset is. --> - **Curated by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] ### Dataset Sources [optional] <!-- Provide the basic links for the dataset. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the dataset is intended to be used. --> ### Direct Use <!-- This section describes suitable use cases for the dataset. --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. --> [More Information Needed] ## Dataset Structure <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> [More Information Needed] ## Dataset Creation ### Curation Rationale <!-- Motivation for the creation of this dataset. --> [More Information Needed] ### Source Data <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). --> #### Data Collection and Processing <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. --> [More Information Needed] #### Who are the source data producers? 
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. --> [More Information Needed] ### Annotations [optional] <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. --> #### Annotation process <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. --> [More Information Needed] #### Who are the annotators? <!-- This section describes the people or systems who created the annotations. --> [More Information Needed] #### Personal and Sensitive Information <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. ## Citation [optional] <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. 
--> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Dataset Card Authors [optional] [More Information Needed] ## Dataset Card Contact [More Information Needed]
pgajo/EW-TT-PE_U0_S1_DROP1_mbert
[ "region:us" ]
2024-02-08T13:26:02+00:00
{}
2024-02-08T13:26:10+00:00
[]
[]
TAGS #region-us
# Dataset Card for Dataset Name Tokenizer: mbert Dataset: TASTEset Unshuffled ratio: 0 Shuffled ratio: 1 Drop duplicates: True Dataset path = /home/pgajo/working/food/data/EW-TASTE_en-it_DEEPL_localized_uom.json ## Dataset Details ### Dataset Description - Curated by: - Funded by [optional]: - Shared by [optional]: - Language(s) (NLP): - License: ### Dataset Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Out-of-Scope Use ## Dataset Structure ## Dataset Creation ### Curation Rationale ### Source Data #### Data Collection and Processing #### Who are the source data producers? ### Annotations [optional] #### Annotation process #### Who are the annotators? #### Personal and Sensitive Information ## Bias, Risks, and Limitations ### Recommendations Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Dataset Card Authors [optional] ## Dataset Card Contact
[ "# Dataset Card for Dataset Name\n\n\n\n\n Tokenizer: mbert\n\n Dataset: TASTEset\n\n Unshuffled ratio: 0\n\n Shuffled ratio: 1\n\n Drop duplicates: True\n\n Dataset path = /home/pgajo/working/food/data/EW-TASTE_en-it_DEEPL_localized_uom.json", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Dataset Name\n\n\n\n\n Tokenizer: mbert\n\n Dataset: TASTEset\n\n Unshuffled ratio: 0\n\n Shuffled ratio: 1\n\n Drop duplicates: True\n\n Dataset path = /home/pgajo/working/food/data/EW-TASTE_en-it_DEEPL_localized_uom.json", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
b27d67540dd37645932904fd80e2ac6664d2beb0
# Dataset Card for Dataset Name <!-- Provide a quick summary of the dataset. --> Tokenizer: mdeberta Dataset: TASTEset Unshuffled ratio: 1 Shuffled ratio: 0 Drop duplicates: True Dataset path = /home/pgajo/working/food/data/EW-TASTE_en-it_DEEPL_localized_uom.json ## Dataset Details ### Dataset Description <!-- Provide a longer summary of what this dataset is. --> - **Curated by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] ### Dataset Sources [optional] <!-- Provide the basic links for the dataset. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the dataset is intended to be used. --> ### Direct Use <!-- This section describes suitable use cases for the dataset. --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. --> [More Information Needed] ## Dataset Structure <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> [More Information Needed] ## Dataset Creation ### Curation Rationale <!-- Motivation for the creation of this dataset. --> [More Information Needed] ### Source Data <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). --> #### Data Collection and Processing <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. --> [More Information Needed] #### Who are the source data producers? 
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. --> [More Information Needed] ### Annotations [optional] <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. --> #### Annotation process <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. --> [More Information Needed] #### Who are the annotators? <!-- This section describes the people or systems who created the annotations. --> [More Information Needed] #### Personal and Sensitive Information <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. ## Citation [optional] <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. 
--> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Dataset Card Authors [optional] [More Information Needed] ## Dataset Card Contact [More Information Needed]
pgajo/EW-TT-PE_U1_S0_DROP1_mdeberta
[ "region:us" ]
2024-02-08T13:52:55+00:00
{}
2024-02-08T13:53:04+00:00
[]
[]
TAGS #region-us
# Dataset Card for Dataset Name Tokenizer: mdeberta Dataset: TASTEset Unshuffled ratio: 1 Shuffled ratio: 0 Drop duplicates: True Dataset path = /home/pgajo/working/food/data/EW-TASTE_en-it_DEEPL_localized_uom.json ## Dataset Details ### Dataset Description - Curated by: - Funded by [optional]: - Shared by [optional]: - Language(s) (NLP): - License: ### Dataset Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Out-of-Scope Use ## Dataset Structure ## Dataset Creation ### Curation Rationale ### Source Data #### Data Collection and Processing #### Who are the source data producers? ### Annotations [optional] #### Annotation process #### Who are the annotators? #### Personal and Sensitive Information ## Bias, Risks, and Limitations ### Recommendations Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Dataset Card Authors [optional] ## Dataset Card Contact
[ "# Dataset Card for Dataset Name\n\n\n\n\n Tokenizer: mdeberta\n\n Dataset: TASTEset\n\n Unshuffled ratio: 1\n\n Shuffled ratio: 0\n\n Drop duplicates: True\n\n Dataset path = /home/pgajo/working/food/data/EW-TASTE_en-it_DEEPL_localized_uom.json", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Dataset Name\n\n\n\n\n Tokenizer: mdeberta\n\n Dataset: TASTEset\n\n Unshuffled ratio: 1\n\n Shuffled ratio: 0\n\n Drop duplicates: True\n\n Dataset path = /home/pgajo/working/food/data/EW-TASTE_en-it_DEEPL_localized_uom.json", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
36d3ae99771f5d1dc36b38e661a3f99585ce8634
# Dataset Card for Dataset Name <!-- Provide a quick summary of the dataset. --> Tokenizer: mdeberta Dataset: TASTEset Unshuffled ratio: 0 Shuffled ratio: 1 Drop duplicates: True Dataset path = /home/pgajo/working/food/data/EW-TASTE_en-it_DEEPL_localized_uom.json ## Dataset Details ### Dataset Description <!-- Provide a longer summary of what this dataset is. --> - **Curated by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] ### Dataset Sources [optional] <!-- Provide the basic links for the dataset. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the dataset is intended to be used. --> ### Direct Use <!-- This section describes suitable use cases for the dataset. --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. --> [More Information Needed] ## Dataset Structure <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> [More Information Needed] ## Dataset Creation ### Curation Rationale <!-- Motivation for the creation of this dataset. --> [More Information Needed] ### Source Data <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). --> #### Data Collection and Processing <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. --> [More Information Needed] #### Who are the source data producers? 
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. --> [More Information Needed] ### Annotations [optional] <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. --> #### Annotation process <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. --> [More Information Needed] #### Who are the annotators? <!-- This section describes the people or systems who created the annotations. --> [More Information Needed] #### Personal and Sensitive Information <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. ## Citation [optional] <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. 
--> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Dataset Card Authors [optional] [More Information Needed] ## Dataset Card Contact [More Information Needed]
pgajo/EW-TT-PE_U0_S1_DROP1_mdeberta
[ "region:us" ]
2024-02-08T13:53:19+00:00
{}
2024-02-08T13:53:30+00:00
[]
[]
TAGS #region-us
# Dataset Card for Dataset Name Tokenizer: mdeberta Dataset: TASTEset Unshuffled ratio: 0 Shuffled ratio: 1 Drop duplicates: True Dataset path = /home/pgajo/working/food/data/EW-TASTE_en-it_DEEPL_localized_uom.json ## Dataset Details ### Dataset Description - Curated by: - Funded by [optional]: - Shared by [optional]: - Language(s) (NLP): - License: ### Dataset Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Out-of-Scope Use ## Dataset Structure ## Dataset Creation ### Curation Rationale ### Source Data #### Data Collection and Processing #### Who are the source data producers? ### Annotations [optional] #### Annotation process #### Who are the annotators? #### Personal and Sensitive Information ## Bias, Risks, and Limitations ### Recommendations Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Dataset Card Authors [optional] ## Dataset Card Contact
[ "# Dataset Card for Dataset Name\n\n\n\n\n Tokenizer: mdeberta\n\n Dataset: TASTEset\n\n Unshuffled ratio: 0\n\n Shuffled ratio: 1\n\n Drop duplicates: True\n\n Dataset path = /home/pgajo/working/food/data/EW-TASTE_en-it_DEEPL_localized_uom.json", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Dataset Name\n\n\n\n\n Tokenizer: mdeberta\n\n Dataset: TASTEset\n\n Unshuffled ratio: 0\n\n Shuffled ratio: 1\n\n Drop duplicates: True\n\n Dataset path = /home/pgajo/working/food/data/EW-TASTE_en-it_DEEPL_localized_uom.json", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
1028930ef487328a076b1021ef7836785337aa26
# Dataset Card for "UT" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
chiennv/UT
[ "region:us" ]
2024-02-08T14:00:40+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "conversations", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}, {"name": "source", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 5532505, "num_examples": 7924}], "download_size": 2409051, "dataset_size": 5532505}}
2024-02-08T14:00:44+00:00
[]
[]
TAGS #region-us
# Dataset Card for "UT" More Information needed
[ "# Dataset Card for \"UT\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"UT\"\n\nMore Information needed" ]
9d36de6dada6c5a55c4f09c9f0e9539955b5f42c
# Dataset Card for "CA" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
chiennv/CA
[ "region:us" ]
2024-02-08T14:04:33+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "conversations", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}, {"name": "source", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 184381908, "num_examples": 77566}], "download_size": 80284870, "dataset_size": 184381908}}
2024-02-08T14:04:41+00:00
[]
[]
TAGS #region-us
# Dataset Card for "CA" More Information needed
[ "# Dataset Card for \"CA\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"CA\"\n\nMore Information needed" ]
bd4216e8ec75185a0f51b5969b8fbf4551d73c30
# Dataset Card for "AB" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
chiennv/AB
[ "region:us" ]
2024-02-08T14:06:11+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "conversations", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}, {"name": "source", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 39466749, "num_examples": 25558}], "download_size": 19023106, "dataset_size": 39466749}}
2024-02-08T14:06:15+00:00
[]
[]
TAGS #region-us
# Dataset Card for "AB" More Information needed
[ "# Dataset Card for \"AB\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"AB\"\n\nMore Information needed" ]
f26ed27ebf5f0ee9f4cd9013cced0792aaf0ce3b
# Dataset Card for "CM" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
chiennv/CM
[ "region:us" ]
2024-02-08T14:09:04+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "conversations", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}, {"name": "source", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 5766783, "num_examples": 4443}], "download_size": 2787540, "dataset_size": 5766783}}
2024-02-08T14:09:06+00:00
[]
[]
TAGS #region-us
# Dataset Card for "CM" More Information needed
[ "# Dataset Card for \"CM\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"CM\"\n\nMore Information needed" ]
df262b522b800a41fe4fbde8011e90e29bf77de1
# Dataset Card for "EDE" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
chiennv/EDE
[ "region:us" ]
2024-02-08T14:09:26+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "conversations", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}, {"name": "source", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3445349, "num_examples": 660}], "download_size": 1525877, "dataset_size": 3445349}}
2024-02-08T14:09:28+00:00
[]
[]
TAGS #region-us
# Dataset Card for "EDE" More Information needed
[ "# Dataset Card for \"EDE\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"EDE\"\n\nMore Information needed" ]
a22354197a5d62d7a70e0e3d25b27fa6c7bc739e
# Dataset Card for "GCD" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
chiennv/GCD
[ "region:us" ]
2024-02-08T14:09:40+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "conversations", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}, {"name": "source", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 10406174, "num_examples": 14928}], "download_size": 5193077, "dataset_size": 10406174}}
2024-02-08T14:09:42+00:00
[]
[]
TAGS #region-us
# Dataset Card for "GCD" More Information needed
[ "# Dataset Card for \"GCD\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"GCD\"\n\nMore Information needed" ]
c785f5d79450ec3183e0e53530d4c66af562d7e5
# Dataset Card for "MM" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
chiennv/MM
[ "region:us" ]
2024-02-08T14:10:15+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "conversations", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}, {"name": "source", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 56867458, "num_examples": 56448}], "download_size": 23388972, "dataset_size": 56867458}}
2024-02-08T14:10:22+00:00
[]
[]
TAGS #region-us
# Dataset Card for "MM" More Information needed
[ "# Dataset Card for \"MM\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"MM\"\n\nMore Information needed" ]
0ba9ccd210734980e5457343d3306348f773c605
# Dataset Card for "PP" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
chiennv/PP
[ "region:us" ]
2024-02-08T14:10:45+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "conversations", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}, {"name": "source", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 30837584, "num_examples": 22280}], "download_size": 15199371, "dataset_size": 30837584}}
2024-02-08T14:10:49+00:00
[]
[]
TAGS #region-us
# Dataset Card for "PP" More Information needed
[ "# Dataset Card for \"PP\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"PP\"\n\nMore Information needed" ]
a39c9269b251f3be20484534e884cb882c17d097
# Dataset Card for "financial_sentiment_analysis_train_compilation_v2" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
RobertoMCA97/financial_sentiment_analysis_train_compilation_v2
[ "region:us" ]
2024-02-08T14:11:02+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "fingpt_sentiment_train", "path": "data/fingpt_sentiment_train-*"}, {"split": "financial_phrasebank", "path": "data/financial_phrasebank-*"}, {"split": "twitter_financial_news_sentiment", "path": "data/twitter_financial_news_sentiment-*"}, {"split": "auditor_sentiment", "path": "data/auditor_sentiment-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "fingpt_sentiment_train", "num_bytes": 9855228, "num_examples": 76772}, {"name": "financial_phrasebank", "num_bytes": 601485, "num_examples": 4217}, {"name": "twitter_financial_news_sentiment", "num_bytes": 971346, "num_examples": 9543}, {"name": "auditor_sentiment", "num_bytes": 555930, "num_examples": 3877}], "download_size": 7532886, "dataset_size": 11983989}}
2024-02-08T14:11:16+00:00
[]
[]
TAGS #region-us
# Dataset Card for "financial_sentiment_analysis_train_compilation_v2" More Information needed
[ "# Dataset Card for \"financial_sentiment_analysis_train_compilation_v2\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"financial_sentiment_analysis_train_compilation_v2\"\n\nMore Information needed" ]
08143c7b377629483c059f7d7ddf03c80cac24c2
# Dataset Card for "delicous_books" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
stmackcat/delicous_books
[ "region:us" ]
2024-02-08T14:18:06+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "answer", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 889372, "num_examples": 3346}, {"name": "test", "num_bytes": 10334, "num_examples": 34}], "download_size": 655660, "dataset_size": 899706}}
2024-02-08T14:18:10+00:00
[]
[]
TAGS #region-us
# Dataset Card for "delicous_books" More Information needed
[ "# Dataset Card for \"delicous_books\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"delicous_books\"\n\nMore Information needed" ]
94e513abfe057047a922b4db82a3bfe83fbc2d3f
# Dataset Card for Dataset Name Open source articles about Psoriatic arthritis which have been published in reputed journals This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1). ## Dataset Details ### Dataset Description specifically uses articles which were referenced by international treatment guidelines - **Curated by:** [Devika Dua] - **Funded by [optional]:** [None] - **Shared by [optional]:** [test case with no public sharing] - **Language(s) (NLP):** [NLP] - **License:** [Private] ### Dataset Sources [optional] <!-- Provide the basic links for the dataset. --> - **Repository:** [None] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses text generation about psoriatic arthritis treatment recommendations based on information provided ### Direct Use use it to train an LLM about Psoriatic arthritis treatment [More Information Needed] ### Out-of-Scope Use do not use it for basing treatment of patients. It is only a theoretical exercise [More Information Needed] ## Dataset Structure Dataset structure is divided into input and output fields. Input fields contain labels 'Psoriatic arthritis', 'GRAPPA' etc., whereas the output field contains detailed information about the input prompt [More Information Needed] ## Dataset Creation ### Curation Rationale to study LLM uses in article generation [More Information Needed] ### Source Data open access articles sourced from reputed medical journals #### Data Collection and Processing <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. --> [More Information Needed] #### Who are the source data producers? 
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. --> [More Information Needed] ### Annotations [optional] <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. --> #### Annotation process <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. --> [More Information Needed] #### Who are the annotators? <!-- This section describes the people or systems who created the annotations. --> [More Information Needed] #### Personal and Sensitive Information <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. ## Citation [optional] <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. 
--> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Dataset Card Authors [optional] [More Information Needed] ## Dataset Card Contact [More Information Needed]
devika1/PsA_corpus.csv
[ "region:us" ]
2024-02-08T14:22:19+00:00
{}
2024-02-08T15:19:20+00:00
[]
[]
TAGS #region-us
# Dataset Card for Dataset Name Open source articles about Psoriatic arthritis which have been published in reputed journals This dataset card aims to be a base template for new datasets. It has been generated using this raw template. ## Dataset Details ### Dataset Description specifically uses articles which were referenced by international treatment guidelines - Curated by: [Devika Dua] - Funded by [optional]: [None] - Shared by [optional]: [test case with no public sharing] - Language(s) (NLP): [NLP] - License: [Private] ### Dataset Sources [optional] - Repository: [None] - Paper [optional]: - Demo [optional]: ## Uses text generation about psoriatic arthritis treatment recommendations based on information provided ### Direct Use use it to train an LLM about Psoriatic arthritis treatment ### Out-of-Scope Use do not use it for basing treatment of patients. It is only a theoretical exercise ## Dataset Structure Dataset structure is divided into input and output fields. Input fields contain labels 'Psoriatic arthritis', 'GRAPPA' etc., whereas the output field contains detailed information about the input prompt ## Dataset Creation ### Curation Rationale to study LLM uses in article generation ### Source Data open access articles sourced from reputed medical journals #### Data Collection and Processing #### Who are the source data producers? ### Annotations [optional] #### Annotation process #### Who are the annotators? #### Personal and Sensitive Information ## Bias, Risks, and Limitations ### Recommendations Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Dataset Card Authors [optional] ## Dataset Card Contact
[ "# Dataset Card for Dataset Name\n\nOpen source articles about Psoriatic arthritis which have been published in reputed journals\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.", "## Dataset Details", "### Dataset Description\n\nspecifically uses articles which were referenced by international treatment guidelines\n\n\n\n- Curated by: [Devika Dua]]\n- Funded by [optional]: [None]\n- Shared by [optional]: [test case with no public sharing]\n- Language(s) (NLP): [NLP]\n- License: [Private]", "### Dataset Sources [optional]\n\n\n\n- Repository: [None]\n- Paper [optional]: \n- Demo [optional]:", "## Uses\n\ntext generation about psoriatic arthritis treatment recommendations based on information provided", "### Direct Use\n\nuse it to train llm about Psoriatic arthrtis treatment", "### Out-of-Scope Use\n\ndo not use it for basing treatment of patients. it is only a theoretical exercise", "## Dataset Structure\n\nDataset structure is divided into input and output fields. input fields contain labels 'Psoriatic arthritis', 'GRAPPA' etc whereas the output field contains detailed information about the input prompt", "## Dataset Creation", "### Curation Rationale\n\nto study llm uses in article generation", "### Source Data\n\nopen access articles sourced from reputed medicla journals", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Dataset Name\n\nOpen source articles about Psoriatic arthritis which have been published in reputed journals\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.", "## Dataset Details", "### Dataset Description\n\nspecifically uses articles which were referenced by international treatment guidelines\n\n\n\n- Curated by: [Devika Dua]]\n- Funded by [optional]: [None]\n- Shared by [optional]: [test case with no public sharing]\n- Language(s) (NLP): [NLP]\n- License: [Private]", "### Dataset Sources [optional]\n\n\n\n- Repository: [None]\n- Paper [optional]: \n- Demo [optional]:", "## Uses\n\ntext generation about psoriatic arthritis treatment recommendations based on information provided", "### Direct Use\n\nuse it to train llm about Psoriatic arthrtis treatment", "### Out-of-Scope Use\n\ndo not use it for basing treatment of patients. it is only a theoretical exercise", "## Dataset Structure\n\nDataset structure is divided into input and output fields. input fields contain labels 'Psoriatic arthritis', 'GRAPPA' etc whereas the output field contains detailed information about the input prompt", "## Dataset Creation", "### Curation Rationale\n\nto study llm uses in article generation", "### Source Data\n\nopen access articles sourced from reputed medicla journals", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
280fec613c307ddcbb8b113e734280bc73d4ec5a
# Dataset Card for "test" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
ghermoso/egtzan_plus
[ "region:us" ]
2024-02-08T15:25:41+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "afro", "1": "classical", "2": "country", "3": "disco", "4": "electro", "5": "jazz", "6": "latin", "7": "metal", "8": "pop", "9": "rap", "10": "reggae", "11": "rock"}}}}], "splits": [{"name": "train", "num_bytes": 128963338.5857826, "num_examples": 1697}, {"name": "test", "num_bytes": 14256351.565217393, "num_examples": 189}], "download_size": 143291941, "dataset_size": 143219690.151}}
2024-02-08T15:25:54+00:00
[]
[]
TAGS #region-us
# Dataset Card for "test" More Information needed
[ "# Dataset Card for \"test\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"test\"\n\nMore Information needed" ]
5e9618980bdb9b5366686ccb447440ae4fba22df
!pip install requests-html

import requests
from bs4 import BeautifulSoup
import csv

# Function to scrape data from the website
def scrape_website(url):
    # Send a GET request to the URL
    response = requests.get(url)

    # Check if the request was successful
    if response.status_code == 200:
        # Parse the HTML content
        soup = BeautifulSoup(response.content, 'html.parser')

        # Find the press release content
        press_release_content = soup.find('div', {'id': 'divPressRelease'})

        # Extract the title and content
        title = press_release_content.find('h1').text.strip()
        content = press_release_content.find('div', {'class': 'pressreldetail'}).text.strip()

        return title, content
    else:
        print("Failed to retrieve data from the website.")
        return None, None

# Main function
def main():
    # URL of the website to scrape
    url = 'https://www.pib.gov.in/PressReleasePage.aspx?PRID=1895315'

    # Scrape data from the website
    title, content = scrape_website(url)

    # Write the scraped data to a CSV file
    if title and content:
        with open('scraped_data.csv', 'w', newline='', encoding='utf-8') as csvfile:
            writer = csv.writer(csvfile)
            writer.writerow(['Title', 'Content'])
            writer.writerow([title, content])

        print("Scraped data has been saved to 'scraped_data.csv'.")
    else:
        print("No data was scraped.")
zennn077/India_budget
[ "region:us" ]
2024-02-08T15:27:12+00:00
{}
2024-02-08T15:28:01+00:00
[]
[]
TAGS #region-us
!pip install requests-html import requests from bs4 import BeautifulSoup import csv # Function to scrape data from the website def scrape_website(url): # Send a GET request to the URL response = URL(url) # Check if the request was successful if response.status_code == 200: # Parse the HTML content soup = BeautifulSoup(response.content, 'URL') # Find the press release content press_release_content = URL('div', {'id': 'divPressRelease'}) # Extract the title and content title = press_release_content.find('h1').URL() content = press_release_content.find('div', {'class': 'pressreldetail'}).URL() return title, content else: print("Failed to retrieve data from the website.") return None, None # Main function def main(): # URL of the website to scrape url = 'URL # Scrape data from the website title, content = scrape_website(url) # Write the scraped data to a CSV file if title and content: with open('scraped_data.csv', 'w', newline='', encoding='utf-8') as csvfile: writer = URL(csvfile) writer.writerow(['Title', 'Content']) writer.writerow([title, content]) print("Scraped data has been saved to 'scraped_data.csv'.") else: print("No data was scraped.")
[ "# Function to scrape data from the website\ndef scrape_website(url):\n # Send a GET request to the URL\n response = URL(url)\n\n # Check if the request was successful\n if response.status_code == 200:\n # Parse the HTML content\n soup = BeautifulSoup(response.content, 'URL')\n\n # Find the press release content\n press_release_content = URL('div', {'id': 'divPressRelease'})\n\n # Extract the title and content\n title = press_release_content.find('h1').URL()\n content = press_release_content.find('div', {'class': 'pressreldetail'}).URL()\n\n return title, content\n else:\n print(\"Failed to retrieve data from the website.\")\n return None, None", "# Main function\ndef main():\n # URL of the website to scrape\n url = 'URL\n\n # Scrape data from the website\n title, content = scrape_website(url)\n\n # Write the scraped data to a CSV file\n if title and content:\n with open('scraped_data.csv', 'w', newline='', encoding='utf-8') as csvfile:\n writer = URL(csvfile)\n writer.writerow(['Title', 'Content'])\n writer.writerow([title, content])\n\n print(\"Scraped data has been saved to 'scraped_data.csv'.\")\n else:\n print(\"No data was scraped.\")" ]
[ "TAGS\n#region-us \n", "# Function to scrape data from the website\ndef scrape_website(url):\n # Send a GET request to the URL\n response = URL(url)\n\n # Check if the request was successful\n if response.status_code == 200:\n # Parse the HTML content\n soup = BeautifulSoup(response.content, 'URL')\n\n # Find the press release content\n press_release_content = URL('div', {'id': 'divPressRelease'})\n\n # Extract the title and content\n title = press_release_content.find('h1').URL()\n content = press_release_content.find('div', {'class': 'pressreldetail'}).URL()\n\n return title, content\n else:\n print(\"Failed to retrieve data from the website.\")\n return None, None", "# Main function\ndef main():\n # URL of the website to scrape\n url = 'URL\n\n # Scrape data from the website\n title, content = scrape_website(url)\n\n # Write the scraped data to a CSV file\n if title and content:\n with open('scraped_data.csv', 'w', newline='', encoding='utf-8') as csvfile:\n writer = URL(csvfile)\n writer.writerow(['Title', 'Content'])\n writer.writerow([title, content])\n\n print(\"Scraped data has been saved to 'scraped_data.csv'.\")\n else:\n print(\"No data was scraped.\")" ]
b61068c558ac3c6a082d3692cc6bf94d523aa685
# Dataset Card for Dataset Name <!-- Provide a quick summary of the dataset. --> This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1). ## Dataset Details ### Dataset Description <!-- Provide a longer summary of what this dataset is. --> - **Curated by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] ### Dataset Sources [optional] <!-- Provide the basic links for the dataset. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the dataset is intended to be used. --> ### Direct Use <!-- This section describes suitable use cases for the dataset. --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. --> [More Information Needed] ## Dataset Structure <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> [More Information Needed] ## Dataset Creation ### Curation Rationale <!-- Motivation for the creation of this dataset. --> [More Information Needed] ### Source Data <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). --> ENG: This only me that provide my own data. FR : C'est juste moi qui vais remplir le dataset avec mes infos et ma manière de faire. 
#### Data Collection and Processing <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. --> [More Information Needed] #### Who are the source data producers? <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. --> [More Information Needed] ### Annotations [optional] <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. --> #### Annotation process <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. --> [More Information Needed] #### Who are the annotators? <!-- This section describes the people or systems who created the annotations. --> [More Information Needed] #### Personal and Sensitive Information <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. 
## Citation [optional] <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Dataset Card Authors [optional] [More Information Needed] ## Dataset Card Contact [More Information Needed]
mikoube/pentest
[ "task_categories:text-generation", "task_categories:summarization", "size_categories:1K<n<10K", "language:fr", "language:en", "license:apache-2.0", "CTF", "pentesting", "region:us" ]
2024-02-08T15:40:51+00:00
{"language": ["fr", "en"], "license": "apache-2.0", "size_categories": ["1K<n<10K"], "task_categories": ["text-generation", "summarization"], "tags": ["CTF", "pentesting"]}
2024-02-08T15:50:49+00:00
[]
[ "fr", "en" ]
TAGS #task_categories-text-generation #task_categories-summarization #size_categories-1K<n<10K #language-French #language-English #license-apache-2.0 #CTF #pentesting #region-us
# Dataset Card for Dataset Name This dataset card aims to be a base template for new datasets. It has been generated using this raw template. ## Dataset Details ### Dataset Description - Curated by: - Funded by [optional]: - Shared by [optional]: - Language(s) (NLP): - License: ### Dataset Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Out-of-Scope Use ## Dataset Structure ## Dataset Creation ### Curation Rationale ### Source Data ENG: This only me that provide my own data. FR : C'est juste moi qui vais remplir le dataset avec mes infos et ma manière de faire. #### Data Collection and Processing #### Who are the source data producers? ### Annotations [optional] #### Annotation process #### Who are the annotators? #### Personal and Sensitive Information ## Bias, Risks, and Limitations ### Recommendations Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Dataset Card Authors [optional] ## Dataset Card Contact
[ "# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data\n\n\nENG: This only me that provide my own data. \nFR : C'est juste moi qui vais remplir le dataset avec mes infos et ma manière de faire.", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ "TAGS\n#task_categories-text-generation #task_categories-summarization #size_categories-1K<n<10K #language-French #language-English #license-apache-2.0 #CTF #pentesting #region-us \n", "# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data\n\n\nENG: This only me that provide my own data. \nFR : C'est juste moi qui vais remplir le dataset avec mes infos et ma manière de faire.", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
f3c9c079172114157e98a31629e16caa30678a8d
### SentenceRex This is a dataset for training zero-shot and few-shot sentence level relation extraction models. The dataset was created with a distant supervision technique from Wikipedia. After that, labels were manually checked to be logically consistent with a sentence. Overall, it consists of **847** unique relations. Each entity between which there is a relation is tagged in the following way: <e1></e1> for the source entity and <e2></e2> for the target entity. `labels` column indicates the relation name. ### Feedback We value your input! Share your feedback and suggestions to help us improve our models and datasets. Fill out the feedback [form](https://forms.gle/5CPFFuLzNWznjcpL7) ### Join Our Discord Connect with our community on Discord for news, support, and discussion about our models and datasets. Join [Discord](https://discord.gg/mfZfwjpB)
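The entity tagging scheme described above can be parsed with a short regular-expression helper. A minimal sketch — the sample sentence is invented for illustration and is not taken from the dataset:

```python
import re

def extract_entities(tagged: str) -> dict:
    # The source entity is wrapped in <e1></e1> and the target entity in
    # <e2></e2>, following the SentenceRex tagging scheme described above.
    e1 = re.search(r"<e1>(.*?)</e1>", tagged)
    e2 = re.search(r"<e2>(.*?)</e2>", tagged)
    return {
        "source": e1.group(1) if e1 else None,
        "target": e2.group(1) if e2 else None,
    }

sample = "<e1>Marie Curie</e1> was born in <e2>Warsaw</e2>."
print(extract_entities(sample))  # {'source': 'Marie Curie', 'target': 'Warsaw'}
```

Paired with the `labels` column, this yields (source, relation, target) triples for training or evaluation.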
knowledgator/sentence_rex
[ "task_categories:text-classification", "task_categories:text2text-generation", "size_categories:10K<n<100K", "language:en", "license:apache-2.0", "text classification", "relation extraction", "region:us" ]
2024-02-08T15:41:22+00:00
{"language": ["en"], "license": "apache-2.0", "size_categories": ["10K<n<100K"], "task_categories": ["text-classification", "text2text-generation"], "tags": ["text classification", "relation extraction"]}
2024-02-08T15:48:15+00:00
[]
[ "en" ]
TAGS #task_categories-text-classification #task_categories-text2text-generation #size_categories-10K<n<100K #language-English #license-apache-2.0 #text classification #relation extraction #region-us
### SentenceRex This is a dataset for training zero-shot and few-shot sentence level relation extraction models. The dataset was created with a distant supervision technique from Wikipedia. After that, labels were manually checked to be logically consistent with a sentence. Overall, it consists of 847 unique relations. Each entity between which there is a relation is tagged in the following way: <e1></e1> for the source entity and <e2></e2> for the target entity. 'labels' column indicates the relation name. ### Feedback We value your input! Share your feedback and suggestions to help us improve our models and datasets. Fill out the feedback form ### Join Our Discord Connect with our community on Discord for news, support, and discussion about our models and datasets. Join Discord
[ "### SentenceRex\nThis is a dataset for training zero-shot and few-shot sentence level relation extraction models.\n\nThe dataset was created with a distant supervision technique from Wikipedia. \n\nAfter that, labels were manually checked to be logically consistent with a sentence. Overall, it consists of 847 unique relations. \n\nEach entity between which there is a relation is tagged in the following way: <e1></e1> for the source entity and <e2></e2> for the target entity.\n\n'labels' column indicates the relation name.", "### Feedback\nWe value your input! Share your feedback and suggestions to help us improve our models and datasets.\nFill out the feedback form", "### Join Our Discord\nConnect with our community on Discord for news, support, and discussion about our models and datasets.\nJoin Discord" ]
[ "TAGS\n#task_categories-text-classification #task_categories-text2text-generation #size_categories-10K<n<100K #language-English #license-apache-2.0 #text classification #relation extraction #region-us \n", "### SentenceRex\nThis is a dataset for training zero-shot and few-shot sentence level relation extraction models.\n\nThe dataset was created with a distant supervision technique from Wikipedia. \n\nAfter that, labels were manually checked to be logically consistent with a sentence. Overall, it consists of 847 unique relations. \n\nEach entity between which there is a relation is tagged in the following way: <e1></e1> for the source entity and <e2></e2> for the target entity.\n\n'labels' column indicates the relation name.", "### Feedback\nWe value your input! Share your feedback and suggestions to help us improve our models and datasets.\nFill out the feedback form", "### Join Our Discord\nConnect with our community on Discord for news, support, and discussion about our models and datasets.\nJoin Discord" ]
78657b318768398eef3319aa6e526f3d320277de
<p align="center"> <img src="assets/figures/opentom_logo.png" width="480"> </p> <span style="color:red;" align="center;">Please avoid testing OpenToM questions in OpenAI playground or places where the data might be used for LLM training.</span> OpenToM is a new benchmark for assessing LLMs' Neural Theory-of-Mind (N-ToM) with the following key features: (1) longer and clearer narrative stories (2) characters with explicit personality traits (3) actions that are triggered by character intentions (4) questions designed to challenge LLMs' capabilities of modeling characters' mental states of both the physical and psychological world. ## Dataset Details The OpenToM benchmark contains 696 narratives, 596 of which are narratives of normal length (average word count: 194.3 words) and 100 of which are long narratives (average word count: 491.6 words). Each narrative is followed by 23 ToM questions, making a total of 16008 questions. The OpenToM benchmark poses first-order and second-order questions in the following genres: 1. **Location**: this is a prevalent type of question seen in many ToM benchmarks. We break location questions into *coarse* and *fine*, differing in granularity. *Coarse* questions ask if a character thinks that an entity is in its initial location, whereas *fine* questions ask for the precise location of an entity. 2. **Multihop**: we compose questions that demand an additional reasoning hop on top of the *Location* questions. Specifically, we inquire about characters' perception of the *fullness* and the *accessibility* of an entity. We incorporate **social commonsense** in the *accessibility* questions. For instance, if an entity is moved into someone's bag, then it becomes *less accessible* to others since people shall not access another's bag without asking for permission. 3. **Attitude**: LLMs' capability of understanding character's perception of the psychological world has been overlooked by many established N-ToM benchmarks. 
We propose the *attitude* question to test LLMs' capabilities in understanding characters' attitude towards some events. For instance, if my favorite rubber duck is taken away from me without asking, I would hold a *negative* attitude towards this event. All the OpenToM questions are designed to be a binary or ternary classification task. We recommend using *macro-averaged F1 score* to evaluate LLMs' performance as the labels are not uniformly distributed. ### Dataset Description - **Curated by:** KclNLP - **Funded by [optional]:** KclNLP - **Language(s) (NLP):** English - **License:** [More Information Needed] ### Dataset Generating Process <!-- Provide the basic links for the dataset. --> - **Repository:** https://github.com/seacowx/OpenToM - **Paper:** https://arxiv.org/pdf/2402.06044.pdf ## Uses The OpenToM dataset is designed to benchmark the performance of LLMs. **It shall not be used for training or fine-tuning. Therefore, <span style="color:red">please avoid testing OpenToM questions in OpenAI playground or places where the data might be used for LLM training.</span>** ## Dataset Structure <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> `opentom.json` contains the QA data with 13,708 questions derived from 596 OpenToM stories of normal length `opentom_long.json` contains the QA data with 2,300 questions derived from 100 OpenToM long stories To access individual question types, navigate to the **`opentom_data`** folder, in which there is a `metadata.json / metadata_long.json` file containing the metadata of OpenToM. The other JSON files store OpenToM questions of each genre asked for either first-order (fo) or second-order (so) ToM. - `location_cg_fo`: Coarse location questions asking about characters' belief of whether an entity is in its initial location (First-Order). 
- `location_cg_so`: Coarse location questions asking about characters' belief of whether another character believes that an entity is in its initial location (Second-Order) - `location_fg_fo`: Fine location questions asking about characters' belief of the precise location of an entity (First-Order). - `location_fg_so`: Fine location questions asking about characters' belief of another character's belief of the precise location of an entity (Second-Order). - `multihop_fo`: Multihop questions that request additional reasoning hops based on location questions (First-Order). - `multihop_so`: Multihop questions that request additional reasoning hops based on location questions (Second-Order). - `attitude`: Questions inquiring about characters' attitude towards others' actions. Each metadata contains the following information: - `plot`: stores the OpenToM plot used to produce an OpenToM story. - `plot_info`: stores the key information in the OpenToM plot, which includes the two protagonists, the entity-of-interest, and the two containers. - `preferences`: stores the first-order and second-order preference belief of the characters. - `personality`: stores the personality trait of the *mover*. - `sentiment_statement`: stores the *mover*'s latent sentiment towards the entity-of-interest. - `true_sentiment`: stores the *mover*'s latent sentiment towards the entity-of-interest. - `intention`: stores the *mover*'s latent intention towards the entity-of-interest. - `new_location`: the new location (fine-grained) of the entity. - `observed`: documents whether the *observer* witnessed the *mover*'s action. - `narrative`: the OpenToM narrative. ## Dataset Creation ![alt text](assets/figures/data_gen_process.png "The Complete OpenToM Data Generation Pipeline") ## Acknowledgement Part of the contents of our story generation plots are derived from the [ToMi dataset](https://github.com/facebookresearch/ToMi). 
We wish to thank them for generously making the ToMi dataset publicly available. ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> The drafts of OpenToM stories are composed using LLMs. Although some of the stories went through human revision, we acknowledge that the texts generated by LLMs could contain biases and lack lexical diversity. ## Citation <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. --> If you find our benchmark useful, please cite our work: **BibTeX:** ``` @article{xu2024opentom, title={OpenToM: A Comprehensive Benchmark for Evaluating Theory-of-Mind Reasoning Capabilities of Large Language Models}, author={Xu, Hainiu and Zhao, Runcong and Zhu, Lixing and Du, Jinhua and He, Yulan}, journal={arXiv preprint arXiv:2402.06044}, year={2024} } ``` ## Dataset Card Contact For any question or inquiry about the OpenToM benchmark, please email [[email protected]](mailto:[email protected]) <p align="center"> <img src="assets/figures/KCLNLP.png" width="256"> </p>
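The card recommends macro-averaged F1 because the answer labels are not uniformly distributed. A minimal, dependency-free sketch of that metric — the toy gold/predicted labels below are invented for illustration:

```python
from collections import defaultdict

def macro_f1(gold, pred):
    # Macro-averaged F1: compute per-class F1, then take the unweighted
    # mean, so minority classes count as much as majority classes.
    labels = sorted(set(gold) | set(pred))
    tp, fp, fn = defaultdict(int), defaultdict(int), defaultdict(int)
    for g, p in zip(gold, pred):
        if g == p:
            tp[g] += 1
        else:
            fp[p] += 1
            fn[g] += 1
    f1s = []
    for lab in labels:
        prec = tp[lab] / (tp[lab] + fp[lab]) if tp[lab] + fp[lab] else 0.0
        rec = tp[lab] / (tp[lab] + fn[lab]) if tp[lab] + fn[lab] else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)

gold = ["yes", "yes", "no", "no", "no"]
pred = ["yes", "no", "no", "no", "yes"]
print(round(macro_f1(gold, pred), 3))  # 0.583
```

For OpenToM, `gold` and `pred` would hold the answer strings of one question genre (e.g. the coarse location questions), scored separately per genre.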
SeacowX/OpenToM
[ "task_categories:question-answering", "task_categories:text-classification", "task_categories:text-generation", "size_categories:10K<n<100K", "language:en", "arxiv:2402.06044", "region:us" ]
2024-02-08T17:25:58+00:00
{"language": ["en"], "size_categories": ["10K<n<100K"], "task_categories": ["question-answering", "text-classification", "text-generation"], "pretty_name": "OpenToM", "configs": [{"config_name": "default", "data_files": [{"split": "Long", "path": "opentom.json"}, {"split": "ExtraLong", "path": "opentom_long.json"}]}]}
2024-02-14T12:56:13+00:00
[ "2402.06044" ]
[ "en" ]
TAGS #task_categories-question-answering #task_categories-text-classification #task_categories-text-generation #size_categories-10K<n<100K #language-English #arxiv-2402.06044 #region-us
<p align="center"> <img src="assets/figures/opentom_logo.png" width="480"> </p> <span style="color:red;" align="center;">Please avoid testing OpenToM questions in OpenAI playground or places where the data might be used for LLM training.</span> OpenToM is a new benchmark for assessing LLMs' Neural Theory-of-Mind (N-ToM) with the following key features: (1) longer and clearer narrative stories (2) characters with explicit personality traits (3) actions that are triggered by character intentions (4) questions designed to challenge LLMs' capabilities of modeling characters' mental states of both the physical and psychological world. ## Dataset Details The OpenToM benchmark contains 696 narratives, 596 of which are narratives of normal length (average word count: 194.3 words) and 100 of which are long narratives (average word count: 491.6 words). Each narrative is followed by 23 ToM questions, making a total of 16008 questions. The OpenToM benchmark poses first-order and second-order questions in the following genres: 1. Location: this is a prevalent type of question seen in many ToM benchmarks. We break location questions into *coarse* and *fine*, differing in granularity. *Coarse* questions ask if a character thinks that an entity is in its initial location, whereas *fine* questions ask for the precise location of an entity. 2. Multihop: we compose questions that demand an additional reasoning hop on top of the *Location* questions. Specifically, we inquire about characters' perception of the *fullness* and the *accessibility* of an entity. We incorporate social commonsense in the *accessibility* questions. For instance, if an entity is moved into someone's bag, then it becomes *less accessible* to others since people shall not access another's bag without asking for permission. 3. Attitude: LLMs' capability of understanding character's perception of the psychological world has been overlooked by many established N-ToM benchmarks. 
We propose the *attitude* question to test LLMs' capabilities in understanding characters' attitude towards some events. For instance, if my favorite rubber duck is taken away from me without asking, I would hold a *negative* attitude towards this event. All the OpenToM questions are designed to be a binary or ternary classification task. We recommend using *macro-averaged F1 score* to evaluate LLMs' performance as the labels are not uniformly distributed. ### Dataset Description - Curated by: KclNLP - Funded by [optional]: KclNLP - Language(s) (NLP): English - License: ### Dataset Generating Process - Repository: URL - Paper: URL ## Uses The OpenToM dataset is designed to benchmark the performance of LLMs. It shall not be used for training or fine-tuning. Therefore, <span style="color:red">please avoid testing OpenToM questions in OpenAI playground or places where the data might be used for LLM training.</span> ## Dataset Structure 'URL' contains the QA data with 13,708 questions derived from 596 OpenToM stories of normal length 'opentom_long.json' contains the QA data with 2,300 questions derived from 100 OpenToM long stories To access individual question types, navigate to the 'opentom_data' folder, in which there is a 'URL / metadata_long.json' file containing the metadata of OpenToM. The other JSON files store OpenToM questions of each genre asked for either first-order (fo) or second-order (so) ToM. - 'location_cg_fo': Coarse location questions asking about characters' belief of whether an entity is in its initial location (First-Order). - 'location_cg_so': Coarse location questions asking about characters' belief of whether another character believes that an entity is in its initial location (Second-Order) - 'location_fg_fo': Fine location questions asking about characters' belief of the precise location of an entity (First-Order). 
- 'location_fg_so': Fine location questions asking about characters' belief of another character's belief of the precise location of an entity (Second-Order). - 'multihop_fo': Multihop questions that request additional reasoning hops based on location questions (First-Order). - 'multihop_so': Multihop questions that request additional reasoning hops based on location questions (Second-Order). - 'attitude': Questions inquiring about characters' attitude towards others' actions. Each metadata contains the following information: - 'plot': stores the OpenToM plot used to produce an OpenToM story. - 'plot_info': stores the key information in the OpenToM plot, which includes the two protagonists, the entity-of-interest, and the two containers. - 'preferences': stores the first-order and second-order preference belief of the characters. - 'personality': stores the personality trait of the *mover*. - 'sentiment_statement': stores the *mover*'s latent sentiment towards the entity-of-interest. - 'true_sentiment': stores the *mover*'s latent sentiment towards the entity-of-interest. - 'intention': stores the *mover*'s latent intention towards the entity-of-interest. - 'new_location': the new location (fine-grained) of the entity. - 'observed': documents whether the *observer* witnessed the *mover*'s action. - 'narrative': the OpenToM narrative. ## Dataset Creation !alt text ## Acknowledgement Part of the contents of our story generation plots are derived from the ToMi dataset. We wish to thank them for generously making the ToMi dataset publicly available. ## Bias, Risks, and Limitations The drafts of OpenToM stories are composed using LLMs. Although some of the stories went through human revision, we acknowledge that the texts generated by LLMs could contain biases and lack lexical diversity. 
If you find our benchmark useful, please cite our work: BibTeX: ## Dataset Card Contact For any question or inquiry about the OpenToM benchmark, please email URL@URL <p align="center"> <img src="assets/figures/URL" width="256"> </p>
[ "## Dataset Details\n\nThe OpenToM benchmark contains 696 narratives, 596 of which are narratives of normal length (average word count: 194.3 words) and 100 of which are long narratives (average word count: 491.6 words).\nEach of the narrative is followed with 23 ToM questions, making a total of 16008 questions.\nThe OpenToM benchmark pose first-order and second-order questions in the following genres:\n1. Location: this is a prevelant type of question seen in many ToM benchmarks. We break location questions into *coarse* and *fine*, differ by granularity. *Coarse* questions ask if a character thinks that an entity is in its initial location where as *fine* questions ask the precise location of an entity.\n2. Multihop: we compose questions that demand an additional reasoning hop on top of the *Location* questions. Specifically, we inquire characters' perception of the *fullness* and the *accessibility* of an entity. We incoporate social commonsense in the *accessibility* questions. For instance, if an entity is moved into someone's bag, then it beomces *less accessible* to others since people shall not access other's bag without asking for permission.\n3. Attitude: LLMs' capability of understanding character's perception of the psychological world has been overlooked by many established N-ToM benchmarks. We propose the *attitude* question to test LLMs' capabilities in understanding character's attitude towards some events. For instance, if my favorite rubber duck is taken away from me without asking, I would hold a *negative* attitude towards this event.\n\nAll the OpenToM questions are designed to be a binary or ternary classification task. 
We recommend using *macro-averaged F1 score* to evaluate LLMs' performance as the labels are not uniformly distributed.", "### Dataset Description\n\n- Curated by: KclNLP\n- Funded by [optional]: KclNLP\n- Language(s) (NLP): English\n- License:", "### Dataset Generating Process\n\n\n\n- Repository: URL\n- Paper: URL", "## Uses\n\nThe OpenToM dataset is designed to benchmark the performance of LLMs. It shall not be used for training or fine-tuning. Therefore, <span style=\"color:red\">please avoid testing OpenToM questions in OpenAI playground or places where the data might be used for LLM training.</span>", "## Dataset Structure\n\n\n\n'URL' contains the QA data with 13,708 questions derived from 596 OpenToM stories of normal length\n\n'opentom_long.json' contains the QA data with 2,300 questions derived from 100 OpenToM long stories\n\nTo access individual question types, nevigate to the 'opentom_data' folder, in which there is a 'URL / metadata_long.json' file containing the metadata of OpenToM. The other JSON files store OpenToM questions of each genre asked for either first-order (fo) or second-order (so) ToM. 
\n- 'location_cg_fo': Coarse location questions asking about characters' belief of whether an entity is in its initial location (First-Order).\n- 'location_cg_so': Coarse location questions asking about characters' belief of whether another character believes that an entity is in its initial location (Second-Order)\n- 'location_fg_fo': Fine location questions asking about characters' belief of the precise location of an entity (First-Order).\n- 'location_fg_so': Fine location questions asking about characters' belief of another character's belief of the precise location of an entity (Second-Order).\n- 'multihop_fo': Multihop questions that requesting additional reasoning hops based on location questions (First-Order).\n- 'multihop_so': Multihop questions that requesting additional reasoning hops based on location questions (Second-Order).\n- 'attitude': Questions inquire about characters' attitude towards others' actions.\n\nEach metadata contains the following information:\n- 'plot': stores the OpenToM plot used to produce an OpenToM story.\n- 'plot_info': stores the key information in OpenToM plot, which include the two protangonists, the entity-of-interest, and the two containers.\n- 'preferences': stores the first-order and second-order preference belief of the characters.\n- 'personality': stores the presonality trait of the *mover*.\n- 'sentiment_statement': stores the *mover*'s latent sentiment towards the entity-of-interest.\n- 'true_sentiment': stores the *mover*'s latent sentiment towards the entity-of-interest.\n- 'intention': stores the *mover*'s latent intention towards the entity-of-interest.\n- 'new_location': the new location (fine-grained) of the entity.\n- 'observed': documents whether the *observer* witnessed the *mover*'s action.\n- 'narrative': the OpenToM narrative.", "## Dataset Creation\n\n!alt text", "## Acknowledgement\nPart of the contents of our story generation plots are derived from the ToMi dataset. 
We wish to thank them for generously making the ToMi dataset publicaly available.", "## Bias, Risks, and Limitations\n\n\n\nThe drafts of OpenToM stories are composed using LLMs. Although some of the stories went through human revision, we acknowledge that the texts generated by LLMs could contain biases and lack lexical diversity.\n\nIf you find our benchmark useful, please cite our work:\n\nBibTeX:", "## Dataset Card Contact\n\nFor any question or inquiry about the OpenToM benchmark, please email URL@URL\n\n<p align=\"center\">\n <img src=\"assets/figures/URL\" width=\"256\">\n</p>" ]
[ "TAGS\n#task_categories-question-answering #task_categories-text-classification #task_categories-text-generation #size_categories-10K<n<100K #language-English #arxiv-2402.06044 #region-us \n", "## Dataset Details\n\nThe OpenToM benchmark contains 696 narratives, 596 of which are narratives of normal length (average word count: 194.3 words) and 100 of which are long narratives (average word count: 491.6 words).\nEach of the narrative is followed with 23 ToM questions, making a total of 16008 questions.\nThe OpenToM benchmark pose first-order and second-order questions in the following genres:\n1. Location: this is a prevelant type of question seen in many ToM benchmarks. We break location questions into *coarse* and *fine*, differ by granularity. *Coarse* questions ask if a character thinks that an entity is in its initial location where as *fine* questions ask the precise location of an entity.\n2. Multihop: we compose questions that demand an additional reasoning hop on top of the *Location* questions. Specifically, we inquire characters' perception of the *fullness* and the *accessibility* of an entity. We incoporate social commonsense in the *accessibility* questions. For instance, if an entity is moved into someone's bag, then it beomces *less accessible* to others since people shall not access other's bag without asking for permission.\n3. Attitude: LLMs' capability of understanding character's perception of the psychological world has been overlooked by many established N-ToM benchmarks. We propose the *attitude* question to test LLMs' capabilities in understanding character's attitude towards some events. For instance, if my favorite rubber duck is taken away from me without asking, I would hold a *negative* attitude towards this event.\n\nAll the OpenToM questions are designed to be a binary or ternary classification task. 
We recommend using *macro-averaged F1 score* to evaluate LLMs' performance as the labels are not uniformly distributed.", "### Dataset Description\n\n- Curated by: KclNLP\n- Funded by [optional]: KclNLP\n- Language(s) (NLP): English\n- License:", "### Dataset Generating Process\n\n\n\n- Repository: URL\n- Paper: URL", "## Uses\n\nThe OpenToM dataset is designed to benchmark the performance of LLMs. It shall not be used for training or fine-tuning. Therefore, <span style=\"color:red\">please avoid testing OpenToM questions in OpenAI playground or places where the data might be used for LLM training.</span>", "## Dataset Structure\n\n\n\n'URL' contains the QA data with 13,708 questions derived from 596 OpenToM stories of normal length\n\n'opentom_long.json' contains the QA data with 2,300 questions derived from 100 OpenToM long stories\n\nTo access individual question types, nevigate to the 'opentom_data' folder, in which there is a 'URL / metadata_long.json' file containing the metadata of OpenToM. The other JSON files store OpenToM questions of each genre asked for either first-order (fo) or second-order (so) ToM. 
\n- 'location_cg_fo': Coarse location questions asking about characters' belief of whether an entity is in its initial location (First-Order).\n- 'location_cg_so': Coarse location questions asking about characters' belief of whether another character believes that an entity is in its initial location (Second-Order)\n- 'location_fg_fo': Fine location questions asking about characters' belief of the precise location of an entity (First-Order).\n- 'location_fg_so': Fine location questions asking about characters' belief of another character's belief of the precise location of an entity (Second-Order).\n- 'multihop_fo': Multihop questions that requesting additional reasoning hops based on location questions (First-Order).\n- 'multihop_so': Multihop questions that requesting additional reasoning hops based on location questions (Second-Order).\n- 'attitude': Questions inquire about characters' attitude towards others' actions.\n\nEach metadata contains the following information:\n- 'plot': stores the OpenToM plot used to produce an OpenToM story.\n- 'plot_info': stores the key information in OpenToM plot, which include the two protangonists, the entity-of-interest, and the two containers.\n- 'preferences': stores the first-order and second-order preference belief of the characters.\n- 'personality': stores the presonality trait of the *mover*.\n- 'sentiment_statement': stores the *mover*'s latent sentiment towards the entity-of-interest.\n- 'true_sentiment': stores the *mover*'s latent sentiment towards the entity-of-interest.\n- 'intention': stores the *mover*'s latent intention towards the entity-of-interest.\n- 'new_location': the new location (fine-grained) of the entity.\n- 'observed': documents whether the *observer* witnessed the *mover*'s action.\n- 'narrative': the OpenToM narrative.", "## Dataset Creation\n\n!alt text", "## Acknowledgement\nPart of the contents of our story generation plots are derived from the ToMi dataset. 
We wish to thank them for generously making the ToMi dataset publicly available.", "## Bias, Risks, and Limitations\n\n\n\nThe drafts of OpenToM stories are composed using LLMs. Although some of the stories went through human revision, we acknowledge that the texts generated by LLMs could contain biases and lack lexical diversity.\n\nIf you find our benchmark useful, please cite our work:\n\nBibTeX:", "## Dataset Card Contact\n\nFor any question or inquiry about the OpenToM benchmark, please email URL@URL\n\n<p align=\"center\">\n  <img src=\"assets/figures/URL\" width=\"256\">\n</p>" ]
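The OpenToM card above recommends macro-averaged F1 precisely because the labels are imbalanced. A minimal sketch of what that metric does; this is illustrative code, not the benchmark's official scorer:

```python
def macro_f1(y_true, y_pred):
    """Macro-averaged F1: compute F1 per label, then take the unweighted mean,
    so rare labels weigh as much as frequent ones."""
    labels = sorted(set(y_true) | set(y_pred))
    per_label_f1 = []
    for label in labels:
        tp = sum(t == label and p == label for t, p in zip(y_true, y_pred))
        fp = sum(t != label and p == label for t, p in zip(y_true, y_pred))
        fn = sum(t == label and p != label for t, p in zip(y_true, y_pred))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        denom = precision + recall
        per_label_f1.append(2 * precision * recall / denom if denom else 0.0)
    return sum(per_label_f1) / len(per_label_f1)

# Always answering "yes" reaches 0.75 accuracy on this toy split, but only
# 3/7 ≈ 0.43 macro-F1, because the ignored "no" label drags the mean down.
score = macro_f1(["yes", "yes", "yes", "no"], ["yes", "yes", "yes", "yes"])
```

This is why a majority-class baseline looks much weaker under macro-F1 than under accuracy on skewed label sets.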
3829b5ae5792df1cd4af032454f2ba0cd9074dfd
### What is this? This is a subset of the aurora-m training data. The data has been clustered into samples by languages and for code. The license to the underlying data is based on the original data source, which is the HPLT dataset, the Pile, Refined Web, Red Pajama 1, and the Stack. We make no claim to any copyrights under the original works, and have only compiled these into clusters for fair use machine learning research. PLEASE USE AT YOUR OWN RISK. WE MAKE NO WARRANTIES AS TO NON-INFRINGEMENT. Please note that some multi-lingual wikipedia articles from red pajama 1 are in the "en" folder. We will need to move those out to their individual language folders. ### Usage: We will write a more full fledged data card, but the user can use the files under each language as a cluster of data for training in context or otherwise.
aurora-m/aurora-m-cluster
[ "license:cc-by-nc-2.0", "region:us" ]
2024-02-08T17:46:29+00:00
{"license": "cc-by-nc-2.0"}
2024-02-13T01:57:02+00:00
[]
[]
TAGS #license-cc-by-nc-2.0 #region-us
### What is this? This is a subset of the aurora-m training data. The data has been clustered into samples by languages and for code. The license to the underlying data is based on the original data source, which is the HPLT dataset, the Pile, Refined Web, Red Pajama 1, and the Stack. We make no claim to any copyrights under the original works, and have only compiled these into clusters for fair use machine learning research. PLEASE USE AT YOUR OWN RISK. WE MAKE NO WARRANTIES AS TO NON-INFRINGEMENT. Please note that some multi-lingual wikipedia articles from red pajama 1 are in the "en" folder. We will need to move those out to their individual language folders. ### Usage: We will write a more full fledged data card, but the user can use the files under each language as a cluster of data for training in context or otherwise.
[ "### What is this?\nThis is a subset of the aurora-m training data. The data has been clustered into samples by languages and for code. \nThe license to the underlying data is based on the original data source, which is the HPLT dataset, the Pile, Refined Web, Red Pajama 1, and the Stack.\n\nWe make no claim to any copyrights under the original works, and have only compiled these into clusters for fair use machine learning research.\n\nPLEASE USE AT YOUR OWN RISK. WE MAKE NO WARRANTIES AS TO NON-INFRINGEMENT.\n\n\nPlease note that some multi-lingual wikipedia articles from red pajama 1 are in the \"en\" folder. We will need to move those out to their individual language folders.", "### Usage:\n\nWe will write a more full fledged data card, but the user can use the files under each language as a cluster of data for training in context or otherwise." ]
[ "TAGS\n#license-cc-by-nc-2.0 #region-us \n", "### What is this?\nThis is a subset of the aurora-m training data. The data has been clustered into samples by languages and for code. \nThe license to the underlying data is based on the original data source, which is the HPLT dataset, the Pile, Refined Web, Red Pajama 1, and the Stack.\n\nWe make no claim to any copyrights under the original works, and have only compiled these into clusters for fair use machine learning research.\n\nPLEASE USE AT YOUR OWN RISK. WE MAKE NO WARRANTIES AS TO NON-INFRINGEMENT.\n\n\nPlease note that some multi-lingual wikipedia articles from red pajama 1 are in the \"en\" folder. We will need to move those out to their individual language folders.", "### Usage:\n\nWe will write a more full fledged data card, but the user can use the files under each language as a cluster of data for training in context or otherwise." ]
113eb2646d00382afc2b51174c0c5d688c292d24
Golfinho 🐬 https://erichartford.com/dolphin Detalhes do conjunto de dados Este conjunto de dados é uma tentativa de replicar os resultados do Orca da Microsoft. Nosso conjunto de dados consiste em: - Aproximadamente 1 milhão de FLANv2 aumentados com completudes GPT-4 (flan1m-alpaca-uncensored.jsonl) - Aproximadamente 3,5 milhões de FLANv2 aumentados com completudes GPT-3.5 (flan5m-alpaca-uncensored.jsonl) Seguimos a distribuição de submix e sistema de estímulo descrita no artigo do Orca. Com algumas exceções. Incluímos todos os 75.000 do CoT no conjunto de dados FLAN-1m em vez de amostrá-lo. Além disso, descobrimos que muitos itens estavam duplicados, então removemos as duplicatas, resultando em 3,5 milhões de instruções no conjunto de dados ChatGPT. Em seguida, filtramos instâncias de alinhamento, recusa, evasão e viés, a fim de produzir um modelo não censurado no qual pode ser aplicada sua personalizada alinhamento LoRA. Distribuição de tokens para completudes GPT-3.5 ![dolphin-llama](https://github.com/shahules786/mayavoz/assets/25312635/0a7bfd05-fadf-4eb6-9111-f44c6e53d95d) Carregando ```python ## carregar dataset = load_dataset("JJhooww/dolphin_ptbr_alpaca_format") ``` Este conjunto de dados possui licença apache-2.0 para uso comercial ou não comercial. Os modelos Dolphin que forem lançados estarão sujeitos à licença do modelo fundamental no qual foram treinados. (Os lançamentos do LLaMA serão não comerciais) Gostaria de agradecer à equipe variada de engenheiros de IA/ML de código aberto que trabalharam ao meu lado nessa empreitada. Incluindo: - Wing "Caseus" Lian e NanoBit do OpenAccess AI Collective - Rohan - Teknium - Pankaj Mathur - Tom "TheBloke" Jobbins por quantizar e amplificar - Agradecimentos especiais a EdenCoder e chirper.ai por mentoria e patrocínio financeiro. - Agradecimentos especiais a Kilkonie por sua mentoria muito valorizada. - Todas as outras pessoas da comunidade de IA de código aberto que me ensinaram e me ajudaram ao longo do caminho.
JJhooww/dolphin_ptbr_alpaca_format
[ "task_categories:text-generation", "size_categories:100K<n<1M", "language:pt", "region:us" ]
2024-02-08T18:12:46+00:00
{"language": ["pt"], "size_categories": ["100K<n<1M"], "task_categories": ["text-generation"]}
2024-02-12T18:22:42+00:00
[]
[ "pt" ]
TAGS #task_categories-text-generation #size_categories-100K<n<1M #language-Portuguese #region-us
Golfinho URL Detalhes do conjunto de dados Este conjunto de dados é uma tentativa de replicar os resultados do Orca da Microsoft. Nosso conjunto de dados consiste em: - Aproximadamente 1 milhão de FLANv2 aumentados com completudes GPT-4 (URL) - Aproximadamente 3,5 milhões de FLANv2 aumentados com completudes GPT-3.5 (URL) Seguimos a distribuição de submix e sistema de estímulo descrita no artigo do Orca. Com algumas exceções. Incluímos todos os 75.000 do CoT no conjunto de dados FLAN-1m em vez de amostrá-lo. Além disso, descobrimos que muitos itens estavam duplicados, então removemos as duplicatas, resultando em 3,5 milhões de instruções no conjunto de dados ChatGPT. Em seguida, filtramos instâncias de alinhamento, recusa, evasão e viés, a fim de produzir um modelo não censurado no qual pode ser aplicada sua personalizada alinhamento LoRA. Distribuição de tokens para completudes GPT-3.5 !dolphin-llama Carregando Este conjunto de dados possui licença apache-2.0 para uso comercial ou não comercial. Os modelos Dolphin que forem lançados estarão sujeitos à licença do modelo fundamental no qual foram treinados. (Os lançamentos do LLaMA serão não comerciais) Gostaria de agradecer à equipe variada de engenheiros de IA/ML de código aberto que trabalharam ao meu lado nessa empreitada. Incluindo: - Wing "Caseus" Lian e NanoBit do OpenAccess AI Collective - Rohan - Teknium - Pankaj Mathur - Tom "TheBloke" Jobbins por quantizar e amplificar - Agradecimentos especiais a EdenCoder e URL por mentoria e patrocínio financeiro. - Agradecimentos especiais a Kilkonie por sua mentoria muito valorizada. - Todas as outras pessoas da comunidade de IA de código aberto que me ensinaram e me ajudaram ao longo do caminho.
[]
[ "TAGS\n#task_categories-text-generation #size_categories-100K<n<1M #language-Portuguese #region-us \n" ]
4d5200883df788426d9fa609d41328fd2f099bd4
configs: - config_name: train_data data_files: "train.json" - config_name: test_data data_files: "test.json"
melaniasala/PII_Data_Detection
[ "region:us" ]
2024-02-08T18:34:31+00:00
{}
2024-02-08T19:37:19+00:00
[]
[]
TAGS #region-us
configs: - config_name: train_data data_files: "URL" - config_name: test_data data_files: "URL"
[]
[ "TAGS\n#region-us \n" ]
76e22065f8dfbcc61b7a81d03cb584c06f562c80
ShareGPT-formatted dataset from [andrijdavid/roleplay-conversation](https://huggingface.co/datasets/andrijdavid/roleplay-conversation). It's missing some data at the end because I was too lazy to continue; there was too much to modify, and I did it by hand/notepad++/regex lmao
Undi95/andrijdavid_roleplay-conversation-sharegpt
[ "region:us" ]
2024-02-08T20:51:11+00:00
{}
2024-02-08T21:25:39+00:00
[]
[]
TAGS #region-us
ShareGPT-formatted dataset from andrijdavid/roleplay-conversation. It's missing some data at the end because I was too lazy to continue; there was too much to modify, and I did it by hand/notepad++/regex lmao
[]
[ "TAGS\n#region-us \n" ]
18f433e612ba55f792ab7a764b22b635dd62c719
Dataset of 12600 answer generations from a [1.4b fine-tuned Pythia policy model](https://huggingface.co/tlc4418/pythia_1.4b_sft_policy), using the [AlpacaFarm dataset](https://huggingface.co/datasets/tatsu-lab/alpaca_farm) 'val' split, and labelled with the AlpacaFarm '[reward-model-human](https://huggingface.co/tatsu-lab/alpaca-farm-reward-model-human-wdiff)' to give 'gold' scores. Used during best-of-n inference in '[Reward Model Ensembles Mitigate Overoptimization](https://arxiv.org/abs/2310.02743)'
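With a gold score attached to every generation, best-of-n inference reduces to an argmax per prompt. A sketch, assuming hypothetical field names `prompt`, `answer`, and `gold_score` (check the dataset's actual columns before use):

```python
def best_of_n(generations):
    """Keep the highest-gold-scoring answer for each prompt.
    Field names here are assumptions, not the dataset's documented schema."""
    best = {}
    for gen in generations:
        key = gen["prompt"]
        if key not in best or gen["gold_score"] > best[key]["gold_score"]:
            best[key] = gen
    return best

samples = [
    {"prompt": "q1", "answer": "a", "gold_score": 0.2},
    {"prompt": "q1", "answer": "b", "gold_score": 0.9},
]
winner = best_of_n(samples)["q1"]
```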
tlc4418/12600_gold_labelled_generations
[ "arxiv:2310.02743", "region:us" ]
2024-02-08T21:49:08+00:00
{}
2024-02-13T01:06:50+00:00
[ "2310.02743" ]
[]
TAGS #arxiv-2310.02743 #region-us
Dataset of 12600 answer generations from a 1.4b fine-tuned Pythia policy model, using the AlpacaFarm dataset 'val' split, and labelled with the AlpacaFarm 'reward-model-human' to give 'gold' scores. Used during best-of-n inference in 'Reward Model Ensembles Mitigate Overoptimization'
[]
[ "TAGS\n#arxiv-2310.02743 #region-us \n" ]
c32251b8d07f056494b7f092c5f0abfdf83f1f4b
Preference dataset using labels from the [AlpacaFarm dataset](https://huggingface.co/datasets/tatsu-lab/alpaca_farm), generated answers from a [1.4b fine-tuned Pythia policy model](https://huggingface.co/tlc4418/pythia_1.4b_sft_policy), and labelled using the AlpacaFarm '[reward-model-human](https://github.com/tatsu-lab/alpaca_farm#downloading-pre-tuned-alpacafarm-models)' as a gold reward model. Used to train reward models in '[Reward Model Ensembles Mitigate Overoptimization](https://arxiv.org/abs/2310.02743)'
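Preference pairs like these are typically consumed by a pairwise Bradley-Terry objective when training reward models. A sketch of that loss, as a standard formulation rather than the paper's exact training code:

```python
import math

def preference_loss(reward_chosen, reward_rejected):
    """Pairwise (Bradley-Terry) objective for reward-model training:
    -log sigmoid(r_chosen - r_rejected), minimized when the chosen
    answer is scored well above the rejected one."""
    margin = reward_chosen - reward_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# A larger margin between chosen and rejected rewards gives a smaller loss.
loss_small_margin = preference_loss(1.0, 0.0)
loss_large_margin = preference_loss(2.0, 0.0)
```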
tlc4418/1.4b-policy_preference_data_gold_labelled
[ "arxiv:2310.02743", "region:us" ]
2024-02-08T22:04:51+00:00
{}
2024-02-12T23:54:39+00:00
[ "2310.02743" ]
[]
TAGS #arxiv-2310.02743 #region-us
Preference dataset using labels from the AlpacaFarm dataset, generated answers from a 1.4b fine-tuned Pythia policy model, and labelled using the AlpacaFarm 'reward-model-human' as a gold reward model. Used to train reward models in 'Reward Model Ensembles Mitigate Overoptimization'
[]
[ "TAGS\n#arxiv-2310.02743 #region-us \n" ]
bab53c76dad5bdeb85a493afd30fbe7c44985026
# Cantonese School Math 0.25M

This dataset is a Cantonese translation of the Simplified Chinese dataset [BelleGroup/school_math_0.25M](https://huggingface.co/datasets/BelleGroup/school_math_0.25M); please check the original dataset for more information.

This dataset was translated by [indiejoseph/bart-translation-zh-yue](https://huggingface.co/indiejoseph/bart-translation-zh-yue) and has not undergone any manual verification. The content may be inaccurate or misleading. Please keep this in mind when using this dataset.

## Sample

```
{
	"instruction": "題目:「「小華手入面有一個裝滿糖嘅袋,一共有12個,小明想知入面有幾粒糖,於是佢問小華:「你手入面嘅糖袋入面有幾粒糖?」」小華答:「有,而且多過10粒。」」請問小華手入面嘅糖袋入面最少有幾粒糖?",
	"input": "",
	"output": "由題目可知:小華手入面嘅糖袋入面有偶數個糖;\n又知道小華手入面嘅糖袋入面有多過10粒糖。\n因為糖分係偶數,多10粒,所以糖分最細一定係12。\n所以小華手入邊嘅糖袋最少有12粒糖。"
}
```

## Licensing Information

This dataset is provided under the same license as the original dataset: gpl-3.0

## Limitation and Usage Limits

Please check the original dataset for more information.
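Each record follows the instruction/input/output layout shown in the sample, so it can be rendered into a single training prompt with a template. A sketch; the Alpaca-style template here is an assumption, not something the dataset prescribes:

```python
def to_prompt(record):
    """Render one instruction/input/output record as a single training string.
    The Alpaca-style template below is an assumption, not part of the dataset."""
    if record["input"]:
        return (f"### Instruction:\n{record['instruction']}\n\n"
                f"### Input:\n{record['input']}\n\n"
                f"### Response:\n{record['output']}")
    return (f"### Instruction:\n{record['instruction']}\n\n"
            f"### Response:\n{record['output']}")
```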
hon9kon9ize/yue_school_math_0.25M
[ "license:gpl-3.0", "region:us" ]
2024-02-08T22:26:18+00:00
{"license": "gpl-3.0"}
2024-02-08T22:38:27+00:00
[]
[]
TAGS #license-gpl-3.0 #region-us
# Cantonese School Math 0.25M

This dataset is a Cantonese translation of the Simplified Chinese dataset BelleGroup/school_math_0.25M; please check the original dataset for more information.

This dataset was translated by indiejoseph/bart-translation-zh-yue and has not undergone any manual verification. The content may be inaccurate or misleading. Please keep this in mind when using this dataset.

## Sample

## Licensing Information

This dataset is provided under the same license as the original dataset: gpl-3.0

## Limitation and Usage Limits

Please check the original dataset for more information.
[ "# Cantonese School Math 0.25M\n\nThis dataset is Cantonese translation of the Simplified Chinese dataset BelleGroup/school_math_0.25M, please check the original dataset for more information.\n\nThis dataset is translated by indiejoseph/bart-translation-zh-yue and has not undergone any manual verification. The content may be inaccurate or misleading. please keep this in mind when using this dataset.", "## Sample", "## Licensing Information\n\nThis dataset is provided under the same license as the original dataset: gpl-3.0", "## Limitation and Usage Limits\n\nPlease check the original dataset for more information." ]
[ "TAGS\n#license-gpl-3.0 #region-us \n", "# Cantonese School Math 0.25M\n\nThis dataset is Cantonese translation of the Simplified Chinese dataset BelleGroup/school_math_0.25M, please check the original dataset for more information.\n\nThis dataset is translated by indiejoseph/bart-translation-zh-yue and has not undergone any manual verification. The content may be inaccurate or misleading. please keep this in mind when using this dataset.", "## Sample", "## Licensing Information\n\nThis dataset is provided under the same license as the original dataset: gpl-3.0", "## Limitation and Usage Limits\n\nPlease check the original dataset for more information." ]
559cbc40799a32926d7dd5f847cc1bf82112f88a
A dataset of Polish Wikipedia dumps from November 2023, combined with page-view statistics from the three previous months. The intention is to make it easier to filter out the least-visited articles, as they may provide lower-quality data.
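Acting on that intention amounts to filtering rows on the `pageviews` column (one of the dataset's fields, alongside `id`, `url`, `title`, and `text`). A sketch with toy rows standing in for the real data; the cutoff of 100 is an arbitrary assumption to tune per use case:

```python
# In practice the rows would come from the Hub, e.g.:
#   from datasets import load_dataset
#   ds = load_dataset("JonaszPotoniec/wikipedia-with-statistics-pl", split="train")
MIN_PAGEVIEWS = 100  # arbitrary cutoff

rows = [
    {"title": "Warszawa", "pageviews": 54321},
    {"title": "Obscure stub", "pageviews": 3},
]
kept = [row for row in rows if row["pageviews"] >= MIN_PAGEVIEWS]
```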
JonaszPotoniec/wikipedia-with-statistics-pl
[ "language:pl", "wikipedia", "region:us" ]
2024-02-08T23:21:14+00:00
{"language": ["pl"], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "pageviews", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 2962850577, "num_examples": 1587721}], "download_size": 1812521426, "dataset_size": 2962850577}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "tags": ["wikipedia"]}
2024-02-11T20:31:02+00:00
[]
[ "pl" ]
TAGS #language-Polish #wikipedia #region-us
A dataset of Polish Wikipedia dumps from November 2023, combined with page-view statistics from the three previous months. The intention is to make it easier to filter out the least-visited articles, as they may provide lower-quality data.
[]
[ "TAGS\n#language-Polish #wikipedia #region-us \n" ]
03c62d1739f284e62b5c0e850f529be930b0fe36
# Dataset Card for "iv4-msg" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
sam-mosaic/iv4-msg
[ "region:us" ]
2024-02-08T23:59:07+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "messages", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}, {"name": "source", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2497963572.0, "num_examples": 433525}, {"name": "test", "num_bytes": 345259991.0, "num_examples": 53935}], "download_size": 1399738698, "dataset_size": 2843223563.0}}
2024-02-15T01:20:22+00:00
[]
[]
TAGS #region-us
# Dataset Card for "iv4-msg" More Information needed
[ "# Dataset Card for \"iv4-msg\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"iv4-msg\"\n\nMore Information needed" ]
fb5061bcc8bd93d4b2eecb7c2d6771f32f9eb81e
For research usage only. The original data comes from http://sbert.net/datasets/simplewiki-2020-11-01.jsonl.gz. We use the `nq-distilbert-base-v1` model to encode all the data into PyTorch tensors, and `normalize` the embeddings using `sentence_transformers.util.normalize_embeddings`.

## How to use

See notebook [Wikipedia Q&A Retrieval-Semantic Search](https://www.kaggle.com/code/aisuko/wikipedia-q-a-retrieval-semantic-search)

## Installing the package

```python
!pip install sentence-transformers==2.3.1
```

## The converting process

```python
# the whole process takes 1287.0s GPU P100
import os
import json
import gzip

from sentence_transformers.util import http_get
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import normalize_embeddings

os.environ['DATASET_NAME']='simplewiki-2020-11-01.jsonl.gz'
os.environ['DATASET_URL']='http://sbert.net/datasets/simplewiki-2020-11-01.jsonl.gz'

http_get(os.getenv('DATASET_URL'), os.getenv('DATASET_NAME'))

passages=[]
with gzip.open(os.getenv('DATASET_NAME'), 'rt', encoding='utf-8') as fIn:
    for line in fIn:
        data=json.loads(line.strip())

        # add all paragraphs
        # passages.extend(data['paragraphs'])

        # only add the first paragraph
        # passages.append(data['paragraphs'][0])

        for paragraph in data['paragraphs']:
            # We encode the passages as [title, text]
            passages.append([data['title'], paragraph])

print('Passages:', len(passages))

bi_encoder=SentenceTransformer('nq-distilbert-base-v1')
bi_encoder.max_seq_length=256
bi_encoder.to('cuda')

corpus_embeddings=bi_encoder.encode(passages, convert_to_tensor=True, show_progress_bar=True).to('cuda')
corpus_embeddings=normalize_embeddings(corpus_embeddings)
len(corpus_embeddings)

import pandas as pd

embedding_data=pd.DataFrame(corpus_embeddings.cpu())
embedding_data.to_csv('simple_english_wikipedia_2020_11_01.csv', index=False)
```
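Because the stored embeddings are normalized, cosine similarity over them reduces to a plain dot product at query time. A minimal pure-Python sketch of retrieval, with toy low-dimensional unit vectors standing in for the real CSV rows:

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Toy unit vectors standing in for rows of the exported CSV.
corpus = [[1.0, 0.0], [0.6, 0.8], [0.0, 1.0]]
query = [0.6, 0.8]  # a normalized query embedding

# On normalized vectors, cosine similarity is just the dot product.
scores = [dot(query, passage) for passage in corpus]
best = max(range(len(corpus)), key=scores.__getitem__)
```

In practice the same ranking can be done with the library's batched utilities; the point is that normalization makes dot-product scoring equivalent to cosine similarity.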
aisuko/simple_english_wikipedia
[ "language:en", "license:mit", "region:us" ]
2024-02-09T01:42:16+00:00
{"language": ["en"], "license": "mit"}
2024-02-09T09:50:34+00:00
[]
[ "en" ]
TAGS #language-English #license-mit #region-us
For research usage only. The original data comes from URL. We use the 'nq-distilbert-base-v1' model to encode all the data into PyTorch tensors, and 'normalize' the embeddings using 'sentence_transformers.util.normalize_embeddings'.

## How to use

See notebook Wikipedia Q&A Retrieval-Semantic Search

## Installing the package

## The converting process
[ "## How to use\n\nSee notebook Wikipedia Q&A Retrieval-Semantic Search", "## Installing the package", "## The converting process" ]
[ "TAGS\n#language-English #license-mit #region-us \n", "## How to use\n\nSee notebook Wikipedia Q&A Retrieval-Semantic Search", "## Installing the package", "## The converting process" ]
581febcc96d48722a4ad2d489905670cbf1abc4f
# Dataset Card for "VNTL-v3.1-1k-q" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
lmg-anon/VNTL-v3.1-1k-q
[ "region:us" ]
2024-02-09T03:01:23+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "val", "path": "data/val-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "ignore_loss", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 30490676, "num_examples": 10145}, {"name": "val", "num_bytes": 3800301, "num_examples": 1252}], "download_size": 15146635, "dataset_size": 34290977}}
2024-02-09T03:01:28+00:00
[]
[]
TAGS #region-us
# Dataset Card for "VNTL-v3.1-1k-q" More Information needed
[ "# Dataset Card for \"VNTL-v3.1-1k-q\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"VNTL-v3.1-1k-q\"\n\nMore Information needed" ]
0fb26962126f3aadf43591546a14195cb5ad4bd5
# Dataset Card for "safety-utcustom-TRAIN-30" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
sam1120/safety-utcustom-TRAIN-30
[ "region:us" ]
2024-02-09T04:19:36+00:00
{"dataset_info": {"features": [{"name": "name", "dtype": "string"}, {"name": "pixel_values", "dtype": "image"}, {"name": "labels", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 82487399.0, "num_examples": 29}], "download_size": 25202645, "dataset_size": 82487399.0}}
2024-02-09T04:20:46+00:00
[]
[]
TAGS #region-us
# Dataset Card for "safety-utcustom-TRAIN-30" More Information needed
[ "# Dataset Card for \"safety-utcustom-TRAIN-30\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"safety-utcustom-TRAIN-30\"\n\nMore Information needed" ]
b13b3a5eba598406bdba7108f58722449c5afa8f
# Dataset Card for "cryptonite_filtered_testset" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
boda/cryptonite_filtered_testset
[ "region:us" ]
2024-02-09T05:21:13+00:00
{"dataset_info": {"features": [{"name": "publisher", "dtype": "string"}, {"name": "date", "dtype": "timestamp[ns]"}, {"name": "author", "dtype": "string"}, {"name": "orientation", "dtype": "string"}, {"name": "clue", "dtype": "string"}, {"name": "labels", "dtype": "string"}, {"name": "enumeration", "dtype": "string"}, {"name": "quick", "dtype": "bool"}, {"name": "sub_publisher", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 1612694.9545819475, "num_examples": 14589}], "download_size": 787300, "dataset_size": 1612694.9545819475}, "configs": [{"config_name": "default", "data_files": [{"split": "test", "path": "data/test-*"}]}]}
2024-02-09T05:21:19+00:00
[]
[]
TAGS #region-us
# Dataset Card for "cryptonite_filtered_testset" More Information needed
[ "# Dataset Card for \"cryptonite_filtered_testset\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"cryptonite_filtered_testset\"\n\nMore Information needed" ]
c1d42489734836e0408aece75eb1e8254935047c
# Dataset Card for Dataset Name <!-- Provide a quick summary of the dataset. --> This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1). ## Dataset Details ### Dataset Description <!-- Provide a longer summary of what this dataset is. --> - **Curated by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] ### Dataset Sources [optional] <!-- Provide the basic links for the dataset. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the dataset is intended to be used. --> ### Direct Use <!-- This section describes suitable use cases for the dataset. --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. --> [More Information Needed] ## Dataset Structure <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> [More Information Needed] ## Dataset Creation ### Curation Rationale <!-- Motivation for the creation of this dataset. --> [More Information Needed] ### Source Data <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). --> #### Data Collection and Processing <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. 
--> [More Information Needed] #### Who are the source data producers? <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. --> [More Information Needed] ### Annotations [optional] <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. --> #### Annotation process <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. --> [More Information Needed] #### Who are the annotators? <!-- This section describes the people or systems who created the annotations. --> [More Information Needed] #### Personal and Sensitive Information <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. ## Citation [optional] <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. 
--> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Dataset Card Authors [optional] [More Information Needed] ## Dataset Card Contact [More Information Needed]
avinier2/docker-llm-conversations-v1
[ "region:us" ]
2024-02-09T07:41:47+00:00
{}
2024-02-09T07:45:08+00:00
[]
[]
TAGS #region-us
# Dataset Card for Dataset Name This dataset card aims to be a base template for new datasets. It has been generated using this raw template. ## Dataset Details ### Dataset Description - Curated by: - Funded by [optional]: - Shared by [optional]: - Language(s) (NLP): - License: ### Dataset Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Out-of-Scope Use ## Dataset Structure ## Dataset Creation ### Curation Rationale ### Source Data #### Data Collection and Processing #### Who are the source data producers? ### Annotations [optional] #### Annotation process #### Who are the annotators? #### Personal and Sensitive Information ## Bias, Risks, and Limitations ### Recommendations Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Dataset Card Authors [optional] ## Dataset Card Contact
[ "# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
272b613d89dae043f8740d082fb906a5cbe5950f
# The Serbian Parliamentary Spoken Dataset ParlaSpeech-RS 1.0

http://hdl.handle.net/11356/1834

The ParlaSpeech-RS dataset is built from the transcripts of parliamentary proceedings available in the Serbian part of the ParlaMint corpus, and the parliamentary recordings available from the Serbian Parliament's YouTube channel. The corpus consists of audio segments that correspond to specific sentences in the transcripts. The transcript contains word-level alignments to the recordings, each instance consisting of character and millisecond start and end offsets, allowing for simple further segmentation of long sentences into shorter segments for ASR and other memory-sensitive applications. Sequences longer than 30 seconds have already been removed from this dataset, which should allow for simple usage on most modern GPUs. Each segment has an identifier reference to the ParlaMint 4.0 corpus (http://hdl.handle.net/11356/1859) via the utterance ID and character offsets.

While in the original dataset all the speaker information from the ParlaMint corpus is available via the `speaker_info` attribute, in the HuggingFace version only a subset of metadata is available, namely: the date, the name of the speaker, their gender, year of birth, party affiliation at that point in time, status of the party at that point in time (coalition or opposition), and party orientation (left, right, centre etc.).

Different from the original dataset, this version also has a `text_normalised` attribute, which contains the text with parliamentary comments (`[[Applause]]` and similar) removed.

Also, different from the other ParlaSpeech corpora on HuggingFace, this dataset has two additional text columns, `text_cyrillic` and `text_cyrillic_normalised`, with Cyrillic transliteration of the corresponding columns, for simpler downstream usage, given that Serbian is a digraphic language. 
If you use the dataset, please cite the following paper: ``` @inproceedings{ljubesic-etal-2022-parlaspeech, title = "{P}arla{S}peech-{HR} - a Freely Available {ASR} Dataset for {C}roatian Bootstrapped from the {P}arla{M}int Corpus", author = "Ljube{\v{s}}i{\'c}, Nikola and Kor{\v{z}}inek, Danijel and Rupnik, Peter and Jazbec, Ivo-Pavao", editor = "Fi{\v{s}}er, Darja and Eskevich, Maria and Lenardi{\v{c}}, Jakob and de Jong, Franciska", booktitle = "Proceedings of the Workshop ParlaCLARIN III within the 13th Language Resources and Evaluation Conference", month = jun, year = "2022", address = "Marseille, France", publisher = "European Language Resources Association", url = "https://aclanthology.org/2022.parlaclarin-1.16", pages = "111--116", } ```
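Since each entry in the `words` field carries `time_s`/`time_e` (and `char_s`/`char_e`) offsets, long sentences can be re-cut into shorter windows with a few lines of bookkeeping. A minimal illustrative splitter over toy alignments — the field names follow the dataset schema above, but the timings and the 2.5-second cap here are made up for the example:

```python
def split_words(words, max_sec=10.0):
    """Group aligned words into consecutive chunks no longer than max_sec.

    Each word is a dict with 'time_s'/'time_e' (seconds) and
    'char_s'/'char_e' (character offsets), as in the ParlaSpeech `words` field.
    """
    chunks, current = [], []
    for w in words:
        # Start a new chunk once adding this word would exceed the cap.
        if current and w["time_e"] - current[0]["time_s"] > max_sec:
            chunks.append(current)
            current = []
        current.append(w)
    if current:
        chunks.append(current)
    return chunks

# Toy alignment: four roughly one-second words.
words = [
    {"char_s": 0, "char_e": 4, "time_s": 0.0, "time_e": 1.0},
    {"char_s": 5, "char_e": 9, "time_s": 1.2, "time_e": 2.2},
    {"char_s": 10, "char_e": 14, "time_s": 2.4, "time_e": 3.4},
    {"char_s": 15, "char_e": 19, "time_s": 3.6, "time_e": 4.6},
]
chunks = split_words(words, max_sec=2.5)
print([len(c) for c in chunks])  # → [2, 2]
```

The character offsets of each chunk can then be used to slice the sentence text, and the time offsets to slice the audio array.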
classla/ParlaSpeech-RS
[ "region:us" ]
2024-02-09T08:05:17+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "text", "dtype": "string"}, {"name": "text_cyrillic", "dtype": "string"}, {"name": "text_normalised", "dtype": "string"}, {"name": "text_cyrillic_normalised", "dtype": "string"}, {"name": "words", "list": [{"name": "char_e", "dtype": "int64"}, {"name": "char_s", "dtype": "int64"}, {"name": "time_e", "dtype": "float64"}, {"name": "time_s", "dtype": "float64"}]}, {"name": "audio_length", "dtype": "float64"}, {"name": "date", "dtype": "string"}, {"name": "speaker_name", "dtype": "string"}, {"name": "speaker_gender", "dtype": "string"}, {"name": "speaker_birth", "dtype": "string"}, {"name": "speaker_party", "dtype": "string"}, {"name": "party_orientation", "dtype": "string"}, {"name": "party_status", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 68987025245.82, "num_examples": 277764}], "download_size": 57663350605, "dataset_size": 68987025245.82}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2024-02-09T10:12:30+00:00
[]
[]
TAGS #region-us
# The Serbian Parliamentary Spoken Dataset ParlaSpeech-RS 1.0 URL The ParlaSpeech-RS dataset is built from the transcripts of parliamentary proceedings available in the Serbian part of the ParlaMint corpus, and the parliamentary recordings available from the Serbian Parliament's YouTube channel. The corpus consists of audio segments that correspond to specific sentences in the transcripts. The transcript contains word-level alignments to the recordings, each instance consisting of character and millisecond start and end offsets, allowing for simple further segmentation of long sentences into shorter segments for ASR and other memory-sensitive applications. Sequences longer than 30 seconds have already been removed from this dataset, which should allow for a simple usage on most modern GPUs. Each segment has an identifier reference to the ParlaMint 4.0 corpus (URL via the utterance ID and character offsets. While in the original dataset all the speaker information from the ParlaMint corpus is available via the 'speaker_info' attribute, in the HuggingFace version only a subset of metadata is available, namely: the date, the name of the speaker, their gender, year of birth, party affiliation at that point in time, status of the party at that point in time (coalition or opposition), and party orientation (left, right, centre etc.). Different to the original dataset, this version has also a 'text_normalised' attribute, which contains the text with parliamentary comments ('[[Applause]]' and similar) removed. Also, different to the other ParlaSpeech corpora on HuggingFace, this dataset has two additional text columns, 'text_cyrillic' and 'text_cyrillic_normalised', with Cyrillic transliteration of the corresponding columns, for simpler downstream usage, given that Serbian is a digraphic language. If you use the dataset, please cite the following paper:
[ "# The Serbian Parliamentary Spoken Dataset ParlaSpeech-RS 1.0\n\nURL\n\nThe ParlaSpeech-RS dataset is built from the transcripts of parliamentary proceedings available in the Serbian part of the ParlaMint corpus, and the parliamentary recordings available from the Serbian Parliament's YouTube channel.\n\nThe corpus consists of audio segments that correspond to specific sentences in the transcripts. The transcript contains word-level alignments to the recordings, each instance consisting of character and millisecond start and end offsets, allowing for simple further segmentation of long sentences into shorter segments for ASR and other memory-sensitive applications. Sequences longer than 30 seconds have already been removed from this dataset, which should allow for a simple usage on most modern GPUs.\n\nEach segment has an identifier reference to the ParlaMint 4.0 corpus (URL via the utterance ID and character offsets.\n\nWhile in the original dataset all the speaker information from the ParlaMint corpus is available via the 'speaker_info' attribute, in the HuggingFace version only a subset of metadata is available, namely: the date, the name of the speaker, their gender, year of birth, party affiliation at that point in time, status of the party at that point in time (coalition or opposition), and party orientation (left, right, centre etc.).\n\nDifferent to the original dataset, this version has also a 'text_normalised' attribute, which contains the text with parliamentary comments ('[[Applause]]' and similar) removed. Also, different to the other ParlaSpeech corpora on HuggingFace, this dataset has two additional text columns, 'text_cyrillic' and 'text_cyrillic_normalised', with Cyrillic transliteration of the corresponding columns, for simpler downstream usage, given that Serbian is a digraphic language.\n\nIf you use the dataset, please cite the following paper:" ]
[ "TAGS\n#region-us \n", "# The Serbian Parliamentary Spoken Dataset ParlaSpeech-RS 1.0\n\nURL\n\nThe ParlaSpeech-RS dataset is built from the transcripts of parliamentary proceedings available in the Serbian part of the ParlaMint corpus, and the parliamentary recordings available from the Serbian Parliament's YouTube channel.\n\nThe corpus consists of audio segments that correspond to specific sentences in the transcripts. The transcript contains word-level alignments to the recordings, each instance consisting of character and millisecond start and end offsets, allowing for simple further segmentation of long sentences into shorter segments for ASR and other memory-sensitive applications. Sequences longer than 30 seconds have already been removed from this dataset, which should allow for a simple usage on most modern GPUs.\n\nEach segment has an identifier reference to the ParlaMint 4.0 corpus (URL via the utterance ID and character offsets.\n\nWhile in the original dataset all the speaker information from the ParlaMint corpus is available via the 'speaker_info' attribute, in the HuggingFace version only a subset of metadata is available, namely: the date, the name of the speaker, their gender, year of birth, party affiliation at that point in time, status of the party at that point in time (coalition or opposition), and party orientation (left, right, centre etc.).\n\nDifferent to the original dataset, this version has also a 'text_normalised' attribute, which contains the text with parliamentary comments ('[[Applause]]' and similar) removed. Also, different to the other ParlaSpeech corpora on HuggingFace, this dataset has two additional text columns, 'text_cyrillic' and 'text_cyrillic_normalised', with Cyrillic transliteration of the corresponding columns, for simpler downstream usage, given that Serbian is a digraphic language.\n\nIf you use the dataset, please cite the following paper:" ]
7e13243ec39ad950583174f181b752e7af721e3c
Dados semissintéticos gerados por meio da biblioteca RAG contendo conhecimento de agricultura regenerativa de especialistas do domínio, conectados à API ChatGPT4. Um conjunto de dados que detalha soluções agrícolas regenerativas para problemas agrícolas comuns, em português, com consciência cultural em relação à Floresta Amazônica e às comunidades agrícolas brasileiras marginalizadas. Semi-synthetic data generated via RAG library containing regenerative farming knowledge from domain experts, connected to ChatGPT4 API. A dataset detailing regenerative farming solutions to common farming problems, in Portuguese, with cultural awareness as to the Amazon Rainforest and marginalized Brazilian farming communities. 1st ever Portuguese-language Agriculture/Farming dataset on Hugging Face hub! Created and curated in collaboration between Copyleft Cultivars Nonprofit and Caleb DeLeeuw (Solshine.)
Solshine/Agricultura_regenerativa_Portugues_Portuguese
[ "size_categories:n<1K", "language:pt", "license:mit", "biology", "climate", "region:us" ]
2024-02-09T08:24:04+00:00
{"language": ["pt"], "license": "mit", "size_categories": ["n<1K"], "tags": ["biology", "climate"]}
2024-02-09T19:29:54+00:00
[]
[ "pt" ]
TAGS #size_categories-n<1K #language-Portuguese #license-mit #biology #climate #region-us
Dados semissintéticos gerados por meio da biblioteca RAG contendo conhecimento de agricultura regenerativa de especialistas do domínio, conectados à API ChatGPT4. Um conjunto de dados que detalha soluções agrícolas regenerativas para problemas agrícolas comuns, em português, com consciência cultural em relação à Floresta Amazônica e às comunidades agrícolas brasileiras marginalizadas. Semi-synthetic data generated via RAG library containing regenerative farming knowledge from domain experts, connected to ChatGPT4 API. A dataset detailing regenerative farming solutions to common farming problems, in Portuguese, with cultural awareness as to the Amazon Rainforest and marginalized Brazilian farming communities. 1st ever Portuguese-language Agriculture/Farming dataset on Hugging Face hub! Created and curated in collaboration between Copyleft Cultivars Nonprofit and Caleb DeLeeuw (Solshine.)
[]
[ "TAGS\n#size_categories-n<1K #language-Portuguese #license-mit #biology #climate #region-us \n" ]
032a72dd39df0a2d2a234e481d553e03c8b84ccd
# Arxiv Math This is a cleansed version of [ArtifactAI/arxiv-math-instruct-50k](https://huggingface.co/datasets/ArtifactAI/arxiv-math-instruct-50k) ## Usage ```python from datasets import load_dataset dataset = load_dataset("Sharathhebbar24/arxiv-math-instruct-50k", split="train") ```
Sharathhebbar24/arxiv-math-instruct-50k
[ "task_categories:text-generation", "size_categories:10K<n<100K", "language:en", "license:apache-2.0", "region:us" ]
2024-02-09T08:30:54+00:00
{"language": ["en"], "license": "apache-2.0", "size_categories": ["10K<n<100K"], "task_categories": ["text-generation"], "dataset_info": {"features": [{"name": "prompt", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 39374567, "num_examples": 50488}], "download_size": 18735701, "dataset_size": 39374567}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2024-02-09T11:24:26+00:00
[]
[ "en" ]
TAGS #task_categories-text-generation #size_categories-10K<n<100K #language-English #license-apache-2.0 #region-us
# Arxiv Math This is a cleansed version of ArtifactAI/arxiv-math-instruct-50k ## Usage
[ "# Arxiv Math\n\nThis is a cleansed version of ArtifactAI/arxiv-math-instruct-50k", "## Usage" ]
[ "TAGS\n#task_categories-text-generation #size_categories-10K<n<100K #language-English #license-apache-2.0 #region-us \n", "# Arxiv Math\n\nThis is a cleansed version of ArtifactAI/arxiv-math-instruct-50k", "## Usage" ]
ca5595b8189956ca62af1f6f521d984e355fb77b
# Workout Motivation Entity Dataset

### Dataset Summary

This is a WIP synthetic dataset generated by AI for the Workout Domain.

- Entity Types: `workout`, `duration`, `frequency`, `number`

## Dataset Structure

### Data Instances

An example of `train` looks as follows.

```
{'ner_tags': [0, 0, 0, 0, 0, 0, 0, 0, 6, 1, 4, 3, 0, 6, 7], 'id': 0, 'tokens': ['To', 'improve', 'upper', 'body', 'strength', ',', "I'll", 'include', '90', 'Pushup', 'twice', 'daily', 'for', '30', 'minutes']}
```

### Label ID

The ID-to-label mapping is as follows.

```python
{
    '0': 'O',
    '1': 'B-Workout',
    '2': 'I-Workout',
    '3': 'I-Frequency',
    '4': 'B-Frequency',
    '5': 'I-Duration',
    '6': 'B-Number',
    '7': 'B-Duration'
}
```

### Data Splits

| name |train|validation|test|
|---------|----:|---------:|---:|
|wmout | 620| 78| 77|
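With that mapping, the integer `ner_tags` decode straight back to entity labels. A small illustrative sketch over a fragment of the example instance (pure Python, no download needed):

```python
# Integer id -> label string, as listed in the card.
id2label = {
    0: "O", 1: "B-Workout", 2: "I-Workout", 3: "I-Frequency",
    4: "B-Frequency", 5: "I-Duration", 6: "B-Number", 7: "B-Duration",
}

def decode(tokens, ner_tags):
    """Pair each token with its string entity label."""
    return [(tok, id2label[tag]) for tok, tag in zip(tokens, ner_tags)]

# Fragment of the `train` example shown above.
example = {
    "tokens": ["include", "90", "Pushup", "twice", "daily"],
    "ner_tags": [0, 6, 1, 4, 3],
}
pairs = decode(example["tokens"], example["ner_tags"])
print(pairs)
```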
alfarruggia/wmout
[ "task_categories:token-classification", "size_categories:n<1K", "language:en", "license:mit", "workout", "motivation", "region:us" ]
2024-02-09T08:33:46+00:00
{"language": ["en"], "license": "mit", "size_categories": ["n<1K"], "task_categories": ["token-classification"], "pretty_name": "Workout Motivation Entity Dataset", "tags": ["workout", "motivation"], "configs": [{"config_name": "mwout"}]}
2024-02-09T22:00:08+00:00
[]
[ "en" ]
TAGS #task_categories-token-classification #size_categories-n<1K #language-English #license-mit #workout #motivation #region-us
Workout Motivation Entity Dataset ================================= ### Dataset Summary This is a WIP synthetic dataset generated by AI for the Workout Domain. * Entity Types: 'workout', 'duration', 'frequency', 'number' Dataset Structure ----------------- ### Data Instances An example of 'train' looks as follows. ### Label ID The label2id dictionary can be found at here. ### Data Splits
[ "### Dataset Summary\n\n\nThis is a WIP synthetic dataset generated by AI for the Workout Domain.\n\n\n* Entity Types: 'workout', 'duration', 'frequency', 'number'\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nAn example of 'train' looks as follows.", "### Label ID\n\n\nThe label2id dictionary can be found at here.", "### Data Splits" ]
[ "TAGS\n#task_categories-token-classification #size_categories-n<1K #language-English #license-mit #workout #motivation #region-us \n", "### Dataset Summary\n\n\nThis is a WIP synthetic dataset generated by AI for the Workout Domain.\n\n\n* Entity Types: 'workout', 'duration', 'frequency', 'number'\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nAn example of 'train' looks as follows.", "### Label ID\n\n\nThe label2id dictionary can be found at here.", "### Data Splits" ]
30a3e00abf2fdc907f5a89f34a19725e77a19973
For research use only.

## The conversion process

```python
# Setting the env
import os

os.environ['DATASET_URL']='http://sbert.net/datasets/simplewiki-2020-11-01.jsonl.gz'
# DATASET_NAME is read below but was never set in the original snippet
os.environ['DATASET_NAME']='simplewiki-2020-11-01.jsonl.gz'
os.environ['MODEL_NAME']='multi-qa-MiniLM-L6-cos-v1'

# Loading the dataset
import json
import gzip
from sentence_transformers.util import http_get

http_get(os.getenv('DATASET_URL'), os.getenv('DATASET_NAME'))

passages=[]
with gzip.open(os.getenv('DATASET_NAME'), 'rt', encoding='utf8') as fIn:
    for line in fIn:
        data=json.loads(line.strip())

        # add all paragraphs
        # passages.extend(data['paragraphs'])

        # only add the first paragraph
        passages.append(data['paragraphs'][0])

        # for paragraph in data['paragraphs']:
        #     # We encode the passages as [title, text]
        #     passages.append([data['title'], paragraph])

len(passages)

# Loading the model
from sentence_transformers import SentenceTransformer
bi_encoder=SentenceTransformer(os.getenv('MODEL_NAME'))
bi_encoder.max_seq_length=256
bi_encoder.to('cuda')
bi_encoder

# normalizing the embeddings
from sentence_transformers.util import normalize_embeddings
corpus_embeddings=bi_encoder.encode(passages, convert_to_tensor=True, show_progress_bar=True).to('cuda')
corpus_embeddings=normalize_embeddings(corpus_embeddings)
len(corpus_embeddings)

# save to the csv file
import pandas as pd
embeddings_data=pd.DataFrame(corpus_embeddings.cpu())
embeddings_data.to_csv('simple_english_wikipedia.csv', index=False)
```
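Because the stored vectors are L2-normalised before saving, cosine similarity over the saved matrix reduces to a plain dot product at query time. A minimal retrieval sketch with toy 2-D unit vectors standing in for the real file (which would instead be loaded with `pandas.read_csv`, with the query encoded by the same bi-encoder):

```python
import numpy as np

def top_k(query, corpus, k=2):
    """Indices of the k corpus rows most similar to the query.

    Both the query and the corpus rows are assumed L2-normalised,
    so the dot product equals cosine similarity.
    """
    scores = corpus @ query
    return np.argsort(-scores)[:k]

# Toy stand-in for the saved embedding matrix: three unit vectors in 2-D.
corpus = np.array([[1.0, 0.0],
                   [0.0, 1.0],
                   [0.6, 0.8]])
query = np.array([0.6, 0.8])  # already unit-length
print(top_k(query, corpus))  # row 2 is an exact match
```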
aisuko/simple_english_wikipedia_p0
[ "language:en", "license:apache-2.0", "region:us" ]
2024-02-09T09:29:42+00:00
{"language": ["en"], "license": "apache-2.0"}
2024-02-10T00:45:18+00:00
[]
[ "en" ]
TAGS #language-English #license-apache-2.0 #region-us
Only for the researching usage. ## The converting process below.
[ "## The converting process below." ]
[ "TAGS\n#language-English #license-apache-2.0 #region-us \n", "## The converting process below." ]
69d23a9df7fef79b755191733984ea527b3cd3a3
# Dataset Card for prompt-collective This dataset has been created with [Argilla](https://docs.argilla.io). As shown in the sections below, this dataset can be loaded into Argilla as explained in [Load with Argilla](#load-with-argilla), or used directly with the `datasets` library in [Load with `datasets`](#load-with-datasets). ## Dataset Description - **Homepage:** https://argilla.io - **Repository:** https://github.com/argilla-io/argilla - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary This dataset contains: * A dataset configuration file conforming to the Argilla dataset format named `argilla.yaml`. This configuration file will be used to configure the dataset when using the `FeedbackDataset.from_huggingface` method in Argilla. * Dataset records in a format compatible with HuggingFace `datasets`. These records will be loaded automatically when using `FeedbackDataset.from_huggingface` and can be loaded independently using the `datasets` library via `load_dataset`. * The [annotation guidelines](#annotation-guidelines) that have been used for building and curating the dataset, if they've been defined in Argilla. ### Load with Argilla To load with Argilla, you'll just need to install Argilla as `pip install argilla --upgrade` and then use the following code: ```python import argilla as rg ds = rg.FeedbackDataset.from_huggingface("argilla/prompt-collective") ``` ### Load with `datasets` To load this dataset with `datasets`, you'll just need to install `datasets` as `pip install datasets --upgrade` and then use the following code: ```python from datasets import load_dataset ds = load_dataset("argilla/prompt-collective") ``` ### Supported Tasks and Leaderboards This dataset can contain [multiple fields, questions and responses](https://docs.argilla.io/en/latest/conceptual_guides/data_model.html#feedback-dataset) so it can be used for different NLP tasks, depending on the configuration. 
The dataset structure is described in the [Dataset Structure section](#dataset-structure). There are no leaderboards associated with this dataset. ### Languages [More Information Needed] ## Dataset Structure ### Data in Argilla The dataset is created in Argilla with: **fields**, **questions**, **suggestions**, **metadata**, **vectors**, and **guidelines**. The **fields** are the dataset records themselves, for the moment just text fields are supported. These are the ones that will be used to provide responses to the questions. | Field Name | Title | Type | Required | Markdown | | ---------- | ----- | ---- | -------- | -------- | | prompt | Prompt | text | True | True | The **questions** are the questions that will be asked to the annotators. They can be of different types, such as rating, text, label_selection, multi_label_selection, or ranking. | Question Name | Title | Type | Required | Description | Values/Labels | | ------------- | ----- | ---- | -------- | ----------- | ------------- | | quality | Rate the quality of the prompt | label_selection | True | N/A | ['0', '1', '2', '3', '4'] | The **suggestions** are human or machine generated recommendations for each question to assist the annotator during the annotation process, so those are always linked to the existing questions, and named appending "-suggestion" and "-suggestion-metadata" to those, containing the value/s of the suggestion and its metadata, respectively. So on, the possible values are the same as in the table above, but the column name is appended with "-suggestion" and the metadata is appended with "-suggestion-metadata". The **metadata** is a dictionary that can be used to provide additional information about the dataset record. This can be useful to provide additional context to the annotators, or to provide additional information about the dataset record itself. 
For example, you can use this to provide a link to the original source of the dataset record, or to provide additional information about the dataset record itself, such as the author, the date, or the source. The metadata is always optional, and can be potentially linked to the `metadata_properties` defined in the dataset configuration file in `argilla.yaml`. | Metadata Name | Title | Type | Values | Visible for Annotators | | ------------- | ----- | ---- | ------ | ---------------------- | The **guidelines**, are optional as well, and are just a plain string that can be used to provide instructions to the annotators. Find those in the [annotation guidelines](#annotation-guidelines) section. ### Data Instances An example of a dataset instance in Argilla looks as follows: ```json { "external_id": null, "fields": { "prompt": "alter this api that gets a request like: {\"0\",\"5\",\"2\",\"3\",\"5\",\"5\",\"1\",\"4\",\"4\",\"9\"}\nand then stores it in a variable called answers like: {\"0523551449\"}\nso continue from this code:\napp.get(\"/:user/answers\", (req, res) =\u003e {\n const answers =" }, "metadata": { "evolved_from": null, "kind": "human", "source": "ewof/sharegpt-instruct-unfiltered-deduped" }, "responses": [ { "status": "submitted", "user_id": "99a4bc7d-3e95-4c18-a8f1-26043abf98d5", "values": { "quality": { "value": "4" } } }, { "status": "submitted", "user_id": "0583afc2-2cd8-43b6-a61b-d73dbf2ad9d9", "values": { "quality": { "value": "2" } } } ], "suggestions": [], "vectors": {} } ``` While the same record in HuggingFace `datasets` looks as follows: ```json { "external_id": null, "metadata": "{\"source\": \"ewof/sharegpt-instruct-unfiltered-deduped\", \"kind\": \"human\", \"evolved_from\": null}", "prompt": "alter this api that gets a request like: {\"0\",\"5\",\"2\",\"3\",\"5\",\"5\",\"1\",\"4\",\"4\",\"9\"}\nand then stores it in a variable called answers like: {\"0523551449\"}\nso continue from this code:\napp.get(\"/:user/answers\", (req, res) =\u003e 
{\n const answers =", "quality": [ { "status": "submitted", "user_id": "99a4bc7d-3e95-4c18-a8f1-26043abf98d5", "value": "4" }, { "status": "submitted", "user_id": "0583afc2-2cd8-43b6-a61b-d73dbf2ad9d9", "value": "2" } ], "quality-suggestion": null, "quality-suggestion-metadata": { "agent": null, "score": null, "type": null } } ``` ### Data Fields Among the dataset fields, we differentiate between the following: * **Fields:** These are the dataset records themselves, for the moment just text fields are supported. These are the ones that will be used to provide responses to the questions. * **prompt** is of type `text`. * **Questions:** These are the questions that will be asked to the annotators. They can be of different types, such as `RatingQuestion`, `TextQuestion`, `LabelQuestion`, `MultiLabelQuestion`, and `RankingQuestion`. * **quality** is of type `label_selection` with the following allowed values ['0', '1', '2', '3', '4']. * **Suggestions:** As of Argilla 1.13.0, the suggestions have been included to provide the annotators with suggestions to ease or assist during the annotation process. Suggestions are linked to the existing questions, are always optional, and contain not just the suggestion itself, but also the metadata linked to it, if applicable. * (optional) **quality-suggestion** is of type `label_selection` with the following allowed values ['0', '1', '2', '3', '4']. Additionally, we also have two more fields that are optional and are the following: * **metadata:** This is an optional field that can be used to provide additional information about the dataset record. This can be useful to provide additional context to the annotators, or to provide additional information about the dataset record itself. For example, you can use this to provide a link to the original source of the dataset record, or to provide additional information about the dataset record itself, such as the author, the date, or the source. 
The metadata is always optional, and can be potentially linked to the `metadata_properties` defined in the dataset configuration file in `argilla.yaml`.

* **external_id:** This is an optional field that can be used to provide an external ID for the dataset record. This can be useful if you want to link the dataset record to an external resource, such as a database or a file.

### Data Splits

The dataset contains a single split, which is `train`.

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation guidelines

# Task

We are collaboratively creating a database of prompts in English. Our aim is to identify effective prompts and understand the interaction between AI-generated and human-generated prompts. The focus is on functionality and precision.

## Guidelines

You need to assign a rating to each prompt thinking about the complexity for an assistant and if the intent is clear. A very good prompt is one that is challenging but also very clear in the intent of the user.

You can use keyboard shortcuts (the numbers) to quickly rate the examples.

If you find some pattern, you can also use the search box and filters as well as the bulk labelling mode; please use this with care and only when you find a clear pattern (e.g., prompts that are completely incorrect and share a common issue).

If you are unsure about your answer, you can click on the tag and then “Save as a draft” to save it for later. In the case that you feel unequipped to rate a specific prompt, you can use the “Discard” button.

## Ratings

### 1. Very Bad:

The prompt doesn't communicate its purpose, is non-sensical or is in a language other than English. The prompt assumes the usage of tools or capabilities that don’t apply to this model, like generating an image or scraping a website.
*Examples:* >"Do the thing." >“Hello!” >"asdajflajfada” >“Quiero que redactes una entrada de blog.” >"Extract data from a website.” >“Tell me how you feel when someone insults you.” ### 2. Bad: Suggests a goal but lacks clarity and coherence. *Examples:* >"Find me stuff about that thing, you know?" >“Write something.” >"Tell me about this thing." >"Can you help with this?" >"I need to know more." ### 3. Ok: The intent is understandable, but it's missing information to complete the task. *Examples:* >"I need information on something important." >“Write a blogpost.” ### 4. Good: Presents a clear goal and necessary information, effectively directing the AI, but the prompt could be more specific. *Examples:* >"Provide a summary of renewable energy sources." >“Tell me about Sean Connery.” >"Explain global warming." ### 5. Very Good: Comprehensive and explicit, leaving no room for ambiguity. Perfectly guides the AI and includes details. *Examples:* >"Compare the efficiency and environmental impact of solar and wind energy, including recent advancements and case studies from 2023." >“Make a list of 5 plant-based recipes that I can try that don’t have red peppers as an ingredient.” #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
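Each record stores every annotator response under `quality` as a list of `{status, user_id, value}` dicts, as in the HuggingFace-format example above, so aggregating the ratings is a small reduction. A minimal illustrative sketch — the tie-break toward the lowest label is an arbitrary choice for the example, not part of the dataset spec:

```python
from collections import Counter

def majority_quality(record):
    """Majority-vote quality label among submitted responses.

    Ties are broken toward the lowest label; records with no
    submitted responses yield None.
    """
    votes = [r["value"] for r in record["quality"] if r["status"] == "submitted"]
    if not votes:
        return None
    counts = Counter(votes)
    best = max(counts.values())
    return min(label for label, n in counts.items() if n == best)

# Shapes mirror the HuggingFace-format record shown in the card above.
record = {"quality": [
    {"status": "submitted", "user_id": "annotator-a", "value": "4"},
    {"status": "submitted", "user_id": "annotator-b", "value": "2"},
    {"status": "discarded", "user_id": "annotator-c", "value": "0"},
]}
print(majority_quality(record))  # a 4-vs-2 tie resolves to "2"
```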
argilla/prompt-collective
[ "size_categories:n<1K", "rlfh", "argilla", "human-feedback", "region:us" ]
2024-02-09T09:46:34+00:00
{"size_categories": "n<1K", "tags": ["rlfh", "argilla", "human-feedback"]}
2024-02-17T15:53:32+00:00
[]
[]
TAGS #size_categories-n<1K #rlfh #argilla #human-feedback #region-us
Dataset Card for prompt-collective ================================== This dataset has been created with Argilla. As shown in the sections below, this dataset can be loaded into Argilla as explained in Load with Argilla, or used directly with the 'datasets' library in Load with 'datasets'. Dataset Description ------------------- * Homepage: URL * Repository: URL * Paper: * Leaderboard: * Point of Contact: ### Dataset Summary This dataset contains: * A dataset configuration file conforming to the Argilla dataset format named 'URL'. This configuration file will be used to configure the dataset when using the 'FeedbackDataset.from\_huggingface' method in Argilla. * Dataset records in a format compatible with HuggingFace 'datasets'. These records will be loaded automatically when using 'FeedbackDataset.from\_huggingface' and can be loaded independently using the 'datasets' library via 'load\_dataset'. * The annotation guidelines that have been used for building and curating the dataset, if they've been defined in Argilla. ### Load with Argilla To load with Argilla, you'll just need to install Argilla as 'pip install argilla --upgrade' and then use the following code: ### Load with 'datasets' To load this dataset with 'datasets', you'll just need to install 'datasets' as 'pip install datasets --upgrade' and then use the following code: ### Supported Tasks and Leaderboards This dataset can contain multiple fields, questions and responses so it can be used for different NLP tasks, depending on the configuration. The dataset structure is described in the Dataset Structure section. There are no leaderboards associated with this dataset. ### Languages Dataset Structure ----------------- ### Data in Argilla The dataset is created in Argilla with: fields, questions, suggestions, metadata, vectors, and guidelines. The fields are the dataset records themselves, for the moment just text fields are supported. 
These are the ones that will be used to provide responses to the questions. The questions are the questions that will be asked to the annotators. They can be of different types, such as rating, text, label\_selection, multi\_label\_selection, or ranking. The suggestions are human or machine generated recommendations for each question to assist the annotator during the annotation process, so those are always linked to the existing questions, and named appending "-suggestion" and "-suggestion-metadata" to those, containing the value/s of the suggestion and its metadata, respectively. So on, the possible values are the same as in the table above, but the column name is appended with "-suggestion" and the metadata is appended with "-suggestion-metadata". The metadata is a dictionary that can be used to provide additional information about the dataset record. This can be useful to provide additional context to the annotators, or to provide additional information about the dataset record itself. For example, you can use this to provide a link to the original source of the dataset record, or to provide additional information about the dataset record itself, such as the author, the date, or the source. The metadata is always optional, and can be potentially linked to the 'metadata\_properties' defined in the dataset configuration file in 'URL'. The guidelines, are optional as well, and are just a plain string that can be used to provide instructions to the annotators. Find those in the annotation guidelines section. ### Data Instances An example of a dataset instance in Argilla looks as follows: While the same record in HuggingFace 'datasets' looks as follows: ### Data Fields Among the dataset fields, we differentiate between the following: * Fields: These are the dataset records themselves, for the moment just text fields are supported. These are the ones that will be used to provide responses to the questions. + prompt is of type 'text'. 
* Questions: These are the questions that will be asked to the annotators. They can be of different types, such as 'RatingQuestion', 'TextQuestion', 'LabelQuestion', 'MultiLabelQuestion', and 'RankingQuestion'. + quality is of type 'label\_selection' with the following allowed values ['0', '1', '2', '3', '4']. * Suggestions: As of Argilla 1.13.0, the suggestions have been included to provide the annotators with suggestions to ease or assist during the annotation process. Suggestions are linked to the existing questions, are always optional, and contain not just the suggestion itself, but also the metadata linked to it, if applicable. + (optional) quality-suggestion is of type 'label\_selection' with the following allowed values ['0', '1', '2', '3', '4']. Additionally, we also have two more fields that are optional and are the following: * metadata: This is an optional field that can be used to provide additional information about the dataset record. This can be useful to provide additional context to the annotators, or to provide additional information about the dataset record itself. For example, you can use this to provide a link to the original source of the dataset record, or to provide additional information about the dataset record itself, such as the author, the date, or the source. The metadata is always optional, and can be potentially linked to the 'metadata\_properties' defined in the dataset configuration file in 'URL'. * external\_id: This is an optional field that can be used to provide an external ID for the dataset record. This can be useful if you want to link the dataset record to an external resource, such as a database or a file. ### Data Splits The dataset contains a single split, which is 'train'. Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? 
### Annotations #### Annotation guidelines Task ==== We are collaboratively creating a database of prompts in English. Our aim is to identify effective prompts and understand the interaction between AI-generated and human-generated prompts. The focus is on functionality and precision. Guidelines ---------- You need to assign a rating to each prompt considering its complexity for an assistant and whether the intent is clear. A very good prompt is one that is challenging but also very clear in the intent of the user. You can use keyboard shortcuts (the numbers) to quickly rate the examples. If you find some pattern, you can also use the search box and filters as well as the bulk labelling mode; please use this with care and only when you find a clear pattern (e.g., prompts that are completely incorrect and share a common issue). If you are unsure about your answer, you can click on the tag and then “Save as a draft” to save it for later. If you feel unequipped to rate a specific prompt, you can use the “Discard” button. Ratings ------- ### 1. Very Bad: The prompt doesn't communicate its purpose, is nonsensical or is in a language other than English. The prompt assumes the usage of tools or capabilities that don’t apply to this model, like generating an image or scraping a website. *Examples:* > > "Do the thing." > “Hello!” > "asdajflajfada” > “Quiero que redactes una entrada de blog.” > "Extract data from a website.” > “Tell me how you feel when someone insults you.” > > > ### 2. Bad: Suggests a goal but lacks clarity and coherence. *Examples:* > > "Find me stuff about that thing, you know?" > “Write something.” > "Tell me about this thing." > "Can you help with this?" > "I need to know more." > > > ### 3. Ok: The intent is understandable, but it's missing information to complete the task. *Examples:* > > "I need information on something important." > “Write a blogpost.” > > > ### 4. 
Good: Presents a clear goal and necessary information, effectively directing the AI, but the prompt could be more specific. *Examples:* > > "Provide a summary of renewable energy sources." > “Tell me about Sean Connery.” > "Explain global warming." > > > ### 5. Very Good: Comprehensive and explicit, leaving no room for ambiguity. Perfectly guides the AI and includes details. *Examples:* > > "Compare the efficiency and environmental impact of solar and wind energy, including recent advancements and case studies from 2023." > “Make a list of 5 plant-based recipes that I can try that don’t have red peppers as an ingredient.” > > > #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information ### Contributions
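To make the record layout described above concrete (one `prompt` text field, one `quality` label-selection question with allowed values `'0'`–`'4'`, and an optional `quality-suggestion`), here is a minimal Python sketch of such a record and a validity check. The record contents and dictionary shape are illustrative assumptions, not the exact Argilla wire format; the actual dataset should be loaded with `FeedbackDataset.from_huggingface` as the card describes.

```python
# Hypothetical sketch of one record in the layout described above:
# a "prompt" text field, a "quality" label_selection question with
# allowed values "0".."4", and an optional machine suggestion.

ALLOWED_QUALITY_VALUES = {"0", "1", "2", "3", "4"}

record = {
    "fields": {"prompt": "Explain global warming."},  # example prompt (hypothetical)
    "responses": [{"values": {"quality": {"value": "3"}}}],
    "suggestions": [{"question_name": "quality", "value": "4"}],
    "external_id": None,  # optional link to an external resource
    "metadata": None,     # optional extra context for annotators
}

def valid_quality(record: dict) -> bool:
    """Check that every response and suggestion uses an allowed label."""
    for resp in record.get("responses", []):
        if resp["values"]["quality"]["value"] not in ALLOWED_QUALITY_VALUES:
            return False
    for sug in record.get("suggestions", []):
        if sug["value"] not in ALLOWED_QUALITY_VALUES:
            return False
    return True

print(valid_quality(record))  # True
```

The check mirrors the constraint stated in the Data Fields section: both annotator responses and suggestions are tied to the same closed label set.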
[ "### Dataset Summary\n\n\nThis dataset contains:\n\n\n* A dataset configuration file conforming to the Argilla dataset format named 'URL'. This configuration file will be used to configure the dataset when using the 'FeedbackDataset.from\\_huggingface' method in Argilla.\n* Dataset records in a format compatible with HuggingFace 'datasets'. These records will be loaded automatically when using 'FeedbackDataset.from\\_huggingface' and can be loaded independently using the 'datasets' library via 'load\\_dataset'.\n* The annotation guidelines that have been used for building and curating the dataset, if they've been defined in Argilla.", "### Load with Argilla\n\n\nTo load with Argilla, you'll just need to install Argilla as 'pip install argilla --upgrade' and then use the following code:", "### Load with 'datasets'\n\n\nTo load this dataset with 'datasets', you'll just need to install 'datasets' as 'pip install datasets --upgrade' and then use the following code:", "### Supported Tasks and Leaderboards\n\n\nThis dataset can contain multiple fields, questions and responses so it can be used for different NLP tasks, depending on the configuration. The dataset structure is described in the Dataset Structure section.\n\n\nThere are no leaderboards associated with this dataset.", "### Languages\n\n\nDataset Structure\n-----------------", "### Data in Argilla\n\n\nThe dataset is created in Argilla with: fields, questions, suggestions, metadata, vectors, and guidelines.\n\n\nThe fields are the dataset records themselves, for the moment just text fields are supported. These are the ones that will be used to provide responses to the questions.\n\n\n\nThe questions are the questions that will be asked to the annotators. 
They can be of different types, such as rating, text, label\\_selection, multi\\_label\\_selection, or ranking.\n\n\n\nThe suggestions are human or machine generated recommendations for each question to assist the annotator during the annotation process, so those are always linked to the existing questions, and named appending \"-suggestion\" and \"-suggestion-metadata\" to those, containing the value/s of the suggestion and its metadata, respectively. So on, the possible values are the same as in the table above, but the column name is appended with \"-suggestion\" and the metadata is appended with \"-suggestion-metadata\".\n\n\nThe metadata is a dictionary that can be used to provide additional information about the dataset record. This can be useful to provide additional context to the annotators, or to provide additional information about the dataset record itself. For example, you can use this to provide a link to the original source of the dataset record, or to provide additional information about the dataset record itself, such as the author, the date, or the source. The metadata is always optional, and can be potentially linked to the 'metadata\\_properties' defined in the dataset configuration file in 'URL'.\n\n\n\nThe guidelines, are optional as well, and are just a plain string that can be used to provide instructions to the annotators. Find those in the annotation guidelines section.", "### Data Instances\n\n\nAn example of a dataset instance in Argilla looks as follows:\n\n\nWhile the same record in HuggingFace 'datasets' looks as follows:", "### Data Fields\n\n\nAmong the dataset fields, we differentiate between the following:\n\n\n* Fields: These are the dataset records themselves, for the moment just text fields are supported. These are the ones that will be used to provide responses to the questions.\n\n\n\t+ prompt is of type 'text'.\n* Questions: These are the questions that will be asked to the annotators. 
They can be of different types, such as 'RatingQuestion', 'TextQuestion', 'LabelQuestion', 'MultiLabelQuestion', and 'RankingQuestion'.\n\n\n\t+ quality is of type 'label\\_selection' with the following allowed values ['0', '1', '2', '3', '4'].\n* Suggestions: As of Argilla 1.13.0, the suggestions have been included to provide the annotators with suggestions to ease or assist during the annotation process. Suggestions are linked to the existing questions, are always optional, and contain not just the suggestion itself, but also the metadata linked to it, if applicable.\n\n\n\t+ (optional) quality-suggestion is of type 'label\\_selection' with the following allowed values ['0', '1', '2', '3', '4'].\n\n\nAdditionally, we also have two more fields that are optional and are the following:\n\n\n* metadata: This is an optional field that can be used to provide additional information about the dataset record. This can be useful to provide additional context to the annotators, or to provide additional information about the dataset record itself. For example, you can use this to provide a link to the original source of the dataset record, or to provide additional information about the dataset record itself, such as the author, the date, or the source. The metadata is always optional, and can be potentially linked to the 'metadata\\_properties' defined in the dataset configuration file in 'URL'.\n* external\\_id: This is an optional field that can be used to provide an external ID for the dataset record. 
This can be useful if you want to link the dataset record to an external resource, such as a database or a file.", "### Data Splits\n\n\nThe dataset contains a single split, which is 'train'.\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation guidelines\n\n\nTask\n====\n\n\nWe are collaboratively creating a database of prompts in English. Our aim is to identify effective prompts and understand the interaction between AI-generated and human-generated prompts. The focus is on functionality and precision.\n\n\nGuidelines\n----------\n\n\nYou need to assign a rating to each prompt thinking about the complexity for an assistant and if the intent is clear. A very good prompt is one that is challenging but also very clear in the intent of the user.\n\n\nYou can use keyboard shortcuts (the numbers) to quickly rate the examples.\n\n\nIf you find some pattern, you can also use the search box and filters as well as the bulk labelling mode, please use this with care and only when you find a clear pattern (e.g., prompts that are completely incorrect and share a common issue).\n\n\nIf you are unsure about your answer, you can click on the tag and then “Save as a draft” to save if for later. In the case that you feel unequipped of rating a specific prompt, you can use the “Discard” button.\n\n\nRatings\n-------", "### 1. Very Bad:\n\n\nThe prompt doesn't communicate its purpose, is non-sensical or is in a language other than English.\n\n\nThe prompt assumes the usage of tools or capabilities that don’t apply to this model, like generating an image or scraping a website.\n\n\n*Examples:*\n\n\n\n> \n> \"Do the thing.\"\n> “Hello!”\n> \"asdajflajfada”\n> “Quiero que redactes una entrada de blog.”\n> \"Extract data from a website.”\n> “Tell me how you feel when someone insults you.”\n> \n> \n>", "### 2. 
Bad:\n\n\nSuggests a goal but lacks clarity and coherence.\n\n\n*Examples:*\n\n\n\n> \n> \"Find me stuff about that thing, you know?\"\n> “Write something.”\n> \"Tell me about this thing.\"\n> \"Can you help with this?\"\n> \"I need to know more.\"\n> \n> \n>", "### 3. Ok:\n\n\nThe intent is understandable, but it's missing information to complete the task.\n\n\n*Examples:*\n\n\n\n> \n> \"I need information on something important.\"\n> “Write a blogpost.”\n> \n> \n>", "### 4. Good:\n\n\nPresents a clear goal and necessary information, effectively directing the AI, but the prompt could be more specific.\n\n\n*Examples:*\n\n\n\n> \n> \"Provide a summary of renewable energy sources.\"\n> “Tell me about Sean Connery.”\n> \"Explain global warming.\"\n> \n> \n>", "### 5. Very Good:\n\n\nComprehensive and explicit, leaving no room for ambiguity. Perfectly guides the AI and includes details.\n\n\n*Examples:*\n\n\n\n> \n> \"Compare the efficiency and environmental impact of solar and wind energy, including recent advancements and case studies from 2023.\"\n> “Make a list of 5 plant-based recipes that I can try that don’t have red peppers as an ingredient.”\n> \n> \n>", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ "TAGS\n#size_categories-n<1K #rlfh #argilla #human-feedback #region-us \n", "### Dataset Summary\n\n\nThis dataset contains:\n\n\n* A dataset configuration file conforming to the Argilla dataset format named 'URL'. This configuration file will be used to configure the dataset when using the 'FeedbackDataset.from\\_huggingface' method in Argilla.\n* Dataset records in a format compatible with HuggingFace 'datasets'. These records will be loaded automatically when using 'FeedbackDataset.from\\_huggingface' and can be loaded independently using the 'datasets' library via 'load\\_dataset'.\n* The annotation guidelines that have been used for building and curating the dataset, if they've been defined in Argilla.", "### Load with Argilla\n\n\nTo load with Argilla, you'll just need to install Argilla as 'pip install argilla --upgrade' and then use the following code:", "### Load with 'datasets'\n\n\nTo load this dataset with 'datasets', you'll just need to install 'datasets' as 'pip install datasets --upgrade' and then use the following code:", "### Supported Tasks and Leaderboards\n\n\nThis dataset can contain multiple fields, questions and responses so it can be used for different NLP tasks, depending on the configuration. The dataset structure is described in the Dataset Structure section.\n\n\nThere are no leaderboards associated with this dataset.", "### Languages\n\n\nDataset Structure\n-----------------", "### Data in Argilla\n\n\nThe dataset is created in Argilla with: fields, questions, suggestions, metadata, vectors, and guidelines.\n\n\nThe fields are the dataset records themselves, for the moment just text fields are supported. These are the ones that will be used to provide responses to the questions.\n\n\n\nThe questions are the questions that will be asked to the annotators. 
They can be of different types, such as rating, text, label\\_selection, multi\\_label\\_selection, or ranking.\n\n\n\nThe suggestions are human or machine generated recommendations for each question to assist the annotator during the annotation process, so those are always linked to the existing questions, and named appending \"-suggestion\" and \"-suggestion-metadata\" to those, containing the value/s of the suggestion and its metadata, respectively. So on, the possible values are the same as in the table above, but the column name is appended with \"-suggestion\" and the metadata is appended with \"-suggestion-metadata\".\n\n\nThe metadata is a dictionary that can be used to provide additional information about the dataset record. This can be useful to provide additional context to the annotators, or to provide additional information about the dataset record itself. For example, you can use this to provide a link to the original source of the dataset record, or to provide additional information about the dataset record itself, such as the author, the date, or the source. The metadata is always optional, and can be potentially linked to the 'metadata\\_properties' defined in the dataset configuration file in 'URL'.\n\n\n\nThe guidelines, are optional as well, and are just a plain string that can be used to provide instructions to the annotators. Find those in the annotation guidelines section.", "### Data Instances\n\n\nAn example of a dataset instance in Argilla looks as follows:\n\n\nWhile the same record in HuggingFace 'datasets' looks as follows:", "### Data Fields\n\n\nAmong the dataset fields, we differentiate between the following:\n\n\n* Fields: These are the dataset records themselves, for the moment just text fields are supported. These are the ones that will be used to provide responses to the questions.\n\n\n\t+ prompt is of type 'text'.\n* Questions: These are the questions that will be asked to the annotators. 
They can be of different types, such as 'RatingQuestion', 'TextQuestion', 'LabelQuestion', 'MultiLabelQuestion', and 'RankingQuestion'.\n\n\n\t+ quality is of type 'label\\_selection' with the following allowed values ['0', '1', '2', '3', '4'].\n* Suggestions: As of Argilla 1.13.0, the suggestions have been included to provide the annotators with suggestions to ease or assist during the annotation process. Suggestions are linked to the existing questions, are always optional, and contain not just the suggestion itself, but also the metadata linked to it, if applicable.\n\n\n\t+ (optional) quality-suggestion is of type 'label\\_selection' with the following allowed values ['0', '1', '2', '3', '4'].\n\n\nAdditionally, we also have two more fields that are optional and are the following:\n\n\n* metadata: This is an optional field that can be used to provide additional information about the dataset record. This can be useful to provide additional context to the annotators, or to provide additional information about the dataset record itself. For example, you can use this to provide a link to the original source of the dataset record, or to provide additional information about the dataset record itself, such as the author, the date, or the source. The metadata is always optional, and can be potentially linked to the 'metadata\\_properties' defined in the dataset configuration file in 'URL'.\n* external\\_id: This is an optional field that can be used to provide an external ID for the dataset record. 
This can be useful if you want to link the dataset record to an external resource, such as a database or a file.", "### Data Splits\n\n\nThe dataset contains a single split, which is 'train'.\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation guidelines\n\n\nTask\n====\n\n\nWe are collaboratively creating a database of prompts in English. Our aim is to identify effective prompts and understand the interaction between AI-generated and human-generated prompts. The focus is on functionality and precision.\n\n\nGuidelines\n----------\n\n\nYou need to assign a rating to each prompt thinking about the complexity for an assistant and if the intent is clear. A very good prompt is one that is challenging but also very clear in the intent of the user.\n\n\nYou can use keyboard shortcuts (the numbers) to quickly rate the examples.\n\n\nIf you find some pattern, you can also use the search box and filters as well as the bulk labelling mode, please use this with care and only when you find a clear pattern (e.g., prompts that are completely incorrect and share a common issue).\n\n\nIf you are unsure about your answer, you can click on the tag and then “Save as a draft” to save if for later. In the case that you feel unequipped of rating a specific prompt, you can use the “Discard” button.\n\n\nRatings\n-------", "### 1. Very Bad:\n\n\nThe prompt doesn't communicate its purpose, is non-sensical or is in a language other than English.\n\n\nThe prompt assumes the usage of tools or capabilities that don’t apply to this model, like generating an image or scraping a website.\n\n\n*Examples:*\n\n\n\n> \n> \"Do the thing.\"\n> “Hello!”\n> \"asdajflajfada”\n> “Quiero que redactes una entrada de blog.”\n> \"Extract data from a website.”\n> “Tell me how you feel when someone insults you.”\n> \n> \n>", "### 2. 
Bad:\n\n\nSuggests a goal but lacks clarity and coherence.\n\n\n*Examples:*\n\n\n\n> \n> \"Find me stuff about that thing, you know?\"\n> “Write something.”\n> \"Tell me about this thing.\"\n> \"Can you help with this?\"\n> \"I need to know more.\"\n> \n> \n>", "### 3. Ok:\n\n\nThe intent is understandable, but it's missing information to complete the task.\n\n\n*Examples:*\n\n\n\n> \n> \"I need information on something important.\"\n> “Write a blogpost.”\n> \n> \n>", "### 4. Good:\n\n\nPresents a clear goal and necessary information, effectively directing the AI, but the prompt could be more specific.\n\n\n*Examples:*\n\n\n\n> \n> \"Provide a summary of renewable energy sources.\"\n> “Tell me about Sean Connery.”\n> \"Explain global warming.\"\n> \n> \n>", "### 5. Very Good:\n\n\nComprehensive and explicit, leaving no room for ambiguity. Perfectly guides the AI and includes details.\n\n\n*Examples:*\n\n\n\n> \n> \"Compare the efficiency and environmental impact of solar and wind energy, including recent advancements and case studies from 2023.\"\n> “Make a list of 5 plant-based recipes that I can try that don’t have red peppers as an ingredient.”\n> \n> \n>", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
c8120097dd104b40f9396cb19bf7d0dcd3a4cbc5
### BioMed_general_NER This dataset consists of manually annotated biomedical abstracts from PubMed, drug descriptions from FDA and abstracts from patents. Twenty-four different entity types were extracted, including those specific to medicine and biology as well as general ones such as location and organization. This is one of the biggest datasets of its kind, consisting of 4840 annotated abstracts. ### Classes Here's a description for each of the labels: 1. **CHEMICALS** - Represents substances with distinct molecular composition, often involved in various biological or industrial processes. 2. **CLINICAL DRUG** - Refers to pharmaceutical substances developed for medical use, aimed at preventing, treating, or managing diseases. 3. **BODY SUBSTANCE** - Denotes materials or substances within the human body, including fluids, tissues, and other biological components. 4. **ANATOMICAL STRUCTURE** - Describes specific parts or structures within an organism's body, often related to anatomy and physiology. 5. **CELLS AND THEIR COMPONENTS** - Encompasses the basic structural and functional units of living organisms, along with their constituent elements. 6. **GENE AND GENE PRODUCTS** - Involves genetic information and the resultant products, such as proteins, that play a crucial role in biological processes. 7. **INTELLECTUAL PROPERTY** - Pertains to legal rights associated with creations of the mind, including inventions, literary and artistic works, and trademarks. 8. **LANGUAGE** - Relates to linguistic elements, including words, phrases, and language constructs, often in the context of communication or analysis. 9. **REGULATION OR LAW** - Represents rules, guidelines, or legal frameworks established by authorities to govern behavior, practices, or procedures. 10. **GEOGRAPHICAL AREAS** - Refers to specific regions, locations, or places on the Earth's surface, often associated with particular characteristics or significance. 11. 
**ORGANISM** - Denotes a living being, typically a plant, animal, or microorganism, as a distinct biological entity. 12. **GROUP** - Encompasses collections of individuals with shared characteristics, interests, or affiliations. 13. **PERSON** - Represents an individual human being, often considered as a distinct entity with personal attributes. 14. **ORGANIZATION** - Refers to structured entities, institutions, or companies formed for specific purposes or activities. 15. **PRODUCT** - Encompasses tangible or intangible items resulting from a process, often associated with manufacturing or creation. 16. **LOCATION** - Describes a specific place or position, whether physical or abstract, with potential relevance to various contexts. 17. **PHENOTYPE** - Represents the observable characteristics or traits of an organism, resulting from the interaction of its genotype with the environment. 18. **DISORDER** - Denotes abnormal conditions or disruptions in the normal functioning of a biological organism, often associated with diseases or medical conditions. 19. **SIGNALING MOLECULES** - Involves molecules that transmit signals within and between cells, playing a crucial role in various physiological processes. 20. **EVENT** - Describes occurrences or happenings at a specific time and place, often with significance or impact. 21. **MEDICAL PROCEDURE** - Involves specific actions or interventions conducted for medical purposes, such as surgeries, diagnostic tests, or therapeutic treatments. 22. **ACTIVITY** - Encompasses actions, behaviors, or processes undertaken by individuals, groups, or entities. 23. **FUNCTION** - Describes the purpose or role of a biological or mechanical entity, focusing on its intended or inherent activities. 24. **MONEY** - Represents currency or financial assets used as a medium of exchange, often in the context of economic transactions. 
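As a small illustration of the 24-class taxonomy above, the following Python sketch builds the label/id mappings a token-classification model would typically need. The label strings are copied from the class list above, but the exact spelling stored in the dataset files is an assumption and should be verified against the actual data.

```python
# The 24 entity types listed above, as label<->id mappings for a
# token-classification model. NOTE: the exact label strings stored in
# the dataset are an assumption; verify against the actual records.

ENTITY_TYPES = [
    "CHEMICALS", "CLINICAL DRUG", "BODY SUBSTANCE", "ANATOMICAL STRUCTURE",
    "CELLS AND THEIR COMPONENTS", "GENE AND GENE PRODUCTS",
    "INTELLECTUAL PROPERTY", "LANGUAGE", "REGULATION OR LAW",
    "GEOGRAPHICAL AREAS", "ORGANISM", "GROUP", "PERSON", "ORGANIZATION",
    "PRODUCT", "LOCATION", "PHENOTYPE", "DISORDER", "SIGNALING MOLECULES",
    "EVENT", "MEDICAL PROCEDURE", "ACTIVITY", "FUNCTION", "MONEY",
]

label2id = {label: i for i, label in enumerate(ENTITY_TYPES)}
id2label = {i: label for label, i in label2id.items()}

print(len(label2id))  # 24
```

The records themselves would typically be pulled with the standard `datasets` call, e.g. `load_dataset("knowledgator/biomed_NER")`.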
### Datasources * PubMed - biomedical articles abstracts; * FDA - drugs descriptions; * Patents - patents abstracts;
knowledgator/biomed_NER
[ "task_categories:token-classification", "size_categories:1K<n<10K", "language:en", "license:apache-2.0", "biomed NER", "PubMed NER", "biology", "medicine", "NER", "entity extraction", "region:us" ]
2024-02-09T10:52:45+00:00
{"language": ["en"], "license": "apache-2.0", "size_categories": ["1K<n<10K"], "task_categories": ["token-classification"], "pretty_name": "biomed-ner", "tags": ["biomed NER", "PubMed NER", "biology", "medicine", "NER", "entity extraction"]}
2024-02-12T17:35:16+00:00
[]
[ "en" ]
TAGS #task_categories-token-classification #size_categories-1K<n<10K #language-English #license-apache-2.0 #biomed NER #PubMed NER #biology #medicine #NER #entity extraction #region-us
### BioMed_general_NER This dataset consists of manually annotated biomedical abstracts from PubMed, drug descriptions from FDA and abstracts from patents. It was extracted 24 different entity types, including those specific to medicine and biology and general such as location and organization as well. This is one of the biggest datasets of such kind, which consists of 4840 annotated abstracts. ### Classes Here's a description for each of the labels: 1. CHEMICALS - Represents substances with distinct molecular composition, often involved in various biological or industrial processes. 2. CLINICAL DRUG - Refers to pharmaceutical substances developed for medical use, aimed at preventing, treating, or managing diseases. 3. BODY SUBSTANCE - Denotes materials or substances within the human body, including fluids, tissues, and other biological components. 4. ANATOMICAL STRUCTURE - Describes specific parts or structures within an organism's body, often related to anatomy and physiology. 5. CELLS AND THEIR COMPONENTS - Encompasses the basic structural and functional units of living organisms, along with their constituent elements. 6. GENE AND GENE PRODUCTS - Involves genetic information and the resultant products, such as proteins, that play a crucial role in biological processes. 7. INTELLECTUAL PROPERTY - Pertains to legal rights associated with creations of the mind, including inventions, literary and artistic works, and trademarks. 8. LANGUAGE - Relates to linguistic elements, including words, phrases, and language constructs, often in the context of communication or analysis. 9. REGULATION OR LAW - Represents rules, guidelines, or legal frameworks established by authorities to govern behavior, practices, or procedures. 10. GEOGRAPHICAL AREAS - Refers to specific regions, locations, or places on the Earth's surface, often associated with particular characteristics or significance. 11. 
ORGANISM - Denotes a living being, typically a plant, animal, or microorganism, as a distinct biological entity. 12. GROUP - Encompasses collections of individuals with shared characteristics, interests, or affiliations. 13. PERSON - Represents an individual human being, often considered as a distinct entity with personal attributes. 14. ORGANIZATION - Refers to structured entities, institutions, or companies formed for specific purposes or activities. 15. PRODUCT - Encompasses tangible or intangible items resulting from a process, often associated with manufacturing or creation. 16. LOCATION - Describes a specific place or position, whether physical or abstract, with potential relevance to various contexts. 17. PHENOTYPE - Represents the observable characteristics or traits of an organism, resulting from the interaction of its genotype with the environment. 18. DISORDER - Denotes abnormal conditions or disruptions in the normal functioning of a biological organism, often associated with diseases or medical conditions. 19. SIGNALING MOLECULES - Involves molecules that transmit signals within and between cells, playing a crucial role in various physiological processes. 20. EVENT - Describes occurrences or happenings at a specific time and place, often with significance or impact. 21. MEDICAL PROCEDURE - Involves specific actions or interventions conducted for medical purposes, such as surgeries, diagnostic tests, or therapeutic treatments. 22. ACTIVITY - Encompasses actions, behaviors, or processes undertaken by individuals, groups, or entities. 23. FUNCTION - Describes the purpose or role of a biological or mechanical entity, focusing on its intended or inherent activities. 24. MONEY - Represents currency or financial assets used as a medium of exchange, often in the context of economic transactions. ### Datasources * PubMed - biomedical articles abstracts; * FDA - drugs descriptions; * Patents - patents abstracts;
[ "### BioMed_general_NER\n\nThis dataset consists of manually annotated biomedical abstracts from PubMed, drug descriptions from FDA and abstracts from patents. \n\nIt was extracted 24 different entity types, including those specific to medicine and biology and general such as location and organization as well. \n\nThis is one of the biggest datasets of such kind, which consists of 4840 annotated abstracts.", "### Classes\nHere's a description for each of the labels:\n\n1. CHEMICALS - Represents substances with distinct molecular composition, often involved in various biological or industrial processes.\n\n2. CLINICAL DRUG - Refers to pharmaceutical substances developed for medical use, aimed at preventing, treating, or managing diseases.\n\n3. BODY SUBSTANCE - Denotes materials or substances within the human body, including fluids, tissues, and other biological components.\n\n4. ANATOMICAL STRUCTURE - Describes specific parts or structures within an organism's body, often related to anatomy and physiology.\n\n5. CELLS AND THEIR COMPONENTS - Encompasses the basic structural and functional units of living organisms, along with their constituent elements.\n\n6. GENE AND GENE PRODUCTS - Involves genetic information and the resultant products, such as proteins, that play a crucial role in biological processes.\n\n7. INTELLECTUAL PROPERTY - Pertains to legal rights associated with creations of the mind, including inventions, literary and artistic works, and trademarks.\n\n8. LANGUAGE - Relates to linguistic elements, including words, phrases, and language constructs, often in the context of communication or analysis.\n\n9. REGULATION OR LAW - Represents rules, guidelines, or legal frameworks established by authorities to govern behavior, practices, or procedures.\n\n10. GEOGRAPHICAL AREAS - Refers to specific regions, locations, or places on the Earth's surface, often associated with particular characteristics or significance.\n\n11. 
ORGANISM - Denotes a living being, typically a plant, animal, or microorganism, as a distinct biological entity.\n\n12. GROUP - Encompasses collections of individuals with shared characteristics, interests, or affiliations.\n\n13. PERSON - Represents an individual human being, often considered as a distinct entity with personal attributes.\n\n14. ORGANIZATION - Refers to structured entities, institutions, or companies formed for specific purposes or activities.\n\n15. PRODUCT - Encompasses tangible or intangible items resulting from a process, often associated with manufacturing or creation.\n\n16. LOCATION - Describes a specific place or position, whether physical or abstract, with potential relevance to various contexts.\n\n17. PHENOTYPE - Represents the observable characteristics or traits of an organism, resulting from the interaction of its genotype with the environment.\n\n18. DISORDER - Denotes abnormal conditions or disruptions in the normal functioning of a biological organism, often associated with diseases or medical conditions.\n\n19. SIGNALING MOLECULES - Involves molecules that transmit signals within and between cells, playing a crucial role in various physiological processes.\n\n20. EVENT - Describes occurrences or happenings at a specific time and place, often with significance or impact.\n\n21. MEDICAL PROCEDURE - Involves specific actions or interventions conducted for medical purposes, such as surgeries, diagnostic tests, or therapeutic treatments.\n\n22. ACTIVITY - Encompasses actions, behaviors, or processes undertaken by individuals, groups, or entities.\n\n23. FUNCTION - Describes the purpose or role of a biological or mechanical entity, focusing on its intended or inherent activities.\n\n24. MONEY - Represents currency or financial assets used as a medium of exchange, often in the context of economic transactions.", "### Datasources\n* PubMed - biomedical articles abstracts;\n* FDA - drugs descriptions;\n* Patents - patents abstracts;" ]
[ "TAGS\n#task_categories-token-classification #size_categories-1K<n<10K #language-English #license-apache-2.0 #biomed NER #PubMed NER #biology #medicine #NER #entity extraction #region-us \n", "### BioMed_general_NER\n\nThis dataset consists of manually annotated biomedical abstracts from PubMed, drug descriptions from FDA and abstracts from patents. \n\n24 different entity types were extracted, including those specific to medicine and biology as well as general ones such as location and organization. \n\nThis is one of the biggest datasets of its kind, which consists of 4840 annotated abstracts.", "### Classes\nHere's a description for each of the labels:\n\n1. CHEMICALS - Represents substances with distinct molecular composition, often involved in various biological or industrial processes.\n\n2. CLINICAL DRUG - Refers to pharmaceutical substances developed for medical use, aimed at preventing, treating, or managing diseases.\n\n3. BODY SUBSTANCE - Denotes materials or substances within the human body, including fluids, tissues, and other biological components.\n\n4. ANATOMICAL STRUCTURE - Describes specific parts or structures within an organism's body, often related to anatomy and physiology.\n\n5. CELLS AND THEIR COMPONENTS - Encompasses the basic structural and functional units of living organisms, along with their constituent elements.\n\n6. GENE AND GENE PRODUCTS - Involves genetic information and the resultant products, such as proteins, that play a crucial role in biological processes.\n\n7. INTELLECTUAL PROPERTY - Pertains to legal rights associated with creations of the mind, including inventions, literary and artistic works, and trademarks.\n\n8. LANGUAGE - Relates to linguistic elements, including words, phrases, and language constructs, often in the context of communication or analysis.\n\n9. REGULATION OR LAW - Represents rules, guidelines, or legal frameworks established by authorities to govern behavior, practices, or procedures.\n\n10. 
GEOGRAPHICAL AREAS - Refers to specific regions, locations, or places on the Earth's surface, often associated with particular characteristics or significance.\n\n11. ORGANISM - Denotes a living being, typically a plant, animal, or microorganism, as a distinct biological entity.\n\n12. GROUP - Encompasses collections of individuals with shared characteristics, interests, or affiliations.\n\n13. PERSON - Represents an individual human being, often considered as a distinct entity with personal attributes.\n\n14. ORGANIZATION - Refers to structured entities, institutions, or companies formed for specific purposes or activities.\n\n15. PRODUCT - Encompasses tangible or intangible items resulting from a process, often associated with manufacturing or creation.\n\n16. LOCATION - Describes a specific place or position, whether physical or abstract, with potential relevance to various contexts.\n\n17. PHENOTYPE - Represents the observable characteristics or traits of an organism, resulting from the interaction of its genotype with the environment.\n\n18. DISORDER - Denotes abnormal conditions or disruptions in the normal functioning of a biological organism, often associated with diseases or medical conditions.\n\n19. SIGNALING MOLECULES - Involves molecules that transmit signals within and between cells, playing a crucial role in various physiological processes.\n\n20. EVENT - Describes occurrences or happenings at a specific time and place, often with significance or impact.\n\n21. MEDICAL PROCEDURE - Involves specific actions or interventions conducted for medical purposes, such as surgeries, diagnostic tests, or therapeutic treatments.\n\n22. ACTIVITY - Encompasses actions, behaviors, or processes undertaken by individuals, groups, or entities.\n\n23. FUNCTION - Describes the purpose or role of a biological or mechanical entity, focusing on its intended or inherent activities.\n\n24. 
MONEY - Represents currency or financial assets used as a medium of exchange, often in the context of economic transactions.", "### Datasources\n* PubMed - biomedical articles abstracts;\n* FDA - drugs descriptions;\n* Patents - patents abstracts;" ]
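The 24 entity classes above map naturally onto the integer label vocabulary a token-classification model expects. A minimal sketch (the exact label strings used in the released annotations are an assumption here, taken verbatim from the class list above):

```python
# Build label2id / id2label mappings for the 24 entity types described above.
# The exact label strings in the released dataset are an assumption.
ENTITY_TYPES = [
    "CHEMICALS", "CLINICAL DRUG", "BODY SUBSTANCE", "ANATOMICAL STRUCTURE",
    "CELLS AND THEIR COMPONENTS", "GENE AND GENE PRODUCTS",
    "INTELLECTUAL PROPERTY", "LANGUAGE", "REGULATION OR LAW",
    "GEOGRAPHICAL AREAS", "ORGANISM", "GROUP", "PERSON", "ORGANIZATION",
    "PRODUCT", "LOCATION", "PHENOTYPE", "DISORDER", "SIGNALING MOLECULES",
    "EVENT", "MEDICAL PROCEDURE", "ACTIVITY", "FUNCTION", "MONEY",
]

# Integer ids follow the order of the class list in the card.
label2id = {label: i for i, label in enumerate(ENTITY_TYPES)}
id2label = {i: label for label, i in label2id.items()}
```

Such a mapping can be passed to a model config (e.g. via `id2label`/`label2id`) so predictions decode back to the class names listed in the card.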
d6cdb44bbcc3c8e31b464d76567f1fc10591cbf5
# Meta Math Filtered This is a combined and filtered version (with all redundant rows removed) of [meta-math/MetaMathQA](https://huggingface.co/datasets/meta-math/MetaMathQA) and [meta-math/MetaMathQA-40K](https://huggingface.co/datasets/meta-math/MetaMathQA-40K) ## Usage ```python from datasets import load_dataset dataset = load_dataset("Sharathhebbar24/MetaMathQA", split="train") ```
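The "removed all the redundant rows" step amounts to an exact-match deduplication over the combined source rows. The original filtering script is not published with the card, so this is only an illustrative sketch keyed on the card's `prompt` column:

```python
# Illustrative sketch: combine two row lists and drop exact duplicate prompts,
# keeping the first occurrence. This is not the card author's actual script.
def dedupe(rows):
    seen = set()
    unique = []
    for row in rows:
        key = row["prompt"]
        if key not in seen:
            seen.add(key)
            unique.append(row)
    return unique

combined = dedupe(
    [{"prompt": "Q1"}, {"prompt": "Q2"}]    # e.g. rows from MetaMathQA
    + [{"prompt": "Q2"}, {"prompt": "Q3"}]  # e.g. rows from MetaMathQA-40K
)
```

On real data the same idea applies after concatenating the two loaded datasets; only the key field and the row source change.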
Sharathhebbar24/MetaMathQA
[ "task_categories:text-generation", "size_categories:10K<n<100K", "language:en", "license:apache-2.0", "math", "region:us" ]
2024-02-09T11:12:56+00:00
{"language": ["en"], "license": "apache-2.0", "size_categories": ["10K<n<100K"], "task_categories": ["text-generation"], "dataset_info": {"features": [{"name": "prompt", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 314797881, "num_examples": 390062}], "download_size": 131558400, "dataset_size": 314797881}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "tags": ["math"]}
2024-02-09T12:12:23+00:00
[]
[ "en" ]
TAGS #task_categories-text-generation #size_categories-10K<n<100K #language-English #license-apache-2.0 #math #region-us
# Meta Math Filtered This is a combined and filtered (removed all the redundant rows) version of meta-math/MetaMathQA and meta-math/MetaMathQA-40K ## Usage
[ "# Meta Math Filtered\n\nThis is a combined and filtered (removed all the redundant rows) version of meta-math/MetaMathQA and meta-math/MetaMathQA-40K", "## Usage" ]
[ "TAGS\n#task_categories-text-generation #size_categories-10K<n<100K #language-English #license-apache-2.0 #math #region-us \n", "# Meta Math Filtered\n\nThis is a combined and filtered (removed all the redundant rows) version of meta-math/MetaMathQA and meta-math/MetaMathQA-40K", "## Usage" ]
e2e2a86588b2cb4af7c9486631f03f274cf118c6
# Dataset Card for "UltrachatRheumatology" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
cmcmaster/UltrachatRheumatology
[ "region:us" ]
2024-02-09T11:26:44+00:00
{"dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "prompt_id", "dtype": "string"}, {"name": "messages", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}, {"name": "text", "dtype": "string"}, {"name": "rheumatology_terms", "dtype": "bool"}], "splits": [{"name": "train", "num_bytes": 21726399.91286039, "num_examples": 2008}], "download_size": 14157810, "dataset_size": 21726399.91286039}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2024-02-09T11:26:55+00:00
[]
[]
TAGS #region-us
# Dataset Card for "UltrachatRheumatology" More Information needed
[ "# Dataset Card for \"UltrachatRheumatology\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"UltrachatRheumatology\"\n\nMore Information needed" ]
0b896e1e3c442b8b78b9a466e15865db7e98a3d2
## Description A channel generating latent splatting scenes (of single objects for now) ## Model SVD ## Gaussian Model LGM ## Voice Muted # Tags - Latent Splatting # Style close-up, single, pro photoshoot, neutral background, empty room ## Prompt A channel generating animated videos of various people, animals and items (e.g. robots) in an empty, bare room. Those videos will usually just perform simple actions, like dancing, talking, etc.
jbilcke-hf/ai-tube-latent-splats
[ "license:cc-by-nc-4.0", "region:us" ]
2024-02-09T11:30:04+00:00
{"license": "cc-by-nc-4.0", "pretty_name": "Latent Splats"}
2024-02-09T11:35:29+00:00
[]
[]
TAGS #license-cc-by-nc-4.0 #region-us
## Description A channel generating latent splatting scenes (of single objects for now) ## Model SVD ## Gaussian Model LGM ## Voice Muted # Tags - Latent Splatting # Style close-up, single, pro photoshoot, neutral background, empty room ## Prompt A channel generating animated videos of various people, animals and items (e.g. robots) in an empty, bare room. Those videos will usually just perform simple actions, like dancing, talking, etc.
[ "## Description\n\nA channel generating latent splatting scenes (of single objects for now)", "## Model\n\nSVD", "## Gaussian Model\n\nLGM", "## Voice\n\nMuted", "# Tags\n\n- Latent Splatting", "# Style\n\nclose-up, single, pro photoshoot, neutral background, empty room", "## Prompt\n\nA channel generating animated videos of various people, animals and items (e.g. robots) in an empty, bare room.\nThose videos will usually just perform simple actions, like dancing, talking, etc." ]
[ "TAGS\n#license-cc-by-nc-4.0 #region-us \n", "## Description\n\nA channel generating latent splatting scenes (of single objects for now)", "## Model\n\nSVD", "## Gaussian Model\n\nLGM", "## Voice\n\nMuted", "# Tags\n\n- Latent Splatting", "# Style\n\nclose-up, single, pro photoshoot, neutral background, empty room", "## Prompt\n\nA channel generating animated videos of various people, animals and items (e.g. robots) in an empty, bare room.\nThose videos will usually just perform simple actions, like dancing, talking, etc." ]
f07e8361519bc5eab1955ef5d3a016d7a0f66da7
# Dataset Card for Evaluation run of paulml/NeuralOmniBeagleMBX-v3-7B <!-- Provide a quick summary of the dataset. --> Dataset automatically created during the evaluation run of model [paulml/NeuralOmniBeagleMBX-v3-7B](https://huggingface.co/paulml/NeuralOmniBeagleMBX-v3-7B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)). To load the details from a run, you can for instance do the following: ```python from datasets import load_dataset data = load_dataset("open-llm-leaderboard/details_paulml__NeuralOmniBeagleMBX-v3-7B", "harness_winogrande_5", split="train") ``` ## Latest results These are the [latest results from run 2024-02-09T12:11:50.137118](https://huggingface.co/datasets/open-llm-leaderboard/details_paulml__NeuralOmniBeagleMBX-v3-7B/blob/main/results_2024-02-09T12-11-50.137118.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the "latest" split for each eval): ```python { "all": { "acc": 0.6560628824238156, "acc_stderr": 0.032085023933652076, "acc_norm": 0.6554003887109401, "acc_norm_stderr": 0.032757489090542165, "mc1": 0.5813953488372093, "mc1_stderr": 0.01727001528447687, "mc2": 0.731035589148516, "mc2_stderr": 0.014493252212944781 }, "harness|arc:challenge|25": { "acc": 0.7081911262798635, "acc_stderr": 0.013284525292403511, "acc_norm": 0.7337883959044369, "acc_norm_stderr": 0.012915774781523195 }, "harness|hellaswag|10": { "acc": 0.7141007767377017, "acc_stderr": 0.004509181919322848, "acc_norm": 0.8890659231228839, "acc_norm_stderr": 0.0031340865499526853 }, "harness|hendrycksTest-abstract_algebra|5": { "acc": 0.34, "acc_stderr": 0.04760952285695235, "acc_norm": 0.34, "acc_norm_stderr": 0.04760952285695235 }, "harness|hendrycksTest-anatomy|5": { "acc": 0.6666666666666666, "acc_stderr": 0.04072314811876837, "acc_norm": 0.6666666666666666, "acc_norm_stderr": 0.04072314811876837 }, "harness|hendrycksTest-astronomy|5": { "acc": 0.6776315789473685, "acc_stderr": 0.03803510248351585, "acc_norm": 0.6776315789473685, "acc_norm_stderr": 0.03803510248351585 }, "harness|hendrycksTest-business_ethics|5": { "acc": 0.64, "acc_stderr": 0.04824181513244218, "acc_norm": 0.64, "acc_norm_stderr": 0.04824181513244218 }, "harness|hendrycksTest-clinical_knowledge|5": { "acc": 0.7094339622641509, "acc_stderr": 0.027943219989337135, "acc_norm": 0.7094339622641509, "acc_norm_stderr": 0.027943219989337135 }, "harness|hendrycksTest-college_biology|5": { "acc": 0.7986111111111112, "acc_stderr": 0.03353647469713839, "acc_norm": 0.7986111111111112, "acc_norm_stderr": 0.03353647469713839 }, "harness|hendrycksTest-college_chemistry|5": { "acc": 0.49, "acc_stderr": 0.05024183937956912, "acc_norm": 0.49, "acc_norm_stderr": 0.05024183937956912 }, "harness|hendrycksTest-college_computer_science|5": { "acc": 0.56, "acc_stderr": 0.049888765156985884, "acc_norm": 0.56, 
"acc_norm_stderr": 0.049888765156985884 }, "harness|hendrycksTest-college_mathematics|5": { "acc": 0.35, "acc_stderr": 0.047937248544110196, "acc_norm": 0.35, "acc_norm_stderr": 0.047937248544110196 }, "harness|hendrycksTest-college_medicine|5": { "acc": 0.6763005780346821, "acc_stderr": 0.0356760379963917, "acc_norm": 0.6763005780346821, "acc_norm_stderr": 0.0356760379963917 }, "harness|hendrycksTest-college_physics|5": { "acc": 0.4117647058823529, "acc_stderr": 0.048971049527263666, "acc_norm": 0.4117647058823529, "acc_norm_stderr": 0.048971049527263666 }, "harness|hendrycksTest-computer_security|5": { "acc": 0.74, "acc_stderr": 0.04408440022768078, "acc_norm": 0.74, "acc_norm_stderr": 0.04408440022768078 }, "harness|hendrycksTest-conceptual_physics|5": { "acc": 0.5702127659574469, "acc_stderr": 0.03236214467715564, "acc_norm": 0.5702127659574469, "acc_norm_stderr": 0.03236214467715564 }, "harness|hendrycksTest-econometrics|5": { "acc": 0.45614035087719296, "acc_stderr": 0.046854730419077895, "acc_norm": 0.45614035087719296, "acc_norm_stderr": 0.046854730419077895 }, "harness|hendrycksTest-electrical_engineering|5": { "acc": 0.5586206896551724, "acc_stderr": 0.04137931034482757, "acc_norm": 0.5586206896551724, "acc_norm_stderr": 0.04137931034482757 }, "harness|hendrycksTest-elementary_mathematics|5": { "acc": 0.4126984126984127, "acc_stderr": 0.02535574126305527, "acc_norm": 0.4126984126984127, "acc_norm_stderr": 0.02535574126305527 }, "harness|hendrycksTest-formal_logic|5": { "acc": 0.5, "acc_stderr": 0.04472135954999579, "acc_norm": 0.5, "acc_norm_stderr": 0.04472135954999579 }, "harness|hendrycksTest-global_facts|5": { "acc": 0.33, "acc_stderr": 0.04725815626252604, "acc_norm": 0.33, "acc_norm_stderr": 0.04725815626252604 }, "harness|hendrycksTest-high_school_biology|5": { "acc": 0.7903225806451613, "acc_stderr": 0.023157879349083525, "acc_norm": 0.7903225806451613, "acc_norm_stderr": 0.023157879349083525 }, "harness|hendrycksTest-high_school_chemistry|5": { 
"acc": 0.5123152709359606, "acc_stderr": 0.035169204442208966, "acc_norm": 0.5123152709359606, "acc_norm_stderr": 0.035169204442208966 }, "harness|hendrycksTest-high_school_computer_science|5": { "acc": 0.71, "acc_stderr": 0.045604802157206845, "acc_norm": 0.71, "acc_norm_stderr": 0.045604802157206845 }, "harness|hendrycksTest-high_school_european_history|5": { "acc": 0.7575757575757576, "acc_stderr": 0.03346409881055953, "acc_norm": 0.7575757575757576, "acc_norm_stderr": 0.03346409881055953 }, "harness|hendrycksTest-high_school_geography|5": { "acc": 0.803030303030303, "acc_stderr": 0.028335609732463362, "acc_norm": 0.803030303030303, "acc_norm_stderr": 0.028335609732463362 }, "harness|hendrycksTest-high_school_government_and_politics|5": { "acc": 0.8963730569948186, "acc_stderr": 0.021995311963644237, "acc_norm": 0.8963730569948186, "acc_norm_stderr": 0.021995311963644237 }, "harness|hendrycksTest-high_school_macroeconomics|5": { "acc": 0.6820512820512821, "acc_stderr": 0.023610884308927865, "acc_norm": 0.6820512820512821, "acc_norm_stderr": 0.023610884308927865 }, "harness|hendrycksTest-high_school_mathematics|5": { "acc": 0.34444444444444444, "acc_stderr": 0.028972648884844267, "acc_norm": 0.34444444444444444, "acc_norm_stderr": 0.028972648884844267 }, "harness|hendrycksTest-high_school_microeconomics|5": { "acc": 0.680672268907563, "acc_stderr": 0.030283995525884396, "acc_norm": 0.680672268907563, "acc_norm_stderr": 0.030283995525884396 }, "harness|hendrycksTest-high_school_physics|5": { "acc": 0.3708609271523179, "acc_stderr": 0.03943966699183629, "acc_norm": 0.3708609271523179, "acc_norm_stderr": 0.03943966699183629 }, "harness|hendrycksTest-high_school_psychology|5": { "acc": 0.8440366972477065, "acc_stderr": 0.01555580271359017, "acc_norm": 0.8440366972477065, "acc_norm_stderr": 0.01555580271359017 }, "harness|hendrycksTest-high_school_statistics|5": { "acc": 0.5601851851851852, "acc_stderr": 0.0338517797604481, "acc_norm": 0.5601851851851852, 
"acc_norm_stderr": 0.0338517797604481 }, "harness|hendrycksTest-high_school_us_history|5": { "acc": 0.8431372549019608, "acc_stderr": 0.02552472232455335, "acc_norm": 0.8431372549019608, "acc_norm_stderr": 0.02552472232455335 }, "harness|hendrycksTest-high_school_world_history|5": { "acc": 0.810126582278481, "acc_stderr": 0.025530100460233494, "acc_norm": 0.810126582278481, "acc_norm_stderr": 0.025530100460233494 }, "harness|hendrycksTest-human_aging|5": { "acc": 0.6816143497757847, "acc_stderr": 0.03126580522513713, "acc_norm": 0.6816143497757847, "acc_norm_stderr": 0.03126580522513713 }, "harness|hendrycksTest-human_sexuality|5": { "acc": 0.7862595419847328, "acc_stderr": 0.0359546161177469, "acc_norm": 0.7862595419847328, "acc_norm_stderr": 0.0359546161177469 }, "harness|hendrycksTest-international_law|5": { "acc": 0.768595041322314, "acc_stderr": 0.03849856098794088, "acc_norm": 0.768595041322314, "acc_norm_stderr": 0.03849856098794088 }, "harness|hendrycksTest-jurisprudence|5": { "acc": 0.8055555555555556, "acc_stderr": 0.038260763248848646, "acc_norm": 0.8055555555555556, "acc_norm_stderr": 0.038260763248848646 }, "harness|hendrycksTest-logical_fallacies|5": { "acc": 0.754601226993865, "acc_stderr": 0.03380939813943354, "acc_norm": 0.754601226993865, "acc_norm_stderr": 0.03380939813943354 }, "harness|hendrycksTest-machine_learning|5": { "acc": 0.42857142857142855, "acc_stderr": 0.04697113923010212, "acc_norm": 0.42857142857142855, "acc_norm_stderr": 0.04697113923010212 }, "harness|hendrycksTest-management|5": { "acc": 0.7669902912621359, "acc_stderr": 0.04185832598928315, "acc_norm": 0.7669902912621359, "acc_norm_stderr": 0.04185832598928315 }, "harness|hendrycksTest-marketing|5": { "acc": 0.8760683760683761, "acc_stderr": 0.021586494001281365, "acc_norm": 0.8760683760683761, "acc_norm_stderr": 0.021586494001281365 }, "harness|hendrycksTest-medical_genetics|5": { "acc": 0.7, "acc_stderr": 0.046056618647183814, "acc_norm": 0.7, "acc_norm_stderr": 
0.046056618647183814 }, "harness|hendrycksTest-miscellaneous|5": { "acc": 0.8275862068965517, "acc_stderr": 0.013507943909371802, "acc_norm": 0.8275862068965517, "acc_norm_stderr": 0.013507943909371802 }, "harness|hendrycksTest-moral_disputes|5": { "acc": 0.7283236994219653, "acc_stderr": 0.023948512905468348, "acc_norm": 0.7283236994219653, "acc_norm_stderr": 0.023948512905468348 }, "harness|hendrycksTest-moral_scenarios|5": { "acc": 0.43687150837988825, "acc_stderr": 0.016588680864530626, "acc_norm": 0.43687150837988825, "acc_norm_stderr": 0.016588680864530626 }, "harness|hendrycksTest-nutrition|5": { "acc": 0.7058823529411765, "acc_stderr": 0.026090162504279056, "acc_norm": 0.7058823529411765, "acc_norm_stderr": 0.026090162504279056 }, "harness|hendrycksTest-philosophy|5": { "acc": 0.7170418006430869, "acc_stderr": 0.025583062489984813, "acc_norm": 0.7170418006430869, "acc_norm_stderr": 0.025583062489984813 }, "harness|hendrycksTest-prehistory|5": { "acc": 0.7469135802469136, "acc_stderr": 0.024191808600712995, "acc_norm": 0.7469135802469136, "acc_norm_stderr": 0.024191808600712995 }, "harness|hendrycksTest-professional_accounting|5": { "acc": 0.4858156028368794, "acc_stderr": 0.02981549448368206, "acc_norm": 0.4858156028368794, "acc_norm_stderr": 0.02981549448368206 }, "harness|hendrycksTest-professional_law|5": { "acc": 0.4706649282920469, "acc_stderr": 0.012748238397365549, "acc_norm": 0.4706649282920469, "acc_norm_stderr": 0.012748238397365549 }, "harness|hendrycksTest-professional_medicine|5": { "acc": 0.6727941176470589, "acc_stderr": 0.02850145286039655, "acc_norm": 0.6727941176470589, "acc_norm_stderr": 0.02850145286039655 }, "harness|hendrycksTest-professional_psychology|5": { "acc": 0.6879084967320261, "acc_stderr": 0.018745011201277657, "acc_norm": 0.6879084967320261, "acc_norm_stderr": 0.018745011201277657 }, "harness|hendrycksTest-public_relations|5": { "acc": 0.6727272727272727, "acc_stderr": 0.0449429086625209, "acc_norm": 0.6727272727272727, 
"acc_norm_stderr": 0.0449429086625209 }, "harness|hendrycksTest-security_studies|5": { "acc": 0.726530612244898, "acc_stderr": 0.028535560337128448, "acc_norm": 0.726530612244898, "acc_norm_stderr": 0.028535560337128448 }, "harness|hendrycksTest-sociology|5": { "acc": 0.8308457711442786, "acc_stderr": 0.026508590656233264, "acc_norm": 0.8308457711442786, "acc_norm_stderr": 0.026508590656233264 }, "harness|hendrycksTest-us_foreign_policy|5": { "acc": 0.86, "acc_stderr": 0.034873508801977704, "acc_norm": 0.86, "acc_norm_stderr": 0.034873508801977704 }, "harness|hendrycksTest-virology|5": { "acc": 0.572289156626506, "acc_stderr": 0.038515976837185335, "acc_norm": 0.572289156626506, "acc_norm_stderr": 0.038515976837185335 }, "harness|hendrycksTest-world_religions|5": { "acc": 0.8421052631578947, "acc_stderr": 0.027966785859160893, "acc_norm": 0.8421052631578947, "acc_norm_stderr": 0.027966785859160893 }, "harness|truthfulqa:mc|0": { "mc1": 0.5813953488372093, "mc1_stderr": 0.01727001528447687, "mc2": 0.731035589148516, "mc2_stderr": 0.014493252212944781 }, "harness|winogrande|5": { "acc": 0.8421468034727704, "acc_stderr": 0.010247165248719763 }, "harness|gsm8k|5": { "acc": 0.709628506444276, "acc_stderr": 0.012503592481818948 } } ``` ## Dataset Details ### Dataset Description <!-- Provide a longer summary of what this dataset is. --> - **Curated by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] ### Dataset Sources [optional] <!-- Provide the basic links for the dataset. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the dataset is intended to be used. --> ### Direct Use <!-- This section describes suitable use cases for the dataset. 
--> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. --> [More Information Needed] ## Dataset Structure <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> [More Information Needed] ## Dataset Creation ### Curation Rationale <!-- Motivation for the creation of this dataset. --> [More Information Needed] ### Source Data <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). --> #### Data Collection and Processing <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. --> [More Information Needed] #### Who are the source data producers? <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. --> [More Information Needed] ### Annotations [optional] <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. --> #### Annotation process <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. --> [More Information Needed] #### Who are the annotators? <!-- This section describes the people or systems who created the annotations. 
--> [More Information Needed] #### Personal and Sensitive Information <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. ## Citation [optional] <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Dataset Card Authors [optional] [More Information Needed] ## Dataset Card Contact [More Information Needed]
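The aggregated accuracies reported in the "Latest results" JSON are per-task means; how the leaderboard's own `all` aggregate weights subtasks is not spelled out in this card, so the following is only a minimal sketch of averaging `acc_norm` over a few of the task entries shown above:

```python
# Sketch: recompute an aggregate accuracy as the mean of per-task "acc_norm"
# values. Only a handful of the tasks from the results JSON are included;
# the leaderboard's exact aggregation rules are an assumption here.
results = {
    "harness|hendrycksTest-anatomy|5": {"acc_norm": 0.6666666666666666},
    "harness|hendrycksTest-astronomy|5": {"acc_norm": 0.6776315789473685},
    "harness|hendrycksTest-computer_security|5": {"acc_norm": 0.74},
}

scores = [task["acc_norm"] for task in results.values()]
mean_acc_norm = sum(scores) / len(scores)
```

The same pattern applies to the full results file once it is loaded (e.g. with `json.load`), filtering keys by the `harness|` prefix of interest.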
open-llm-leaderboard/details_paulml__NeuralOmniBeagleMBX-v3-7B
[ "region:us" ]
2024-02-09T12:14:12+00:00
{"pretty_name": "Evaluation run of paulml/NeuralOmniBeagleMBX-v3-7B", "dataset_summary": "Dataset automatically created during the evaluation run of model [paulml/NeuralOmniBeagleMBX-v3-7B](https://huggingface.co/paulml/NeuralOmniBeagleMBX-v3-7B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_paulml__NeuralOmniBeagleMBX-v3-7B\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2024-02-09T12:11:50.137118](https://huggingface.co/datasets/open-llm-leaderboard/details_paulml__NeuralOmniBeagleMBX-v3-7B/blob/main/results_2024-02-09T12-11-50.137118.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6560628824238156,\n \"acc_stderr\": 0.032085023933652076,\n \"acc_norm\": 0.6554003887109401,\n \"acc_norm_stderr\": 0.032757489090542165,\n \"mc1\": 0.5813953488372093,\n \"mc1_stderr\": 0.01727001528447687,\n \"mc2\": 0.731035589148516,\n \"mc2_stderr\": 0.014493252212944781\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.7081911262798635,\n \"acc_stderr\": 0.013284525292403511,\n \"acc_norm\": 0.7337883959044369,\n \"acc_norm_stderr\": 0.012915774781523195\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.7141007767377017,\n \"acc_stderr\": 0.004509181919322848,\n \"acc_norm\": 0.8890659231228839,\n \"acc_norm_stderr\": 0.0031340865499526853\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.34,\n \"acc_stderr\": 0.04760952285695235,\n \"acc_norm\": 0.34,\n \"acc_norm_stderr\": 0.04760952285695235\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6666666666666666,\n \"acc_stderr\": 0.04072314811876837,\n \"acc_norm\": 0.6666666666666666,\n \"acc_norm_stderr\": 0.04072314811876837\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.6776315789473685,\n \"acc_stderr\": 0.03803510248351585,\n \"acc_norm\": 0.6776315789473685,\n \"acc_norm_stderr\": 0.03803510248351585\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.64,\n \"acc_stderr\": 0.04824181513244218,\n \"acc_norm\": 0.64,\n \"acc_norm_stderr\": 0.04824181513244218\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.7094339622641509,\n \"acc_stderr\": 0.027943219989337135,\n \"acc_norm\": 0.7094339622641509,\n \"acc_norm_stderr\": 0.027943219989337135\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7986111111111112,\n \"acc_stderr\": 0.03353647469713839,\n \"acc_norm\": 0.7986111111111112,\n \"acc_norm_stderr\": 0.03353647469713839\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.49,\n 
\"acc_stderr\": 0.05024183937956912,\n \"acc_norm\": 0.49,\n \"acc_norm_stderr\": 0.05024183937956912\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.56,\n \"acc_stderr\": 0.049888765156985884,\n \"acc_norm\": 0.56,\n \"acc_norm_stderr\": 0.049888765156985884\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.35,\n \"acc_stderr\": 0.047937248544110196,\n \"acc_norm\": 0.35,\n \"acc_norm_stderr\": 0.047937248544110196\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6763005780346821,\n \"acc_stderr\": 0.0356760379963917,\n \"acc_norm\": 0.6763005780346821,\n \"acc_norm_stderr\": 0.0356760379963917\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.4117647058823529,\n \"acc_stderr\": 0.048971049527263666,\n \"acc_norm\": 0.4117647058823529,\n \"acc_norm_stderr\": 0.048971049527263666\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.74,\n \"acc_stderr\": 0.04408440022768078,\n \"acc_norm\": 0.74,\n \"acc_norm_stderr\": 0.04408440022768078\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.5702127659574469,\n \"acc_stderr\": 0.03236214467715564,\n \"acc_norm\": 0.5702127659574469,\n \"acc_norm_stderr\": 0.03236214467715564\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.45614035087719296,\n \"acc_stderr\": 0.046854730419077895,\n \"acc_norm\": 0.45614035087719296,\n \"acc_norm_stderr\": 0.046854730419077895\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.5586206896551724,\n \"acc_stderr\": 0.04137931034482757,\n \"acc_norm\": 0.5586206896551724,\n \"acc_norm_stderr\": 0.04137931034482757\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.4126984126984127,\n \"acc_stderr\": 0.02535574126305527,\n \"acc_norm\": 0.4126984126984127,\n \"acc_norm_stderr\": 0.02535574126305527\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.5,\n \"acc_stderr\": 0.04472135954999579,\n 
\"acc_norm\": 0.5,\n \"acc_norm_stderr\": 0.04472135954999579\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.33,\n \"acc_stderr\": 0.04725815626252604,\n \"acc_norm\": 0.33,\n \"acc_norm_stderr\": 0.04725815626252604\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7903225806451613,\n \"acc_stderr\": 0.023157879349083525,\n \"acc_norm\": 0.7903225806451613,\n \"acc_norm_stderr\": 0.023157879349083525\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.5123152709359606,\n \"acc_stderr\": 0.035169204442208966,\n \"acc_norm\": 0.5123152709359606,\n \"acc_norm_stderr\": 0.035169204442208966\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.71,\n \"acc_stderr\": 0.045604802157206845,\n \"acc_norm\": 0.71,\n \"acc_norm_stderr\": 0.045604802157206845\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.7575757575757576,\n \"acc_stderr\": 0.03346409881055953,\n \"acc_norm\": 0.7575757575757576,\n \"acc_norm_stderr\": 0.03346409881055953\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.803030303030303,\n \"acc_stderr\": 0.028335609732463362,\n \"acc_norm\": 0.803030303030303,\n \"acc_norm_stderr\": 0.028335609732463362\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.8963730569948186,\n \"acc_stderr\": 0.021995311963644237,\n \"acc_norm\": 0.8963730569948186,\n \"acc_norm_stderr\": 0.021995311963644237\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.6820512820512821,\n \"acc_stderr\": 0.023610884308927865,\n \"acc_norm\": 0.6820512820512821,\n \"acc_norm_stderr\": 0.023610884308927865\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.34444444444444444,\n \"acc_stderr\": 0.028972648884844267,\n \"acc_norm\": 0.34444444444444444,\n \"acc_norm_stderr\": 0.028972648884844267\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n 
\"acc\": 0.680672268907563,\n \"acc_stderr\": 0.030283995525884396,\n \"acc_norm\": 0.680672268907563,\n \"acc_norm_stderr\": 0.030283995525884396\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.3708609271523179,\n \"acc_stderr\": 0.03943966699183629,\n \"acc_norm\": 0.3708609271523179,\n \"acc_norm_stderr\": 0.03943966699183629\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.8440366972477065,\n \"acc_stderr\": 0.01555580271359017,\n \"acc_norm\": 0.8440366972477065,\n \"acc_norm_stderr\": 0.01555580271359017\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.5601851851851852,\n \"acc_stderr\": 0.0338517797604481,\n \"acc_norm\": 0.5601851851851852,\n \"acc_norm_stderr\": 0.0338517797604481\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.8431372549019608,\n \"acc_stderr\": 0.02552472232455335,\n \"acc_norm\": 0.8431372549019608,\n \"acc_norm_stderr\": 0.02552472232455335\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.810126582278481,\n \"acc_stderr\": 0.025530100460233494,\n \"acc_norm\": 0.810126582278481,\n \"acc_norm_stderr\": 0.025530100460233494\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6816143497757847,\n \"acc_stderr\": 0.03126580522513713,\n \"acc_norm\": 0.6816143497757847,\n \"acc_norm_stderr\": 0.03126580522513713\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.7862595419847328,\n \"acc_stderr\": 0.0359546161177469,\n \"acc_norm\": 0.7862595419847328,\n \"acc_norm_stderr\": 0.0359546161177469\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.768595041322314,\n \"acc_stderr\": 0.03849856098794088,\n \"acc_norm\": 0.768595041322314,\n \"acc_norm_stderr\": 0.03849856098794088\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.8055555555555556,\n \"acc_stderr\": 0.038260763248848646,\n \"acc_norm\": 0.8055555555555556,\n \"acc_norm_stderr\": 
0.038260763248848646\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.754601226993865,\n \"acc_stderr\": 0.03380939813943354,\n \"acc_norm\": 0.754601226993865,\n \"acc_norm_stderr\": 0.03380939813943354\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.42857142857142855,\n \"acc_stderr\": 0.04697113923010212,\n \"acc_norm\": 0.42857142857142855,\n \"acc_norm_stderr\": 0.04697113923010212\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.7669902912621359,\n \"acc_stderr\": 0.04185832598928315,\n \"acc_norm\": 0.7669902912621359,\n \"acc_norm_stderr\": 0.04185832598928315\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8760683760683761,\n \"acc_stderr\": 0.021586494001281365,\n \"acc_norm\": 0.8760683760683761,\n \"acc_norm_stderr\": 0.021586494001281365\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.7,\n \"acc_stderr\": 0.046056618647183814,\n \"acc_norm\": 0.7,\n \"acc_norm_stderr\": 0.046056618647183814\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8275862068965517,\n \"acc_stderr\": 0.013507943909371802,\n \"acc_norm\": 0.8275862068965517,\n \"acc_norm_stderr\": 0.013507943909371802\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.7283236994219653,\n \"acc_stderr\": 0.023948512905468348,\n \"acc_norm\": 0.7283236994219653,\n \"acc_norm_stderr\": 0.023948512905468348\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.43687150837988825,\n \"acc_stderr\": 0.016588680864530626,\n \"acc_norm\": 0.43687150837988825,\n \"acc_norm_stderr\": 0.016588680864530626\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.7058823529411765,\n \"acc_stderr\": 0.026090162504279056,\n \"acc_norm\": 0.7058823529411765,\n \"acc_norm_stderr\": 0.026090162504279056\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7170418006430869,\n \"acc_stderr\": 0.025583062489984813,\n \"acc_norm\": 0.7170418006430869,\n \"acc_norm_stderr\": 
0.025583062489984813\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.7469135802469136,\n \"acc_stderr\": 0.024191808600712995,\n \"acc_norm\": 0.7469135802469136,\n \"acc_norm_stderr\": 0.024191808600712995\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.4858156028368794,\n \"acc_stderr\": 0.02981549448368206,\n \"acc_norm\": 0.4858156028368794,\n \"acc_norm_stderr\": 0.02981549448368206\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4706649282920469,\n \"acc_stderr\": 0.012748238397365549,\n \"acc_norm\": 0.4706649282920469,\n \"acc_norm_stderr\": 0.012748238397365549\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.6727941176470589,\n \"acc_stderr\": 0.02850145286039655,\n \"acc_norm\": 0.6727941176470589,\n \"acc_norm_stderr\": 0.02850145286039655\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.6879084967320261,\n \"acc_stderr\": 0.018745011201277657,\n \"acc_norm\": 0.6879084967320261,\n \"acc_norm_stderr\": 0.018745011201277657\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6727272727272727,\n \"acc_stderr\": 0.0449429086625209,\n \"acc_norm\": 0.6727272727272727,\n \"acc_norm_stderr\": 0.0449429086625209\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.726530612244898,\n \"acc_stderr\": 0.028535560337128448,\n \"acc_norm\": 0.726530612244898,\n \"acc_norm_stderr\": 0.028535560337128448\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8308457711442786,\n \"acc_stderr\": 0.026508590656233264,\n \"acc_norm\": 0.8308457711442786,\n \"acc_norm_stderr\": 0.026508590656233264\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.86,\n \"acc_stderr\": 0.034873508801977704,\n \"acc_norm\": 0.86,\n \"acc_norm_stderr\": 0.034873508801977704\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.572289156626506,\n \"acc_stderr\": 0.038515976837185335,\n \"acc_norm\": 0.572289156626506,\n 
\"acc_norm_stderr\": 0.038515976837185335\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.8421052631578947,\n \"acc_stderr\": 0.027966785859160893,\n \"acc_norm\": 0.8421052631578947,\n \"acc_norm_stderr\": 0.027966785859160893\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.5813953488372093,\n \"mc1_stderr\": 0.01727001528447687,\n \"mc2\": 0.731035589148516,\n \"mc2_stderr\": 0.014493252212944781\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.8421468034727704,\n \"acc_stderr\": 0.010247165248719763\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.709628506444276,\n \"acc_stderr\": 0.012503592481818948\n }\n}\n```", "repo_url": "https://huggingface.co/paulml/NeuralOmniBeagleMBX-v3-7B", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2024_02_09T12_11_50.137118", "path": ["**/details_harness|arc:challenge|25_2024-02-09T12-11-50.137118.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2024-02-09T12-11-50.137118.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2024_02_09T12_11_50.137118", "path": ["**/details_harness|gsm8k|5_2024-02-09T12-11-50.137118.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2024-02-09T12-11-50.137118.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2024_02_09T12_11_50.137118", "path": ["**/details_harness|hellaswag|10_2024-02-09T12-11-50.137118.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2024-02-09T12-11-50.137118.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2024_02_09T12_11_50.137118", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-09T12-11-50.137118.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-02-09T12-11-50.137118.parquet", 
"**/details_harness|hendrycksTest-astronomy|5_2024-02-09T12-11-50.137118.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-02-09T12-11-50.137118.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-09T12-11-50.137118.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-02-09T12-11-50.137118.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-02-09T12-11-50.137118.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-02-09T12-11-50.137118.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-02-09T12-11-50.137118.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-02-09T12-11-50.137118.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-02-09T12-11-50.137118.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-02-09T12-11-50.137118.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-09T12-11-50.137118.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-02-09T12-11-50.137118.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-09T12-11-50.137118.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-09T12-11-50.137118.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-02-09T12-11-50.137118.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-02-09T12-11-50.137118.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-02-09T12-11-50.137118.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-09T12-11-50.137118.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-09T12-11-50.137118.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-09T12-11-50.137118.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-02-09T12-11-50.137118.parquet", 
"**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-09T12-11-50.137118.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-09T12-11-50.137118.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-09T12-11-50.137118.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-09T12-11-50.137118.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-02-09T12-11-50.137118.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-09T12-11-50.137118.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-09T12-11-50.137118.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-09T12-11-50.137118.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-09T12-11-50.137118.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-02-09T12-11-50.137118.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-02-09T12-11-50.137118.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-02-09T12-11-50.137118.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-02-09T12-11-50.137118.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-09T12-11-50.137118.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-02-09T12-11-50.137118.parquet", "**/details_harness|hendrycksTest-management|5_2024-02-09T12-11-50.137118.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-02-09T12-11-50.137118.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-02-09T12-11-50.137118.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-02-09T12-11-50.137118.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-02-09T12-11-50.137118.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-09T12-11-50.137118.parquet", 
"**/details_harness|hendrycksTest-nutrition|5_2024-02-09T12-11-50.137118.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-02-09T12-11-50.137118.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-02-09T12-11-50.137118.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-02-09T12-11-50.137118.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-02-09T12-11-50.137118.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-02-09T12-11-50.137118.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-02-09T12-11-50.137118.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-02-09T12-11-50.137118.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-02-09T12-11-50.137118.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-02-09T12-11-50.137118.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-09T12-11-50.137118.parquet", "**/details_harness|hendrycksTest-virology|5_2024-02-09T12-11-50.137118.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-02-09T12-11-50.137118.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-09T12-11-50.137118.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-02-09T12-11-50.137118.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-02-09T12-11-50.137118.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-02-09T12-11-50.137118.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-09T12-11-50.137118.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-02-09T12-11-50.137118.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-02-09T12-11-50.137118.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-02-09T12-11-50.137118.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-02-09T12-11-50.137118.parquet", 
"**/details_harness|hendrycksTest-college_medicine|5_2024-02-09T12-11-50.137118.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-02-09T12-11-50.137118.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-02-09T12-11-50.137118.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-09T12-11-50.137118.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-02-09T12-11-50.137118.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-09T12-11-50.137118.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-09T12-11-50.137118.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-02-09T12-11-50.137118.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-02-09T12-11-50.137118.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-02-09T12-11-50.137118.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-09T12-11-50.137118.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-09T12-11-50.137118.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-09T12-11-50.137118.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-02-09T12-11-50.137118.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-09T12-11-50.137118.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-09T12-11-50.137118.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-09T12-11-50.137118.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-09T12-11-50.137118.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-02-09T12-11-50.137118.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-09T12-11-50.137118.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-09T12-11-50.137118.parquet", 
"**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-09T12-11-50.137118.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-09T12-11-50.137118.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-02-09T12-11-50.137118.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-02-09T12-11-50.137118.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-02-09T12-11-50.137118.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-02-09T12-11-50.137118.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-09T12-11-50.137118.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-02-09T12-11-50.137118.parquet", "**/details_harness|hendrycksTest-management|5_2024-02-09T12-11-50.137118.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-02-09T12-11-50.137118.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-02-09T12-11-50.137118.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-02-09T12-11-50.137118.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-02-09T12-11-50.137118.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-09T12-11-50.137118.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-02-09T12-11-50.137118.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-02-09T12-11-50.137118.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-02-09T12-11-50.137118.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-02-09T12-11-50.137118.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-02-09T12-11-50.137118.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-02-09T12-11-50.137118.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-02-09T12-11-50.137118.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-02-09T12-11-50.137118.parquet", 
"**/details_harness|hendrycksTest-security_studies|5_2024-02-09T12-11-50.137118.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-02-09T12-11-50.137118.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-09T12-11-50.137118.parquet", "**/details_harness|hendrycksTest-virology|5_2024-02-09T12-11-50.137118.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-02-09T12-11-50.137118.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2024_02_09T12_11_50.137118", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-09T12-11-50.137118.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-09T12-11-50.137118.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2024_02_09T12_11_50.137118", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-02-09T12-11-50.137118.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-02-09T12-11-50.137118.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2024_02_09T12_11_50.137118", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-02-09T12-11-50.137118.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-02-09T12-11-50.137118.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2024_02_09T12_11_50.137118", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-02-09T12-11-50.137118.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-02-09T12-11-50.137118.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2024_02_09T12_11_50.137118", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-09T12-11-50.137118.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-09T12-11-50.137118.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2024_02_09T12_11_50.137118", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-02-09T12-11-50.137118.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-02-09T12-11-50.137118.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2024_02_09T12_11_50.137118", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-02-09T12-11-50.137118.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-02-09T12-11-50.137118.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2024_02_09T12_11_50.137118", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-02-09T12-11-50.137118.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-02-09T12-11-50.137118.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2024_02_09T12_11_50.137118", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-02-09T12-11-50.137118.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-02-09T12-11-50.137118.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2024_02_09T12_11_50.137118", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-02-09T12-11-50.137118.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-02-09T12-11-50.137118.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2024_02_09T12_11_50.137118", "path": 
["**/details_harness|hendrycksTest-college_physics|5_2024-02-09T12-11-50.137118.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-02-09T12-11-50.137118.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2024_02_09T12_11_50.137118", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-02-09T12-11-50.137118.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-02-09T12-11-50.137118.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2024_02_09T12_11_50.137118", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-09T12-11-50.137118.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-09T12-11-50.137118.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2024_02_09T12_11_50.137118", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-02-09T12-11-50.137118.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-02-09T12-11-50.137118.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2024_02_09T12_11_50.137118", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-09T12-11-50.137118.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-09T12-11-50.137118.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2024_02_09T12_11_50.137118", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-09T12-11-50.137118.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-09T12-11-50.137118.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", 
"data_files": [{"split": "2024_02_09T12_11_50.137118", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-02-09T12-11-50.137118.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-02-09T12-11-50.137118.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2024_02_09T12_11_50.137118", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-02-09T12-11-50.137118.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-02-09T12-11-50.137118.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2024_02_09T12_11_50.137118", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-02-09T12-11-50.137118.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-02-09T12-11-50.137118.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2024_02_09T12_11_50.137118", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-09T12-11-50.137118.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-09T12-11-50.137118.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2024_02_09T12_11_50.137118", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-09T12-11-50.137118.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-09T12-11-50.137118.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2024_02_09T12_11_50.137118", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-09T12-11-50.137118.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-09T12-11-50.137118.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2024_02_09T12_11_50.137118", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-02-09T12-11-50.137118.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-02-09T12-11-50.137118.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2024_02_09T12_11_50.137118", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-09T12-11-50.137118.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-09T12-11-50.137118.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2024_02_09T12_11_50.137118", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-09T12-11-50.137118.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-09T12-11-50.137118.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2024_02_09T12_11_50.137118", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-09T12-11-50.137118.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-09T12-11-50.137118.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2024_02_09T12_11_50.137118", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-09T12-11-50.137118.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-09T12-11-50.137118.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2024_02_09T12_11_50.137118", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-02-09T12-11-50.137118.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-02-09T12-11-50.137118.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2024_02_09T12_11_50.137118", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-09T12-11-50.137118.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-09T12-11-50.137118.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2024_02_09T12_11_50.137118", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-09T12-11-50.137118.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-09T12-11-50.137118.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2024_02_09T12_11_50.137118", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-09T12-11-50.137118.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-09T12-11-50.137118.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2024_02_09T12_11_50.137118", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-09T12-11-50.137118.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-09T12-11-50.137118.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2024_02_09T12_11_50.137118", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-02-09T12-11-50.137118.parquet"]}, {"split": "latest", 
"path": ["**/details_harness|hendrycksTest-human_aging|5_2024-02-09T12-11-50.137118.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2024_02_09T12_11_50.137118", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-02-09T12-11-50.137118.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-02-09T12-11-50.137118.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2024_02_09T12_11_50.137118", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-02-09T12-11-50.137118.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-02-09T12-11-50.137118.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2024_02_09T12_11_50.137118", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-02-09T12-11-50.137118.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-02-09T12-11-50.137118.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2024_02_09T12_11_50.137118", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-09T12-11-50.137118.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-09T12-11-50.137118.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2024_02_09T12_11_50.137118", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-02-09T12-11-50.137118.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-02-09T12-11-50.137118.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2024_02_09T12_11_50.137118", "path": ["**/details_harness|hendrycksTest-management|5_2024-02-09T12-11-50.137118.parquet"]}, 
{"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2024-02-09T12-11-50.137118.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2024_02_09T12_11_50.137118", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-02-09T12-11-50.137118.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-02-09T12-11-50.137118.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2024_02_09T12_11_50.137118", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-02-09T12-11-50.137118.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-02-09T12-11-50.137118.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2024_02_09T12_11_50.137118", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-02-09T12-11-50.137118.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-02-09T12-11-50.137118.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2024_02_09T12_11_50.137118", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-02-09T12-11-50.137118.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-02-09T12-11-50.137118.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2024_02_09T12_11_50.137118", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-09T12-11-50.137118.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-09T12-11-50.137118.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2024_02_09T12_11_50.137118", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-02-09T12-11-50.137118.parquet"]}, {"split": "latest", 
"path": ["**/details_harness|hendrycksTest-nutrition|5_2024-02-09T12-11-50.137118.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2024_02_09T12_11_50.137118", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-02-09T12-11-50.137118.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-02-09T12-11-50.137118.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2024_02_09T12_11_50.137118", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-02-09T12-11-50.137118.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-02-09T12-11-50.137118.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2024_02_09T12_11_50.137118", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-02-09T12-11-50.137118.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-02-09T12-11-50.137118.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2024_02_09T12_11_50.137118", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-02-09T12-11-50.137118.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-02-09T12-11-50.137118.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2024_02_09T12_11_50.137118", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-02-09T12-11-50.137118.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-02-09T12-11-50.137118.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2024_02_09T12_11_50.137118", "path": 
["**/details_harness|hendrycksTest-professional_psychology|5_2024-02-09T12-11-50.137118.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-02-09T12-11-50.137118.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2024_02_09T12_11_50.137118", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-02-09T12-11-50.137118.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-02-09T12-11-50.137118.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2024_02_09T12_11_50.137118", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-02-09T12-11-50.137118.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-02-09T12-11-50.137118.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2024_02_09T12_11_50.137118", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-02-09T12-11-50.137118.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-02-09T12-11-50.137118.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2024_02_09T12_11_50.137118", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-09T12-11-50.137118.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-09T12-11-50.137118.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2024_02_09T12_11_50.137118", "path": ["**/details_harness|hendrycksTest-virology|5_2024-02-09T12-11-50.137118.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2024-02-09T12-11-50.137118.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2024_02_09T12_11_50.137118", 
"path": ["**/details_harness|hendrycksTest-world_religions|5_2024-02-09T12-11-50.137118.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-02-09T12-11-50.137118.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2024_02_09T12_11_50.137118", "path": ["**/details_harness|truthfulqa:mc|0_2024-02-09T12-11-50.137118.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2024-02-09T12-11-50.137118.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2024_02_09T12_11_50.137118", "path": ["**/details_harness|winogrande|5_2024-02-09T12-11-50.137118.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2024-02-09T12-11-50.137118.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2024_02_09T12_11_50.137118", "path": ["results_2024-02-09T12-11-50.137118.parquet"]}, {"split": "latest", "path": ["results_2024-02-09T12-11-50.137118.parquet"]}]}]}
2024-02-09T12:14:32+00:00
[]
[]
TAGS #region-us
# Dataset Card for Evaluation run of paulml/NeuralOmniBeagleMBX-v3-7B Dataset automatically created during the evaluation run of model paulml/NeuralOmniBeagleMBX-v3-7B on the Open LLM Leaderboard. The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard). To load the details from a run, you can for instance do the following: ## Latest results These are the latest results from run 2024-02-09T12:11:50.137118 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval): ## Dataset Details ### Dataset Description - Curated by: - Funded by [optional]: - Shared by [optional]: - Language(s) (NLP): - License: ### Dataset Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Out-of-Scope Use ## Dataset Structure ## Dataset Creation ### Curation Rationale ### Source Data #### Data Collection and Processing #### Who are the source data producers? ### Annotations [optional] #### Annotation process #### Who are the annotators? #### Personal and Sensitive Information ## Bias, Risks, and Limitations ### Recommendations Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Dataset Card Authors [optional] ## Dataset Card Contact
[ "# Dataset Card for Evaluation run of paulml/NeuralOmniBeagleMBX-v3-7B\n\n\n\nDataset automatically created during the evaluation run of model paulml/NeuralOmniBeagleMBX-v3-7B on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2024-02-09T12:11:50.137118(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. 
More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Evaluation run of paulml/NeuralOmniBeagleMBX-v3-7B\n\n\n\nDataset automatically created during the evaluation run of model paulml/NeuralOmniBeagleMBX-v3-7B on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2024-02-09T12:11:50.137118(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. 
More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
7b786def7db86f171a7ab57111758a5a6da0437a
# JMMLU Japanese Massive Multitask Language Understanding Benchmark JMMLU is a four-choice question set consisting of Japanese-translated questions of a portion of MMLU ([Paper](https://arxiv.org/abs/2009.03300), [Github](https://github.com/hendrycks/test)) (Translated questions) and questions based on unique Japanese cultural context (Japanese questions). It is designed to assess the performance of large language models in Japanese. For the translated questions, a maximum of 150 questions from each of the 57 MMLU tasks (subjects) were selected and first machine-translated into Japanese. Next, the translators checked the machine translations and removed questions and tasks that were difficult to translate, irrelevant, or inconsistent with Japanese culture. The remaining questions were modified to make them fluent. The Japanese questions are based on school subjects, such as Japanese civics and history, and are manually created by Japanese teachers. The format is the same as MMLU: ``` Question, Choice A, Choice B, Choice C, Choice D, Answer ``` [Github](https://github.com/nlp-waseda/JMMLU) JMMLU consists of 7,536 questions in the following 56 tasks (subjects).
| Japanese Task Name | English Task Name | Number | |---|---|---:| | 専門医学 | professional_medicine | 150 | | 専門心理学 | professional_psychology | 150 | | 専門会計 | professional_accounting | 150 | | 哲学 | philosophy | 150 | | 雑学 | miscellaneous | 150 | | 医学遺伝学 | medical_genetics | 99 | | 形式論理 | formal_logic | 125 | | 先史学 | prehistory | 150 | | 天文学 | astronomy | 148 | | 熟語 | japanese_idiom | 150 | | 世界宗教 | world_religions | 147 | | 世界事実 | global_facts | 97 | | 世界史 | world_history | 150 | | 社会学 | sociology | 150 | | 栄養学 | nutrition | 149 | | 日本史 | japanese_history | 150 | | 日本地理 | japanese_geography | 139 | | 人間の老化 | human_aging | 150 | | 論理学 | logical_fallacies | 150 | | 倫理的議論 | moral_disputes | 148 | | 臨床知識 | clinical_knowledge | 150 | | 経営学 | management | 102 | | 解剖学 | anatomy | 132 | | 計量経済学 | econometrics | 113 | | 機械学習 | machine_learning | 111 | | 国際法 | international_law | 120 | | 公民 | japanese_civics | 150 | | 公共関係 | public_relations | 109 | | 高校心理学 | high_school_psychology | 150 | | 高校物理 | high_school_physics | 150 | | 高校統計学 | high_school_statistics | 150 | | 高校数学 | high_school_mathematics | 150 | | 高校生物学 | high_school_biology | 148 | | 高校情報科学 | high_school_computer_science | 98 | | 高校化学 | high_school_chemistry | 149 | | 高校地理 | high_school_geography | 150 | | 高校ヨーロッパ史 | high_school_european_history | 150 | | 高校ミクロ経済学 | high_school_microeconomics | 149 | | 高校マクロ経済学 | high_school_macroeconomics | 148 | | 概念物理学 | conceptual_physics | 150 | | 法理学 | jurisprudence | 107 | | 電気工学 | electrical_engineering | 144 | | 大学医学 | college_medicine | 150 | | 大学物理 | college_physics | 100 | | 大学数学 | college_mathematics | 99 | | 大学生物学 | college_biology | 143 | | 大学化学 | college_chemistry | 99 | | 大学コンピュータ科学 | college_computer_science | 99 | | 初等数学 | elementary_mathematics | 150 | | 抽象代数 | abstract_algebra | 99 | | マーケティング | marketing | 150 | | ビジネス倫理 | business_ethics | 86 | | セクシュアリティ | human_sexuality | 130 | | セキュリティ研究 | security_studies | 150 | | コンピュータセキュリティ | computer_security | 99 | 
| ウイルス学 | virology | 150 | The copyrights for Japanese and World History belong to STEP Corporation. Commercial use other than for research and evaluation of language models is prohibited. The copyrights for Japanese idioms, Japanese civics, and Japanese geography belong to New Style Cram School VIST. Commercial use is allowed only for research and evaluation of language models. This work is licensed under CC BY-NC-ND 4.0 # Acknowledgment We express our gratitude to RIKEN for their support in the translation of MMLU. We also acknowledge the contributions from STEP Corporation, who provided materials on Japanese and World History, and from New Style Cram School VIST, who supplied resources on japanese_idioms, japanese_civics, and japanese_geography.
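Each row follows the comma-separated MMLU layout shown above (question, four choices, answer letter), so a line can be parsed with the Python standard library alone; a minimal sketch (the sample row is illustrative and not taken from the benchmark):

```python
import csv
import io

# One row in the JMMLU/MMLU layout: question, four choices, answer letter.
sample = '日本の首都はどこか,大阪,京都,東京,名古屋,C\n'

question, a, b, c, d, answer = next(csv.reader(io.StringIO(sample)))
choices = {"A": a, "B": b, "C": c, "D": d}
print(choices[answer])  # the correct choice for this row
```

The same unpacking works row by row over a full per-task CSV file opened with `csv.reader`.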
nlp-waseda/JMMLU
[ "arxiv:2009.03300", "region:us" ]
2024-02-09T12:19:13+00:00
{}
2024-02-10T07:11:21+00:00
[ "2009.03300" ]
[]
TAGS #arxiv-2009.03300 #region-us
JMMLU ===== Japanese Massive Multitask Language Understanding Benchmark JMMLU is a four-choice question set consisting of Japanese-translated questions of a portion of MMLU (Paper, Github) (Translated questions) and questions based on unique Japanese cultural context (Japanese questions). It is designed to assess the performance of large language models in Japanese. For the translated questions, a maximum of 150 questions from each of the 57 MMLU tasks (subjects) were selected and first machine-translated into Japanese. Next, the translators checked the machine translations and removed questions and tasks that were difficult to translate, irrelevant, or inconsistent with the Japanese culture. The remaining questions were modified to make them fluent. The Japanese questions are based on school subjects, such as Japanese civics and history, and are manually created by Japanese teachers. The format is the same as MMLU: Github The JMMLU consists of 7,536 questions in the following 56 tasks (subjects). The copyrights for Japanese and World History belongs to STEP Corporation. Commercial use other than for research and evaluation of language models is prohibited. The copyrights for Japanese idioms, Japansese civics, and Japanese geography belong to New Style Cram School VIST. Commercial use is allowed only for research and evaluation of language models. This work is licensed under CC BY-NC-ND 4.0 Acknowledgment ============== We express our gratitude to the RIKEN for their support in the translation of MMLU. We also acknowledge the contributions from Step Corporation, who provided materials on Japanese and World History, and from New Style Cram School VIST, who supplied resources on japanese\_idioms, japansese\_civics, and japanese\_geography.
[]
[ "TAGS\n#arxiv-2009.03300 #region-us \n" ]
ae57258299132e3bd0885e2fe461e1645999de63
# ⛵ Midjourney Images Dataset This is a dataset of images generated with Midjourney V5/V6. ## Dataset parameters 1. **Count of images**: ~10,000 2. **Zip file with dataset**: True 3. **Captions with images**: False ## License License for this dataset: [MIT](https://www.mit.edu/~amini/LICENSE.md) ## Use in *datasets* 1. ```bash pip install -q datasets ``` 2. ```py from datasets import load_dataset dataset = load_dataset( "ehristoforu/midjourney-images", revision="main" ) ``` #### *Enjoy with this dataset!*
ehristoforu/midjourney-images
[ "task_categories:text-to-image", "task_categories:image-to-image", "size_categories:10K<n<100K", "license:mit", "midjourney-v6", "midjourney", "midjourney-images", "images", "croissant", "region:us" ]
2024-02-09T12:20:13+00:00
{"license": "mit", "size_categories": ["10K<n<100K"], "task_categories": ["text-to-image", "image-to-image"], "tags": ["midjourney-v6", "midjourney", "midjourney-images", "images", "croissant"]}
2024-02-11T14:02:57+00:00
[]
[]
TAGS #task_categories-text-to-image #task_categories-image-to-image #size_categories-10K<n<100K #license-mit #midjourney-v6 #midjourney #midjourney-images #images #croissant #region-us
# Midjourney Images Dataset This is datase with images made by Midjourney V5/V6. ## Dataset parameters 1. Count of images: ~10.000 2. Zip file with dataset: True 3. Captions with images: False ## License License for this dataset: MIT ## Use in *datasets* 1. 2. #### *Enjoy with this dataset!*
[ "# Midjourney Images Dataset\n\nThis is datase with images made by Midjourney V5/V6.", "## Dataset parameters\n1. Count of images: ~10.000\n2. Zip file with dataset: True\n3. Captions with images: False", "## License\n\nLicense for this dataset: MIT", "## Use in *datasets*\n\n1. \n2.", "#### *Enjoy with this dataset!*" ]
[ "TAGS\n#task_categories-text-to-image #task_categories-image-to-image #size_categories-10K<n<100K #license-mit #midjourney-v6 #midjourney #midjourney-images #images #croissant #region-us \n", "# Midjourney Images Dataset\n\nThis is datase with images made by Midjourney V5/V6.", "## Dataset parameters\n1. Count of images: ~10.000\n2. Zip file with dataset: True\n3. Captions with images: False", "## License\n\nLicense for this dataset: MIT", "## Use in *datasets*\n\n1. \n2.", "#### *Enjoy with this dataset!*" ]
80efce5f1665a7a3dd60a487a6cac5faefdce0b8
# Dataset Card for Evaluation run of jondurbin/bagel-7b-v0.4 <!-- Provide a quick summary of the dataset. --> Dataset automatically created during the evaluation run of model [jondurbin/bagel-7b-v0.4](https://huggingface.co/jondurbin/bagel-7b-v0.4) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)). To load the details from a run, you can for instance do the following: ```python from datasets import load_dataset data = load_dataset("open-llm-leaderboard/details_jondurbin__bagel-7b-v0.4", "harness_winogrande_5", split="train") ``` ## Latest results These are the [latest results from run 2024-02-09T12:18:51.743149](https://huggingface.co/datasets/open-llm-leaderboard/details_jondurbin__bagel-7b-v0.4/blob/main/results_2024-02-09T12-18-51.743149.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks.
You find each in the results and the "latest" split for each eval): ```python { "all": { "acc": 0.6224226507357447, "acc_stderr": 0.03300491139206905, "acc_norm": 0.6261475680953128, "acc_norm_stderr": 0.03367429602929055, "mc1": 0.3769889840881273, "mc1_stderr": 0.016965517578930354, "mc2": 0.5420385268751854, "mc2_stderr": 0.015218334200579092 }, "harness|arc:challenge|25": { "acc": 0.6015358361774744, "acc_stderr": 0.014306946052735567, "acc_norm": 0.6356655290102389, "acc_norm_stderr": 0.014063260279882419 }, "harness|hellaswag|10": { "acc": 0.6235809599681338, "acc_stderr": 0.004834969412883641, "acc_norm": 0.826727743477395, "acc_norm_stderr": 0.0037770896070954763 }, "harness|hendrycksTest-abstract_algebra|5": { "acc": 0.33, "acc_stderr": 0.04725815626252606, "acc_norm": 0.33, "acc_norm_stderr": 0.04725815626252606 }, "harness|hendrycksTest-anatomy|5": { "acc": 0.5925925925925926, "acc_stderr": 0.04244633238353227, "acc_norm": 0.5925925925925926, "acc_norm_stderr": 0.04244633238353227 }, "harness|hendrycksTest-astronomy|5": { "acc": 0.6578947368421053, "acc_stderr": 0.03860731599316091, "acc_norm": 0.6578947368421053, "acc_norm_stderr": 0.03860731599316091 }, "harness|hendrycksTest-business_ethics|5": { "acc": 0.54, "acc_stderr": 0.05009082659620332, "acc_norm": 0.54, "acc_norm_stderr": 0.05009082659620332 }, "harness|hendrycksTest-clinical_knowledge|5": { "acc": 0.7094339622641509, "acc_stderr": 0.027943219989337135, "acc_norm": 0.7094339622641509, "acc_norm_stderr": 0.027943219989337135 }, "harness|hendrycksTest-college_biology|5": { "acc": 0.6597222222222222, "acc_stderr": 0.039621355734862175, "acc_norm": 0.6597222222222222, "acc_norm_stderr": 0.039621355734862175 }, "harness|hendrycksTest-college_chemistry|5": { "acc": 0.43, "acc_stderr": 0.049756985195624284, "acc_norm": 0.43, "acc_norm_stderr": 0.049756985195624284 }, "harness|hendrycksTest-college_computer_science|5": { "acc": 0.51, "acc_stderr": 0.05024183937956911, "acc_norm": 0.51, 
"acc_norm_stderr": 0.05024183937956911 }, "harness|hendrycksTest-college_mathematics|5": { "acc": 0.37, "acc_stderr": 0.048523658709391, "acc_norm": 0.37, "acc_norm_stderr": 0.048523658709391 }, "harness|hendrycksTest-college_medicine|5": { "acc": 0.6069364161849711, "acc_stderr": 0.03724249595817731, "acc_norm": 0.6069364161849711, "acc_norm_stderr": 0.03724249595817731 }, "harness|hendrycksTest-college_physics|5": { "acc": 0.46078431372549017, "acc_stderr": 0.04959859966384181, "acc_norm": 0.46078431372549017, "acc_norm_stderr": 0.04959859966384181 }, "harness|hendrycksTest-computer_security|5": { "acc": 0.76, "acc_stderr": 0.042923469599092816, "acc_norm": 0.76, "acc_norm_stderr": 0.042923469599092816 }, "harness|hendrycksTest-conceptual_physics|5": { "acc": 0.5446808510638298, "acc_stderr": 0.03255525359340354, "acc_norm": 0.5446808510638298, "acc_norm_stderr": 0.03255525359340354 }, "harness|hendrycksTest-econometrics|5": { "acc": 0.4649122807017544, "acc_stderr": 0.04692008381368909, "acc_norm": 0.4649122807017544, "acc_norm_stderr": 0.04692008381368909 }, "harness|hendrycksTest-electrical_engineering|5": { "acc": 0.5724137931034483, "acc_stderr": 0.041227371113703316, "acc_norm": 0.5724137931034483, "acc_norm_stderr": 0.041227371113703316 }, "harness|hendrycksTest-elementary_mathematics|5": { "acc": 0.40476190476190477, "acc_stderr": 0.025279850397404904, "acc_norm": 0.40476190476190477, "acc_norm_stderr": 0.025279850397404904 }, "harness|hendrycksTest-formal_logic|5": { "acc": 0.4444444444444444, "acc_stderr": 0.044444444444444495, "acc_norm": 0.4444444444444444, "acc_norm_stderr": 0.044444444444444495 }, "harness|hendrycksTest-global_facts|5": { "acc": 0.4, "acc_stderr": 0.049236596391733084, "acc_norm": 0.4, "acc_norm_stderr": 0.049236596391733084 }, "harness|hendrycksTest-high_school_biology|5": { "acc": 0.7645161290322581, "acc_stderr": 0.02413763242933771, "acc_norm": 0.7645161290322581, "acc_norm_stderr": 0.02413763242933771 }, 
"harness|hendrycksTest-high_school_chemistry|5": { "acc": 0.5123152709359606, "acc_stderr": 0.035169204442208966, "acc_norm": 0.5123152709359606, "acc_norm_stderr": 0.035169204442208966 }, "harness|hendrycksTest-high_school_computer_science|5": { "acc": 0.62, "acc_stderr": 0.04878317312145633, "acc_norm": 0.62, "acc_norm_stderr": 0.04878317312145633 }, "harness|hendrycksTest-high_school_european_history|5": { "acc": 0.7515151515151515, "acc_stderr": 0.033744026441394036, "acc_norm": 0.7515151515151515, "acc_norm_stderr": 0.033744026441394036 }, "harness|hendrycksTest-high_school_geography|5": { "acc": 0.7828282828282829, "acc_stderr": 0.029376616484945633, "acc_norm": 0.7828282828282829, "acc_norm_stderr": 0.029376616484945633 }, "harness|hendrycksTest-high_school_government_and_politics|5": { "acc": 0.8497409326424871, "acc_stderr": 0.02578772318072387, "acc_norm": 0.8497409326424871, "acc_norm_stderr": 0.02578772318072387 }, "harness|hendrycksTest-high_school_macroeconomics|5": { "acc": 0.6153846153846154, "acc_stderr": 0.024666744915187208, "acc_norm": 0.6153846153846154, "acc_norm_stderr": 0.024666744915187208 }, "harness|hendrycksTest-high_school_mathematics|5": { "acc": 0.3925925925925926, "acc_stderr": 0.02977384701253297, "acc_norm": 0.3925925925925926, "acc_norm_stderr": 0.02977384701253297 }, "harness|hendrycksTest-high_school_microeconomics|5": { "acc": 0.6470588235294118, "acc_stderr": 0.031041941304059278, "acc_norm": 0.6470588235294118, "acc_norm_stderr": 0.031041941304059278 }, "harness|hendrycksTest-high_school_physics|5": { "acc": 0.3973509933774834, "acc_stderr": 0.039955240076816806, "acc_norm": 0.3973509933774834, "acc_norm_stderr": 0.039955240076816806 }, "harness|hendrycksTest-high_school_psychology|5": { "acc": 0.8256880733944955, "acc_stderr": 0.016265675632010344, "acc_norm": 0.8256880733944955, "acc_norm_stderr": 0.016265675632010344 }, "harness|hendrycksTest-high_school_statistics|5": { "acc": 0.5509259259259259, "acc_stderr": 
0.03392238405321617, "acc_norm": 0.5509259259259259, "acc_norm_stderr": 0.03392238405321617 }, "harness|hendrycksTest-high_school_us_history|5": { "acc": 0.7745098039215687, "acc_stderr": 0.02933116229425174, "acc_norm": 0.7745098039215687, "acc_norm_stderr": 0.02933116229425174 }, "harness|hendrycksTest-high_school_world_history|5": { "acc": 0.7763713080168776, "acc_stderr": 0.027123298205229966, "acc_norm": 0.7763713080168776, "acc_norm_stderr": 0.027123298205229966 }, "harness|hendrycksTest-human_aging|5": { "acc": 0.6502242152466368, "acc_stderr": 0.03200736719484503, "acc_norm": 0.6502242152466368, "acc_norm_stderr": 0.03200736719484503 }, "harness|hendrycksTest-human_sexuality|5": { "acc": 0.7404580152671756, "acc_stderr": 0.03844876139785271, "acc_norm": 0.7404580152671756, "acc_norm_stderr": 0.03844876139785271 }, "harness|hendrycksTest-international_law|5": { "acc": 0.7355371900826446, "acc_stderr": 0.04026187527591205, "acc_norm": 0.7355371900826446, "acc_norm_stderr": 0.04026187527591205 }, "harness|hendrycksTest-jurisprudence|5": { "acc": 0.7685185185185185, "acc_stderr": 0.04077494709252626, "acc_norm": 0.7685185185185185, "acc_norm_stderr": 0.04077494709252626 }, "harness|hendrycksTest-logical_fallacies|5": { "acc": 0.7300613496932515, "acc_stderr": 0.03487825168497892, "acc_norm": 0.7300613496932515, "acc_norm_stderr": 0.03487825168497892 }, "harness|hendrycksTest-machine_learning|5": { "acc": 0.4732142857142857, "acc_stderr": 0.047389751192741546, "acc_norm": 0.4732142857142857, "acc_norm_stderr": 0.047389751192741546 }, "harness|hendrycksTest-management|5": { "acc": 0.7864077669902912, "acc_stderr": 0.04058042015646034, "acc_norm": 0.7864077669902912, "acc_norm_stderr": 0.04058042015646034 }, "harness|hendrycksTest-marketing|5": { "acc": 0.8846153846153846, "acc_stderr": 0.020930193185179337, "acc_norm": 0.8846153846153846, "acc_norm_stderr": 0.020930193185179337 }, "harness|hendrycksTest-medical_genetics|5": { "acc": 0.67, "acc_stderr": 
0.04725815626252607, "acc_norm": 0.67, "acc_norm_stderr": 0.04725815626252607 }, "harness|hendrycksTest-miscellaneous|5": { "acc": 0.8135376756066411, "acc_stderr": 0.013927751372001501, "acc_norm": 0.8135376756066411, "acc_norm_stderr": 0.013927751372001501 }, "harness|hendrycksTest-moral_disputes|5": { "acc": 0.638728323699422, "acc_stderr": 0.02586220185227789, "acc_norm": 0.638728323699422, "acc_norm_stderr": 0.02586220185227789 }, "harness|hendrycksTest-moral_scenarios|5": { "acc": 0.2927374301675978, "acc_stderr": 0.015218109544410184, "acc_norm": 0.2927374301675978, "acc_norm_stderr": 0.015218109544410184 }, "harness|hendrycksTest-nutrition|5": { "acc": 0.7058823529411765, "acc_stderr": 0.02609016250427906, "acc_norm": 0.7058823529411765, "acc_norm_stderr": 0.02609016250427906 }, "harness|hendrycksTest-philosophy|5": { "acc": 0.6688102893890675, "acc_stderr": 0.026730620728004906, "acc_norm": 0.6688102893890675, "acc_norm_stderr": 0.026730620728004906 }, "harness|hendrycksTest-prehistory|5": { "acc": 0.6975308641975309, "acc_stderr": 0.025557653981868055, "acc_norm": 0.6975308641975309, "acc_norm_stderr": 0.025557653981868055 }, "harness|hendrycksTest-professional_accounting|5": { "acc": 0.44680851063829785, "acc_stderr": 0.029658235097666907, "acc_norm": 0.44680851063829785, "acc_norm_stderr": 0.029658235097666907 }, "harness|hendrycksTest-professional_law|5": { "acc": 0.4276401564537158, "acc_stderr": 0.012635799922765846, "acc_norm": 0.4276401564537158, "acc_norm_stderr": 0.012635799922765846 }, "harness|hendrycksTest-professional_medicine|5": { "acc": 0.6617647058823529, "acc_stderr": 0.028739328513983572, "acc_norm": 0.6617647058823529, "acc_norm_stderr": 0.028739328513983572 }, "harness|hendrycksTest-professional_psychology|5": { "acc": 0.6421568627450981, "acc_stderr": 0.019393058402355442, "acc_norm": 0.6421568627450981, "acc_norm_stderr": 0.019393058402355442 }, "harness|hendrycksTest-public_relations|5": { "acc": 0.6818181818181818, "acc_stderr": 
0.04461272175910509, "acc_norm": 0.6818181818181818, "acc_norm_stderr": 0.04461272175910509 }, "harness|hendrycksTest-security_studies|5": { "acc": 0.6938775510204082, "acc_stderr": 0.02950489645459596, "acc_norm": 0.6938775510204082, "acc_norm_stderr": 0.02950489645459596 }, "harness|hendrycksTest-sociology|5": { "acc": 0.8258706467661692, "acc_stderr": 0.026814951200421603, "acc_norm": 0.8258706467661692, "acc_norm_stderr": 0.026814951200421603 }, "harness|hendrycksTest-us_foreign_policy|5": { "acc": 0.8, "acc_stderr": 0.04020151261036845, "acc_norm": 0.8, "acc_norm_stderr": 0.04020151261036845 }, "harness|hendrycksTest-virology|5": { "acc": 0.5060240963855421, "acc_stderr": 0.03892212195333045, "acc_norm": 0.5060240963855421, "acc_norm_stderr": 0.03892212195333045 }, "harness|hendrycksTest-world_religions|5": { "acc": 0.8187134502923976, "acc_stderr": 0.029547741687640044, "acc_norm": 0.8187134502923976, "acc_norm_stderr": 0.029547741687640044 }, "harness|truthfulqa:mc|0": { "mc1": 0.3769889840881273, "mc1_stderr": 0.016965517578930354, "mc2": 0.5420385268751854, "mc2_stderr": 0.015218334200579092 }, "harness|winogrande|5": { "acc": 0.7892659826361483, "acc_stderr": 0.011462046419710686 }, "harness|gsm8k|5": { "acc": 0.47308567096285065, "acc_stderr": 0.013752517189717465 } } ``` ## Dataset Details ### Dataset Description <!-- Provide a longer summary of what this dataset is. --> - **Curated by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] ### Dataset Sources [optional] <!-- Provide the basic links for the dataset. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the dataset is intended to be used. 
--> ### Direct Use <!-- This section describes suitable use cases for the dataset. --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. --> [More Information Needed] ## Dataset Structure <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> [More Information Needed] ## Dataset Creation ### Curation Rationale <!-- Motivation for the creation of this dataset. --> [More Information Needed] ### Source Data <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). --> #### Data Collection and Processing <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. --> [More Information Needed] #### Who are the source data producers? <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. --> [More Information Needed] ### Annotations [optional] <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. --> #### Annotation process <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. --> [More Information Needed] #### Who are the annotators? <!-- This section describes the people or systems who created the annotations. 
--> [More Information Needed] #### Personal and Sensitive Information <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. ## Citation [optional] <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Dataset Card Authors [optional] [More Information Needed] ## Dataset Card Contact [More Information Needed]
open-llm-leaderboard/details_jondurbin__bagel-7b-v0.4
[ "region:us" ]
2024-02-09T12:21:09+00:00
{"pretty_name": "Evaluation run of jondurbin/bagel-7b-v0.4", "dataset_summary": "Dataset automatically created during the evaluation run of model [jondurbin/bagel-7b-v0.4](https://huggingface.co/jondurbin/bagel-7b-v0.4) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_jondurbin__bagel-7b-v0.4\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2024-02-09T12:18:51.743149](https://huggingface.co/datasets/open-llm-leaderboard/details_jondurbin__bagel-7b-v0.4/blob/main/results_2024-02-09T12-18-51.743149.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6224226507357447,\n \"acc_stderr\": 0.03300491139206905,\n \"acc_norm\": 0.6261475680953128,\n \"acc_norm_stderr\": 0.03367429602929055,\n \"mc1\": 0.3769889840881273,\n \"mc1_stderr\": 0.016965517578930354,\n \"mc2\": 0.5420385268751854,\n \"mc2_stderr\": 0.015218334200579092\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.6015358361774744,\n \"acc_stderr\": 0.014306946052735567,\n \"acc_norm\": 0.6356655290102389,\n \"acc_norm_stderr\": 0.014063260279882419\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6235809599681338,\n \"acc_stderr\": 0.004834969412883641,\n \"acc_norm\": 0.826727743477395,\n \"acc_norm_stderr\": 0.0037770896070954763\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.33,\n \"acc_stderr\": 0.04725815626252606,\n \"acc_norm\": 0.33,\n \"acc_norm_stderr\": 0.04725815626252606\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.5925925925925926,\n \"acc_stderr\": 0.04244633238353227,\n \"acc_norm\": 0.5925925925925926,\n \"acc_norm_stderr\": 0.04244633238353227\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.6578947368421053,\n \"acc_stderr\": 0.03860731599316091,\n \"acc_norm\": 0.6578947368421053,\n \"acc_norm_stderr\": 0.03860731599316091\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.54,\n \"acc_stderr\": 0.05009082659620332,\n \"acc_norm\": 0.54,\n \"acc_norm_stderr\": 0.05009082659620332\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.7094339622641509,\n \"acc_stderr\": 0.027943219989337135,\n \"acc_norm\": 0.7094339622641509,\n \"acc_norm_stderr\": 0.027943219989337135\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.6597222222222222,\n \"acc_stderr\": 0.039621355734862175,\n \"acc_norm\": 0.6597222222222222,\n \"acc_norm_stderr\": 0.039621355734862175\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.43,\n 
\"acc_stderr\": 0.049756985195624284,\n \"acc_norm\": 0.43,\n \"acc_norm_stderr\": 0.049756985195624284\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.51,\n \"acc_stderr\": 0.05024183937956911,\n \"acc_norm\": 0.51,\n \"acc_norm_stderr\": 0.05024183937956911\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.37,\n \"acc_stderr\": 0.048523658709391,\n \"acc_norm\": 0.37,\n \"acc_norm_stderr\": 0.048523658709391\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6069364161849711,\n \"acc_stderr\": 0.03724249595817731,\n \"acc_norm\": 0.6069364161849711,\n \"acc_norm_stderr\": 0.03724249595817731\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.46078431372549017,\n \"acc_stderr\": 0.04959859966384181,\n \"acc_norm\": 0.46078431372549017,\n \"acc_norm_stderr\": 0.04959859966384181\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.76,\n \"acc_stderr\": 0.042923469599092816,\n \"acc_norm\": 0.76,\n \"acc_norm_stderr\": 0.042923469599092816\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.5446808510638298,\n \"acc_stderr\": 0.03255525359340354,\n \"acc_norm\": 0.5446808510638298,\n \"acc_norm_stderr\": 0.03255525359340354\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.4649122807017544,\n \"acc_stderr\": 0.04692008381368909,\n \"acc_norm\": 0.4649122807017544,\n \"acc_norm_stderr\": 0.04692008381368909\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.5724137931034483,\n \"acc_stderr\": 0.041227371113703316,\n \"acc_norm\": 0.5724137931034483,\n \"acc_norm_stderr\": 0.041227371113703316\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.40476190476190477,\n \"acc_stderr\": 0.025279850397404904,\n \"acc_norm\": 0.40476190476190477,\n \"acc_norm_stderr\": 0.025279850397404904\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.4444444444444444,\n \"acc_stderr\": 
0.044444444444444495,\n \"acc_norm\": 0.4444444444444444,\n \"acc_norm_stderr\": 0.044444444444444495\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.4,\n \"acc_stderr\": 0.049236596391733084,\n \"acc_norm\": 0.4,\n \"acc_norm_stderr\": 0.049236596391733084\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7645161290322581,\n \"acc_stderr\": 0.02413763242933771,\n \"acc_norm\": 0.7645161290322581,\n \"acc_norm_stderr\": 0.02413763242933771\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.5123152709359606,\n \"acc_stderr\": 0.035169204442208966,\n \"acc_norm\": 0.5123152709359606,\n \"acc_norm_stderr\": 0.035169204442208966\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.62,\n \"acc_stderr\": 0.04878317312145633,\n \"acc_norm\": 0.62,\n \"acc_norm_stderr\": 0.04878317312145633\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.7515151515151515,\n \"acc_stderr\": 0.033744026441394036,\n \"acc_norm\": 0.7515151515151515,\n \"acc_norm_stderr\": 0.033744026441394036\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.7828282828282829,\n \"acc_stderr\": 0.029376616484945633,\n \"acc_norm\": 0.7828282828282829,\n \"acc_norm_stderr\": 0.029376616484945633\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.8497409326424871,\n \"acc_stderr\": 0.02578772318072387,\n \"acc_norm\": 0.8497409326424871,\n \"acc_norm_stderr\": 0.02578772318072387\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.6153846153846154,\n \"acc_stderr\": 0.024666744915187208,\n \"acc_norm\": 0.6153846153846154,\n \"acc_norm_stderr\": 0.024666744915187208\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.3925925925925926,\n \"acc_stderr\": 0.02977384701253297,\n \"acc_norm\": 0.3925925925925926,\n \"acc_norm_stderr\": 0.02977384701253297\n },\n 
\"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.6470588235294118,\n \"acc_stderr\": 0.031041941304059278,\n \"acc_norm\": 0.6470588235294118,\n \"acc_norm_stderr\": 0.031041941304059278\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.3973509933774834,\n \"acc_stderr\": 0.039955240076816806,\n \"acc_norm\": 0.3973509933774834,\n \"acc_norm_stderr\": 0.039955240076816806\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.8256880733944955,\n \"acc_stderr\": 0.016265675632010344,\n \"acc_norm\": 0.8256880733944955,\n \"acc_norm_stderr\": 0.016265675632010344\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.5509259259259259,\n \"acc_stderr\": 0.03392238405321617,\n \"acc_norm\": 0.5509259259259259,\n \"acc_norm_stderr\": 0.03392238405321617\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.7745098039215687,\n \"acc_stderr\": 0.02933116229425174,\n \"acc_norm\": 0.7745098039215687,\n \"acc_norm_stderr\": 0.02933116229425174\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.7763713080168776,\n \"acc_stderr\": 0.027123298205229966,\n \"acc_norm\": 0.7763713080168776,\n \"acc_norm_stderr\": 0.027123298205229966\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6502242152466368,\n \"acc_stderr\": 0.03200736719484503,\n \"acc_norm\": 0.6502242152466368,\n \"acc_norm_stderr\": 0.03200736719484503\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.7404580152671756,\n \"acc_stderr\": 0.03844876139785271,\n \"acc_norm\": 0.7404580152671756,\n \"acc_norm_stderr\": 0.03844876139785271\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.7355371900826446,\n \"acc_stderr\": 0.04026187527591205,\n \"acc_norm\": 0.7355371900826446,\n \"acc_norm_stderr\": 0.04026187527591205\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7685185185185185,\n \"acc_stderr\": 0.04077494709252626,\n 
\"acc_norm\": 0.7685185185185185,\n \"acc_norm_stderr\": 0.04077494709252626\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.7300613496932515,\n \"acc_stderr\": 0.03487825168497892,\n \"acc_norm\": 0.7300613496932515,\n \"acc_norm_stderr\": 0.03487825168497892\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.4732142857142857,\n \"acc_stderr\": 0.047389751192741546,\n \"acc_norm\": 0.4732142857142857,\n \"acc_norm_stderr\": 0.047389751192741546\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.7864077669902912,\n \"acc_stderr\": 0.04058042015646034,\n \"acc_norm\": 0.7864077669902912,\n \"acc_norm_stderr\": 0.04058042015646034\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8846153846153846,\n \"acc_stderr\": 0.020930193185179337,\n \"acc_norm\": 0.8846153846153846,\n \"acc_norm_stderr\": 0.020930193185179337\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.67,\n \"acc_stderr\": 0.04725815626252607,\n \"acc_norm\": 0.67,\n \"acc_norm_stderr\": 0.04725815626252607\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8135376756066411,\n \"acc_stderr\": 0.013927751372001501,\n \"acc_norm\": 0.8135376756066411,\n \"acc_norm_stderr\": 0.013927751372001501\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.638728323699422,\n \"acc_stderr\": 0.02586220185227789,\n \"acc_norm\": 0.638728323699422,\n \"acc_norm_stderr\": 0.02586220185227789\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.2927374301675978,\n \"acc_stderr\": 0.015218109544410184,\n \"acc_norm\": 0.2927374301675978,\n \"acc_norm_stderr\": 0.015218109544410184\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.7058823529411765,\n \"acc_stderr\": 0.02609016250427906,\n \"acc_norm\": 0.7058823529411765,\n \"acc_norm_stderr\": 0.02609016250427906\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.6688102893890675,\n \"acc_stderr\": 0.026730620728004906,\n 
\"acc_norm\": 0.6688102893890675,\n \"acc_norm_stderr\": 0.026730620728004906\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.6975308641975309,\n \"acc_stderr\": 0.025557653981868055,\n \"acc_norm\": 0.6975308641975309,\n \"acc_norm_stderr\": 0.025557653981868055\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.44680851063829785,\n \"acc_stderr\": 0.029658235097666907,\n \"acc_norm\": 0.44680851063829785,\n \"acc_norm_stderr\": 0.029658235097666907\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4276401564537158,\n \"acc_stderr\": 0.012635799922765846,\n \"acc_norm\": 0.4276401564537158,\n \"acc_norm_stderr\": 0.012635799922765846\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.6617647058823529,\n \"acc_stderr\": 0.028739328513983572,\n \"acc_norm\": 0.6617647058823529,\n \"acc_norm_stderr\": 0.028739328513983572\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.6421568627450981,\n \"acc_stderr\": 0.019393058402355442,\n \"acc_norm\": 0.6421568627450981,\n \"acc_norm_stderr\": 0.019393058402355442\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6818181818181818,\n \"acc_stderr\": 0.04461272175910509,\n \"acc_norm\": 0.6818181818181818,\n \"acc_norm_stderr\": 0.04461272175910509\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.6938775510204082,\n \"acc_stderr\": 0.02950489645459596,\n \"acc_norm\": 0.6938775510204082,\n \"acc_norm_stderr\": 0.02950489645459596\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8258706467661692,\n \"acc_stderr\": 0.026814951200421603,\n \"acc_norm\": 0.8258706467661692,\n \"acc_norm_stderr\": 0.026814951200421603\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.8,\n \"acc_stderr\": 0.04020151261036845,\n \"acc_norm\": 0.8,\n \"acc_norm_stderr\": 0.04020151261036845\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5060240963855421,\n 
\"acc_stderr\": 0.03892212195333045,\n \"acc_norm\": 0.5060240963855421,\n \"acc_norm_stderr\": 0.03892212195333045\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.8187134502923976,\n \"acc_stderr\": 0.029547741687640044,\n \"acc_norm\": 0.8187134502923976,\n \"acc_norm_stderr\": 0.029547741687640044\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.3769889840881273,\n \"mc1_stderr\": 0.016965517578930354,\n \"mc2\": 0.5420385268751854,\n \"mc2_stderr\": 0.015218334200579092\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7892659826361483,\n \"acc_stderr\": 0.011462046419710686\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.47308567096285065,\n \"acc_stderr\": 0.013752517189717465\n }\n}\n```", "repo_url": "https://huggingface.co/jondurbin/bagel-7b-v0.4", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2024_02_09T12_18_51.743149", "path": ["**/details_harness|arc:challenge|25_2024-02-09T12-18-51.743149.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2024-02-09T12-18-51.743149.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2024_02_09T12_18_51.743149", "path": ["**/details_harness|gsm8k|5_2024-02-09T12-18-51.743149.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2024-02-09T12-18-51.743149.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2024_02_09T12_18_51.743149", "path": ["**/details_harness|hellaswag|10_2024-02-09T12-18-51.743149.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2024-02-09T12-18-51.743149.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2024_02_09T12_18_51.743149", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-09T12-18-51.743149.parquet", 
"**/details_harness|hendrycksTest-anatomy|5_2024-02-09T12-18-51.743149.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-02-09T12-18-51.743149.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-02-09T12-18-51.743149.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-09T12-18-51.743149.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-02-09T12-18-51.743149.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-02-09T12-18-51.743149.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-02-09T12-18-51.743149.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-02-09T12-18-51.743149.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-02-09T12-18-51.743149.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-02-09T12-18-51.743149.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-02-09T12-18-51.743149.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-09T12-18-51.743149.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-02-09T12-18-51.743149.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-09T12-18-51.743149.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-09T12-18-51.743149.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-02-09T12-18-51.743149.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-02-09T12-18-51.743149.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-02-09T12-18-51.743149.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-09T12-18-51.743149.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-09T12-18-51.743149.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-09T12-18-51.743149.parquet", 
"**/details_harness|hendrycksTest-high_school_geography|5_2024-02-09T12-18-51.743149.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-09T12-18-51.743149.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-09T12-18-51.743149.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-09T12-18-51.743149.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-09T12-18-51.743149.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-02-09T12-18-51.743149.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-09T12-18-51.743149.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-09T12-18-51.743149.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-09T12-18-51.743149.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-09T12-18-51.743149.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-02-09T12-18-51.743149.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-02-09T12-18-51.743149.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-02-09T12-18-51.743149.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-02-09T12-18-51.743149.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-09T12-18-51.743149.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-02-09T12-18-51.743149.parquet", "**/details_harness|hendrycksTest-management|5_2024-02-09T12-18-51.743149.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-02-09T12-18-51.743149.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-02-09T12-18-51.743149.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-02-09T12-18-51.743149.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-02-09T12-18-51.743149.parquet", 
"**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-09T12-18-51.743149.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-02-09T12-18-51.743149.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-02-09T12-18-51.743149.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-02-09T12-18-51.743149.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-02-09T12-18-51.743149.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-02-09T12-18-51.743149.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-02-09T12-18-51.743149.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-02-09T12-18-51.743149.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-02-09T12-18-51.743149.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-02-09T12-18-51.743149.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-02-09T12-18-51.743149.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-09T12-18-51.743149.parquet", "**/details_harness|hendrycksTest-virology|5_2024-02-09T12-18-51.743149.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-02-09T12-18-51.743149.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-09T12-18-51.743149.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-02-09T12-18-51.743149.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-02-09T12-18-51.743149.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-02-09T12-18-51.743149.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-09T12-18-51.743149.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-02-09T12-18-51.743149.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-02-09T12-18-51.743149.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-02-09T12-18-51.743149.parquet", 
"**/details_harness|hendrycksTest-college_mathematics|5_2024-02-09T12-18-51.743149.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-02-09T12-18-51.743149.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-02-09T12-18-51.743149.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-02-09T12-18-51.743149.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-09T12-18-51.743149.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-02-09T12-18-51.743149.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-09T12-18-51.743149.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-09T12-18-51.743149.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-02-09T12-18-51.743149.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-02-09T12-18-51.743149.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-02-09T12-18-51.743149.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-09T12-18-51.743149.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-09T12-18-51.743149.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-09T12-18-51.743149.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-02-09T12-18-51.743149.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-09T12-18-51.743149.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-09T12-18-51.743149.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-09T12-18-51.743149.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-09T12-18-51.743149.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-02-09T12-18-51.743149.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-09T12-18-51.743149.parquet", 
"**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-09T12-18-51.743149.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-09T12-18-51.743149.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-09T12-18-51.743149.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-02-09T12-18-51.743149.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-02-09T12-18-51.743149.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-02-09T12-18-51.743149.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-02-09T12-18-51.743149.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-09T12-18-51.743149.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-02-09T12-18-51.743149.parquet", "**/details_harness|hendrycksTest-management|5_2024-02-09T12-18-51.743149.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-02-09T12-18-51.743149.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-02-09T12-18-51.743149.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-02-09T12-18-51.743149.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-02-09T12-18-51.743149.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-09T12-18-51.743149.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-02-09T12-18-51.743149.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-02-09T12-18-51.743149.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-02-09T12-18-51.743149.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-02-09T12-18-51.743149.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-02-09T12-18-51.743149.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-02-09T12-18-51.743149.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-02-09T12-18-51.743149.parquet", 
"**/details_harness|hendrycksTest-public_relations|5_2024-02-09T12-18-51.743149.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-02-09T12-18-51.743149.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-02-09T12-18-51.743149.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-09T12-18-51.743149.parquet", "**/details_harness|hendrycksTest-virology|5_2024-02-09T12-18-51.743149.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-02-09T12-18-51.743149.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2024_02_09T12_18_51.743149", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-09T12-18-51.743149.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-09T12-18-51.743149.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2024_02_09T12_18_51.743149", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-02-09T12-18-51.743149.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-02-09T12-18-51.743149.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2024_02_09T12_18_51.743149", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-02-09T12-18-51.743149.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-02-09T12-18-51.743149.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2024_02_09T12_18_51.743149", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-02-09T12-18-51.743149.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-02-09T12-18-51.743149.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2024_02_09T12_18_51.743149", "path": 
["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-09T12-18-51.743149.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-09T12-18-51.743149.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2024_02_09T12_18_51.743149", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-02-09T12-18-51.743149.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-02-09T12-18-51.743149.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2024_02_09T12_18_51.743149", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-02-09T12-18-51.743149.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-02-09T12-18-51.743149.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2024_02_09T12_18_51.743149", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-02-09T12-18-51.743149.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-02-09T12-18-51.743149.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2024_02_09T12_18_51.743149", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-02-09T12-18-51.743149.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-02-09T12-18-51.743149.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2024_02_09T12_18_51.743149", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-02-09T12-18-51.743149.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-02-09T12-18-51.743149.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2024_02_09T12_18_51.743149", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-02-09T12-18-51.743149.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-02-09T12-18-51.743149.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2024_02_09T12_18_51.743149", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-02-09T12-18-51.743149.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-02-09T12-18-51.743149.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2024_02_09T12_18_51.743149", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-09T12-18-51.743149.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-09T12-18-51.743149.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2024_02_09T12_18_51.743149", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-02-09T12-18-51.743149.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-02-09T12-18-51.743149.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2024_02_09T12_18_51.743149", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-09T12-18-51.743149.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-09T12-18-51.743149.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2024_02_09T12_18_51.743149", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-09T12-18-51.743149.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-09T12-18-51.743149.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2024_02_09T12_18_51.743149", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-02-09T12-18-51.743149.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-02-09T12-18-51.743149.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2024_02_09T12_18_51.743149", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-02-09T12-18-51.743149.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-02-09T12-18-51.743149.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2024_02_09T12_18_51.743149", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-02-09T12-18-51.743149.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-02-09T12-18-51.743149.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2024_02_09T12_18_51.743149", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-09T12-18-51.743149.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-09T12-18-51.743149.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2024_02_09T12_18_51.743149", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-09T12-18-51.743149.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-09T12-18-51.743149.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2024_02_09T12_18_51.743149", "path": 
["**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-09T12-18-51.743149.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-09T12-18-51.743149.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2024_02_09T12_18_51.743149", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-02-09T12-18-51.743149.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-02-09T12-18-51.743149.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2024_02_09T12_18_51.743149", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-09T12-18-51.743149.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-09T12-18-51.743149.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2024_02_09T12_18_51.743149", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-09T12-18-51.743149.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-09T12-18-51.743149.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2024_02_09T12_18_51.743149", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-09T12-18-51.743149.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-09T12-18-51.743149.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2024_02_09T12_18_51.743149", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-09T12-18-51.743149.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-09T12-18-51.743149.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2024_02_09T12_18_51.743149", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-02-09T12-18-51.743149.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-02-09T12-18-51.743149.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2024_02_09T12_18_51.743149", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-09T12-18-51.743149.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-09T12-18-51.743149.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2024_02_09T12_18_51.743149", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-09T12-18-51.743149.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-09T12-18-51.743149.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2024_02_09T12_18_51.743149", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-09T12-18-51.743149.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-09T12-18-51.743149.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2024_02_09T12_18_51.743149", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-09T12-18-51.743149.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-09T12-18-51.743149.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2024_02_09T12_18_51.743149", 
"path": ["**/details_harness|hendrycksTest-human_aging|5_2024-02-09T12-18-51.743149.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-02-09T12-18-51.743149.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2024_02_09T12_18_51.743149", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-02-09T12-18-51.743149.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-02-09T12-18-51.743149.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2024_02_09T12_18_51.743149", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-02-09T12-18-51.743149.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-02-09T12-18-51.743149.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2024_02_09T12_18_51.743149", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-02-09T12-18-51.743149.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-02-09T12-18-51.743149.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2024_02_09T12_18_51.743149", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-09T12-18-51.743149.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-09T12-18-51.743149.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2024_02_09T12_18_51.743149", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-02-09T12-18-51.743149.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-02-09T12-18-51.743149.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": 
"2024_02_09T12_18_51.743149", "path": ["**/details_harness|hendrycksTest-management|5_2024-02-09T12-18-51.743149.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2024-02-09T12-18-51.743149.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2024_02_09T12_18_51.743149", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-02-09T12-18-51.743149.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-02-09T12-18-51.743149.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2024_02_09T12_18_51.743149", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-02-09T12-18-51.743149.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-02-09T12-18-51.743149.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2024_02_09T12_18_51.743149", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-02-09T12-18-51.743149.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-02-09T12-18-51.743149.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2024_02_09T12_18_51.743149", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-02-09T12-18-51.743149.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-02-09T12-18-51.743149.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2024_02_09T12_18_51.743149", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-09T12-18-51.743149.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-09T12-18-51.743149.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": 
"2024_02_09T12_18_51.743149", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-02-09T12-18-51.743149.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-02-09T12-18-51.743149.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2024_02_09T12_18_51.743149", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-02-09T12-18-51.743149.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-02-09T12-18-51.743149.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2024_02_09T12_18_51.743149", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-02-09T12-18-51.743149.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-02-09T12-18-51.743149.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2024_02_09T12_18_51.743149", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-02-09T12-18-51.743149.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-02-09T12-18-51.743149.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2024_02_09T12_18_51.743149", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-02-09T12-18-51.743149.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-02-09T12-18-51.743149.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2024_02_09T12_18_51.743149", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-02-09T12-18-51.743149.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-02-09T12-18-51.743149.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2024_02_09T12_18_51.743149", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-02-09T12-18-51.743149.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-02-09T12-18-51.743149.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2024_02_09T12_18_51.743149", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-02-09T12-18-51.743149.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-02-09T12-18-51.743149.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2024_02_09T12_18_51.743149", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-02-09T12-18-51.743149.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-02-09T12-18-51.743149.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2024_02_09T12_18_51.743149", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-02-09T12-18-51.743149.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-02-09T12-18-51.743149.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2024_02_09T12_18_51.743149", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-09T12-18-51.743149.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-09T12-18-51.743149.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2024_02_09T12_18_51.743149", "path": ["**/details_harness|hendrycksTest-virology|5_2024-02-09T12-18-51.743149.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2024-02-09T12-18-51.743149.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2024_02_09T12_18_51.743149", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-02-09T12-18-51.743149.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-02-09T12-18-51.743149.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2024_02_09T12_18_51.743149", "path": ["**/details_harness|truthfulqa:mc|0_2024-02-09T12-18-51.743149.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2024-02-09T12-18-51.743149.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2024_02_09T12_18_51.743149", "path": ["**/details_harness|winogrande|5_2024-02-09T12-18-51.743149.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2024-02-09T12-18-51.743149.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2024_02_09T12_18_51.743149", "path": ["results_2024-02-09T12-18-51.743149.parquet"]}, {"split": "latest", "path": ["results_2024-02-09T12-18-51.743149.parquet"]}]}]}
2024-02-09T12:21:31+00:00
[]
[]
TAGS #region-us
# Dataset Card for Evaluation run of jondurbin/bagel-7b-v0.4 Dataset automatically created during the evaluation run of model jondurbin/bagel-7b-v0.4 on the Open LLM Leaderboard. The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard). To load the details from a run, you can for instance do the following: ## Latest results These are the latest results from run 2024-02-09T12:18:51.743149 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval): ## Dataset Details ### Dataset Description - Curated by: - Funded by [optional]: - Shared by [optional]: - Language(s) (NLP): - License: ### Dataset Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Out-of-Scope Use ## Dataset Structure ## Dataset Creation ### Curation Rationale ### Source Data #### Data Collection and Processing #### Who are the source data producers? ### Annotations [optional] #### Annotation process #### Who are the annotators? #### Personal and Sensitive Information ## Bias, Risks, and Limitations ### Recommendations Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Dataset Card Authors [optional] ## Dataset Card Contact
[ "# Dataset Card for Evaluation run of jondurbin/bagel-7b-v0.4\n\n\n\nDataset automatically created during the evaluation run of model jondurbin/bagel-7b-v0.4 on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2024-02-09T12:18:51.743149(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. 
More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Evaluation run of jondurbin/bagel-7b-v0.4\n\n\n\nDataset automatically created during the evaluation run of model jondurbin/bagel-7b-v0.4 on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2024-02-09T12:18:51.743149(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. 
More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
b16966b68eba8f44b6534bc8cd82e9fa02c53188
# Description Dataset created for the Master's thesis "Detection of Catastrophic Events from Social Media" at the Slovak Technical University Faculty of Informatics. Contains posts from social media that are split into two categories: - Informative - related and informative in regard to natural disasters - Non-Informative - unrelated to natural disasters Other metadata include event type, source dataset, etc. To balance the classes, 50k tweets from the Twitter archive for the years 2017-2022 were added. # Distributions ![Distributions](meta/distributions/split_distribution.png) # Source Datasets: | **Name** | **Count** | |:----------------------------------------------------------------------------------------:|:---------:| | Kaggle 1 - [URL](https://www.kaggle.com/datasets/jannesklaas/disasters-on-social-media) | 951 | | Kaggle 2 - [URL](https://www.kaggle.com/datasets/vstepanenko/disaster-tweets) | 579 | | Kaggle 3 - [URL](https://www.kaggle.com/datasets/sidharth178/disaster-response-messages) | 3782 | | Zahra et al. - [URL](https://doi.org/10.1016/j.ipm.2019.102107) | 6494 | | CrisisMMD - [URL](https://arxiv.org/abs/1805.00713) | 11043 | | Alam et al. - [URL](https://arxiv.org/abs/1805.05151) | 11133 | | CrisisLexT26 - [URL](https://doi.org/10.1145/2675133.2675242) | 14998 | | Imran et al. 
- [URL](https://aclanthology.org/L16-1259) | 16549 | | CrisisLexT6 - [URL](https://doi.org/10.1609/icwsm.v8i1.14538) | 22672 | | HumAID - [URL](https://doi.org/10.1609/icwsm.v15i1.18116) | 42837 | | CrisisBench - [URL](https://doi.org/10.1609/icwsm.v15i1.18115) | 31158 | | ArchiveTeam - [URL](https://archive.org/details/twitterstream) | 49191 | | **Total** | 211387 | # Total Event counts: | **Type** | **Non-Informative** | **Informative** | **Total** | |:----------:|:-------------------:|:---------------:|:---------:| | Unknown | 61880 | 14740 | 76620 | | Storm | 20944 | 47301 | 68245 | | Flood | 13104 | 14637 | 27741 | | Earthquake | 7844 | 15549 | 23393 | | Fire | 2343 | 8595 | 10938 | | Landslide | 2392 | 384 | 2776 | | Meteorite | 193 | 545 | 738 | | Haze | 51 | 503 | 554 | | Volcano | 243 | 139 | 382 |
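The count tables above are internally consistent. As a quick arithmetic check, here is a small Python sketch (values copied verbatim from the "Total Event counts" table) verifying that the per-event counts sum to the grand total reported in the source-dataset table and to the combined train/validation/test split sizes:

```python
# Per-event counts from the "Total Event counts" table:
# event type -> (non-informative, informative)
event_counts = {
    "Unknown":    (61880, 14740),
    "Storm":      (20944, 47301),
    "Flood":      (13104, 14637),
    "Earthquake": (7844,  15549),
    "Fire":       (2343,  8595),
    "Landslide":  (2392,  384),
    "Meteorite":  (193,   545),
    "Haze":       (51,    503),
    "Volcano":    (243,   139),
}

non_informative = sum(n for n, _ in event_counts.values())
informative = sum(i for _, i in event_counts.values())
total = non_informative + informative

print(non_informative, informative, total)  # -> 108994 102393 211387

# The grand total matches both the source-dataset table (211387)
# and the train/validation/test split sizes (169109 + 21139 + 21139).
assert total == 211387 == 169109 + 21139 + 21139
```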
melisekm/natural-disasters-from-social-media
[ "task_categories:text-classification", "annotations_creators:crowdsourced", "annotations_creators:expert-generated", "size_categories:100K<n<1M", "source_datasets:Kaggle 1 - jannesklaas/disasters-on-social-media", "source_datasets:Kaggle 2 - vstepanenko/disaster-tweets", "source_datasets:Kaggle 3 - sidharth178/disaster-response-messages", "source_datasets:Zahra et al. - doi: 10.1016/j.ipm.2019.102107", "source_datasets:CrisisMMD - arxiv: 1805.00713", "source_datasets:Alam et al. - arxiv: 1805.05151", "source_datasets:CrisisLexT26 - doi: 10.1145/2675133.2675242", "source_datasets:Imran et al. - aclanthology: L16-1259", "source_datasets:CrisisLexT6 - doi: 10.1609/icwsm.v8i1.14538", "source_datasets:HumAID - doi: 10.1609/icwsm.v15i1.18116", "source_datasets:CrisisBench - doi: 10.1609/icwsm.v15i1.18115", "language:en", "natural disasters", "tweets", "classification", "catastrophic events", "arxiv:1805.00713", "arxiv:1805.05151", "region:us" ]
2024-02-09T12:23:39+00:00
{"annotations_creators": ["crowdsourced", "expert-generated"], "language": ["en"], "size_categories": ["100K<n<1M"], "source_datasets": ["Kaggle 1 - jannesklaas/disasters-on-social-media", "Kaggle 2 - vstepanenko/disaster-tweets", "Kaggle 3 - sidharth178/disaster-response-messages", "Zahra et al. - doi: 10.1016/j.ipm.2019.102107", "CrisisMMD - arxiv: 1805.00713", "Alam et al. - arxiv: 1805.05151", "CrisisLexT26 - doi: 10.1145/2675133.2675242", "Imran et al. - aclanthology: L16-1259", "CrisisLexT6 - doi: 10.1609/icwsm.v8i1.14538", "HumAID - doi: 10.1609/icwsm.v15i1.18116", "CrisisBench - doi: 10.1609/icwsm.v15i1.18115"], "task_categories": ["text-classification"], "pretty_name": "Natural Disasters from Social Media", "tags": ["natural disasters", "tweets", "classification", "catastrophic events"], "configs": [{"config_name": "default", "default": true, "data_files": [{"split": "train", "path": "train.csv"}, {"split": "validation", "path": "validation.csv"}, {"split": "test", "path": "test.csv"}]}, {"config_name": "full", "data_files": "meta/natural-disasters-from-social-media.csv"}, {"config_name": "meta", "data_files": "meta/distributions/*.csv"}], "dataset_info": {"config_name": "default", "splits": [{"name": "train", "num_bytes": 39817704, "num_examples": 169109}, {"name": "validation", "num_bytes": 4977163, "num_examples": 21139}, {"name": "test", "num_bytes": 4981112, "num_examples": 21139}], "dataset_size": 49775824}}
2024-02-09T13:27:51+00:00
[ "1805.00713", "1805.05151" ]
[ "en" ]
TAGS #task_categories-text-classification #annotations_creators-crowdsourced #annotations_creators-expert-generated #size_categories-100K<n<1M #source_datasets-Kaggle 1 - jannesklaas/disasters-on-social-media #source_datasets-Kaggle 2 - vstepanenko/disaster-tweets #source_datasets-Kaggle 3 - sidharth178/disaster-response-messages #source_datasets-Zahra et al. - doi- 10.1016/j.ipm.2019.102107 #source_datasets-CrisisMMD - arxiv- 1805.00713 #source_datasets-Alam et al. - arxiv- 1805.05151 #source_datasets-CrisisLexT26 - doi- 10.1145/2675133.2675242 #source_datasets-Imran et al. - aclanthology- L16-1259 #source_datasets-CrisisLexT6 - doi- 10.1609/icwsm.v8i1.14538 #source_datasets-HumAID - doi- 10.1609/icwsm.v15i1.18116 #source_datasets-CrisisBench - doi- 10.1609/icwsm.v15i1.18115 #language-English #natural disasters #tweets #classification #catastrophic events #arxiv-1805.00713 #arxiv-1805.05151 #region-us
Description =========== Dataset created for the Master's thesis "Detection of Catastrophic Events from Social Media" at the Slovak Technical University Faculty of Informatics. Contains posts from social media that are split into two categories: * Informative - related and informative in regard to natural disasters * Non-Informative - unrelated to natural disasters Other metadata include event type, source dataset, etc. To balance the classes, 50k tweets from the Twitter archive for the years 2017-2022 were added. Distributions ============= !Distributions Source Datasets: ================ Total Event counts: ===================
[]
[ "TAGS\n#task_categories-text-classification #annotations_creators-crowdsourced #annotations_creators-expert-generated #size_categories-100K<n<1M #source_datasets-Kaggle 1 - jannesklaas/disasters-on-social-media #source_datasets-Kaggle 2 - vstepanenko/disaster-tweets #source_datasets-Kaggle 3 - sidharth178/disaster-response-messages #source_datasets-Zahra et al. - doi- 10.1016/j.ipm.2019.102107 #source_datasets-CrisisMMD - arxiv- 1805.00713 #source_datasets-Alam et al. - arxiv- 1805.05151 #source_datasets-CrisisLexT26 - doi- 10.1145/2675133.2675242 #source_datasets-Imran et al. - aclanthology- L16-1259 #source_datasets-CrisisLexT6 - doi- 10.1609/icwsm.v8i1.14538 #source_datasets-HumAID - doi- 10.1609/icwsm.v15i1.18116 #source_datasets-CrisisBench - doi- 10.1609/icwsm.v15i1.18115 #language-English #natural disasters #tweets #classification #catastrophic events #arxiv-1805.00713 #arxiv-1805.05151 #region-us \n" ]
f2e8c61e64491b73cd55a96d1429ddbfda7edb0c
# Dataset Card for Evaluation run of jondurbin/bagel-dpo-7b-v0.4 <!-- Provide a quick summary of the dataset. --> Dataset automatically created during the evaluation run of model [jondurbin/bagel-dpo-7b-v0.4](https://huggingface.co/jondurbin/bagel-dpo-7b-v0.4) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)). To load the details from a run, you can for instance do the following: ```python from datasets import load_dataset data = load_dataset("open-llm-leaderboard/details_jondurbin__bagel-dpo-7b-v0.4", "harness_winogrande_5", split="train") ``` ## Latest results These are the [latest results from run 2024-02-09T12:26:08.289563](https://huggingface.co/datasets/open-llm-leaderboard/details_jondurbin__bagel-dpo-7b-v0.4/blob/main/results_2024-02-09T12-26-08.289563.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the "latest" split for each eval):

```python
{
    "all": {
        "acc": 0.6206724451364786,
        "acc_stderr": 0.0329664063441869,
        "acc_norm": 0.6242837256806741,
        "acc_norm_stderr": 0.03363029941343461,
        "mc1": 0.4749082007343941,
        "mc1_stderr": 0.017481446804104007,
        "mc2": 0.6394319602785546,
        "mc2_stderr": 0.01516560925754018
    },
    "harness|arc:challenge|25": {
        "acc": 0.6493174061433447,
        "acc_stderr": 0.013944635930726096,
        "acc_norm": 0.6757679180887372,
        "acc_norm_stderr": 0.013678810399518822
    },
    "harness|hellaswag|10": {
        "acc": 0.6477793268273252,
        "acc_stderr": 0.00476686090717154,
        "acc_norm": 0.8429595698068114,
        "acc_norm_stderr": 0.0036309529998437306
    },
    "harness|hendrycksTest-abstract_algebra|5": {
        "acc": 0.23,
        "acc_stderr": 0.04229525846816506,
        "acc_norm": 0.23,
        "acc_norm_stderr": 0.04229525846816506
    },
    "harness|hendrycksTest-anatomy|5": {
        "acc": 0.5851851851851851,
        "acc_stderr": 0.04256193767901408,
        "acc_norm": 0.5851851851851851,
        "acc_norm_stderr": 0.04256193767901408
    },
    "harness|hendrycksTest-astronomy|5": {
        "acc": 0.6644736842105263,
        "acc_stderr": 0.03842498559395268,
        "acc_norm": 0.6644736842105263,
        "acc_norm_stderr": 0.03842498559395268
    },
    "harness|hendrycksTest-business_ethics|5": {
        "acc": 0.56,
        "acc_stderr": 0.04988876515698589,
        "acc_norm": 0.56,
        "acc_norm_stderr": 0.04988876515698589
    },
    "harness|hendrycksTest-clinical_knowledge|5": {
        "acc": 0.7018867924528301,
        "acc_stderr": 0.028152837942493857,
        "acc_norm": 0.7018867924528301,
        "acc_norm_stderr": 0.028152837942493857
    },
    "harness|hendrycksTest-college_biology|5": {
        "acc": 0.6597222222222222,
        "acc_stderr": 0.039621355734862175,
        "acc_norm": 0.6597222222222222,
        "acc_norm_stderr": 0.039621355734862175
    },
    "harness|hendrycksTest-college_chemistry|5": {
        "acc": 0.44,
        "acc_stderr": 0.04988876515698589,
        "acc_norm": 0.44,
        "acc_norm_stderr": 0.04988876515698589
    },
    "harness|hendrycksTest-college_computer_science|5": {
        "acc": 0.49,
        "acc_stderr": 0.05024183937956913,
        "acc_norm": 0.49,
        "acc_norm_stderr": 0.05024183937956913
    },
    "harness|hendrycksTest-college_mathematics|5": {
        "acc": 0.38,
        "acc_stderr": 0.048783173121456344,
        "acc_norm": 0.38,
        "acc_norm_stderr": 0.048783173121456344
    },
    "harness|hendrycksTest-college_medicine|5": {
        "acc": 0.6184971098265896,
        "acc_stderr": 0.03703851193099521,
        "acc_norm": 0.6184971098265896,
        "acc_norm_stderr": 0.03703851193099521
    },
    "harness|hendrycksTest-college_physics|5": {
        "acc": 0.43137254901960786,
        "acc_stderr": 0.04928099597287534,
        "acc_norm": 0.43137254901960786,
        "acc_norm_stderr": 0.04928099597287534
    },
    "harness|hendrycksTest-computer_security|5": {
        "acc": 0.76,
        "acc_stderr": 0.042923469599092816,
        "acc_norm": 0.76,
        "acc_norm_stderr": 0.042923469599092816
    },
    "harness|hendrycksTest-conceptual_physics|5": {
        "acc": 0.5276595744680851,
        "acc_stderr": 0.03263597118409769,
        "acc_norm": 0.5276595744680851,
        "acc_norm_stderr": 0.03263597118409769
    },
    "harness|hendrycksTest-econometrics|5": {
        "acc": 0.5087719298245614,
        "acc_stderr": 0.04702880432049615,
        "acc_norm": 0.5087719298245614,
        "acc_norm_stderr": 0.04702880432049615
    },
    "harness|hendrycksTest-electrical_engineering|5": {
        "acc": 0.5586206896551724,
        "acc_stderr": 0.04137931034482758,
        "acc_norm": 0.5586206896551724,
        "acc_norm_stderr": 0.04137931034482758
    },
    "harness|hendrycksTest-elementary_mathematics|5": {
        "acc": 0.4021164021164021,
        "acc_stderr": 0.025253032554997692,
        "acc_norm": 0.4021164021164021,
        "acc_norm_stderr": 0.025253032554997692
    },
    "harness|hendrycksTest-formal_logic|5": {
        "acc": 0.48412698412698413,
        "acc_stderr": 0.04469881854072606,
        "acc_norm": 0.48412698412698413,
        "acc_norm_stderr": 0.04469881854072606
    },
    "harness|hendrycksTest-global_facts|5": {
        "acc": 0.42,
        "acc_stderr": 0.049604496374885836,
        "acc_norm": 0.42,
        "acc_norm_stderr": 0.049604496374885836
    },
    "harness|hendrycksTest-high_school_biology|5": {
        "acc": 0.7548387096774194,
        "acc_stderr": 0.02447224384089554,
        "acc_norm": 0.7548387096774194,
        "acc_norm_stderr": 0.02447224384089554
    },
    "harness|hendrycksTest-high_school_chemistry|5": {
        "acc": 0.5073891625615764,
        "acc_stderr": 0.035176035403610105,
        "acc_norm": 0.5073891625615764,
        "acc_norm_stderr": 0.035176035403610105
    },
    "harness|hendrycksTest-high_school_computer_science|5": {
        "acc": 0.6,
        "acc_stderr": 0.04923659639173309,
        "acc_norm": 0.6,
        "acc_norm_stderr": 0.04923659639173309
    },
    "harness|hendrycksTest-high_school_european_history|5": {
        "acc": 0.7454545454545455,
        "acc_stderr": 0.03401506715249039,
        "acc_norm": 0.7454545454545455,
        "acc_norm_stderr": 0.03401506715249039
    },
    "harness|hendrycksTest-high_school_geography|5": {
        "acc": 0.7929292929292929,
        "acc_stderr": 0.02886977846026704,
        "acc_norm": 0.7929292929292929,
        "acc_norm_stderr": 0.02886977846026704
    },
    "harness|hendrycksTest-high_school_government_and_politics|5": {
        "acc": 0.8341968911917098,
        "acc_stderr": 0.026839845022314415,
        "acc_norm": 0.8341968911917098,
        "acc_norm_stderr": 0.026839845022314415
    },
    "harness|hendrycksTest-high_school_macroeconomics|5": {
        "acc": 0.6205128205128205,
        "acc_stderr": 0.024603626924097417,
        "acc_norm": 0.6205128205128205,
        "acc_norm_stderr": 0.024603626924097417
    },
    "harness|hendrycksTest-high_school_mathematics|5": {
        "acc": 0.34814814814814815,
        "acc_stderr": 0.029045600290616258,
        "acc_norm": 0.34814814814814815,
        "acc_norm_stderr": 0.029045600290616258
    },
    "harness|hendrycksTest-high_school_microeconomics|5": {
        "acc": 0.6470588235294118,
        "acc_stderr": 0.031041941304059278,
        "acc_norm": 0.6470588235294118,
        "acc_norm_stderr": 0.031041941304059278
    },
    "harness|hendrycksTest-high_school_physics|5": {
        "acc": 0.3841059602649007,
        "acc_stderr": 0.03971301814719197,
        "acc_norm": 0.3841059602649007,
        "acc_norm_stderr": 0.03971301814719197
    },
    "harness|hendrycksTest-high_school_psychology|5": {
        "acc": 0.8348623853211009,
        "acc_stderr": 0.01591955782997606,
        "acc_norm": 0.8348623853211009,
        "acc_norm_stderr": 0.01591955782997606
    },
    "harness|hendrycksTest-high_school_statistics|5": {
        "acc": 0.5648148148148148,
        "acc_stderr": 0.03381200005643525,
        "acc_norm": 0.5648148148148148,
        "acc_norm_stderr": 0.03381200005643525
    },
    "harness|hendrycksTest-high_school_us_history|5": {
        "acc": 0.7745098039215687,
        "acc_stderr": 0.02933116229425174,
        "acc_norm": 0.7745098039215687,
        "acc_norm_stderr": 0.02933116229425174
    },
    "harness|hendrycksTest-high_school_world_history|5": {
        "acc": 0.7932489451476793,
        "acc_stderr": 0.026361651668389094,
        "acc_norm": 0.7932489451476793,
        "acc_norm_stderr": 0.026361651668389094
    },
    "harness|hendrycksTest-human_aging|5": {
        "acc": 0.6681614349775785,
        "acc_stderr": 0.03160295143776679,
        "acc_norm": 0.6681614349775785,
        "acc_norm_stderr": 0.03160295143776679
    },
    "harness|hendrycksTest-human_sexuality|5": {
        "acc": 0.7251908396946565,
        "acc_stderr": 0.03915345408847835,
        "acc_norm": 0.7251908396946565,
        "acc_norm_stderr": 0.03915345408847835
    },
    "harness|hendrycksTest-international_law|5": {
        "acc": 0.7355371900826446,
        "acc_stderr": 0.04026187527591205,
        "acc_norm": 0.7355371900826446,
        "acc_norm_stderr": 0.04026187527591205
    },
    "harness|hendrycksTest-jurisprudence|5": {
        "acc": 0.75,
        "acc_stderr": 0.04186091791394607,
        "acc_norm": 0.75,
        "acc_norm_stderr": 0.04186091791394607
    },
    "harness|hendrycksTest-logical_fallacies|5": {
        "acc": 0.7300613496932515,
        "acc_stderr": 0.03487825168497892,
        "acc_norm": 0.7300613496932515,
        "acc_norm_stderr": 0.03487825168497892
    },
    "harness|hendrycksTest-machine_learning|5": {
        "acc": 0.41964285714285715,
        "acc_stderr": 0.046840993210771065,
        "acc_norm": 0.41964285714285715,
        "acc_norm_stderr": 0.046840993210771065
    },
    "harness|hendrycksTest-management|5": {
        "acc": 0.8058252427184466,
        "acc_stderr": 0.03916667762822585,
        "acc_norm": 0.8058252427184466,
        "acc_norm_stderr": 0.03916667762822585
    },
    "harness|hendrycksTest-marketing|5": {
        "acc": 0.8803418803418803,
        "acc_stderr": 0.021262719400406957,
        "acc_norm": 0.8803418803418803,
        "acc_norm_stderr": 0.021262719400406957
    },
    "harness|hendrycksTest-medical_genetics|5": {
        "acc": 0.64,
        "acc_stderr": 0.04824181513244218,
        "acc_norm": 0.64,
        "acc_norm_stderr": 0.04824181513244218
    },
    "harness|hendrycksTest-miscellaneous|5": {
        "acc": 0.80970625798212,
        "acc_stderr": 0.014036945850381394,
        "acc_norm": 0.80970625798212,
        "acc_norm_stderr": 0.014036945850381394
    },
    "harness|hendrycksTest-moral_disputes|5": {
        "acc": 0.6416184971098265,
        "acc_stderr": 0.025816756791584204,
        "acc_norm": 0.6416184971098265,
        "acc_norm_stderr": 0.025816756791584204
    },
    "harness|hendrycksTest-moral_scenarios|5": {
        "acc": 0.32849162011173183,
        "acc_stderr": 0.01570793539849645,
        "acc_norm": 0.32849162011173183,
        "acc_norm_stderr": 0.01570793539849645
    },
    "harness|hendrycksTest-nutrition|5": {
        "acc": 0.6699346405228758,
        "acc_stderr": 0.026925654653615693,
        "acc_norm": 0.6699346405228758,
        "acc_norm_stderr": 0.026925654653615693
    },
    "harness|hendrycksTest-philosophy|5": {
        "acc": 0.662379421221865,
        "acc_stderr": 0.026858825879488533,
        "acc_norm": 0.662379421221865,
        "acc_norm_stderr": 0.026858825879488533
    },
    "harness|hendrycksTest-prehistory|5": {
        "acc": 0.7037037037037037,
        "acc_stderr": 0.025407197798890162,
        "acc_norm": 0.7037037037037037,
        "acc_norm_stderr": 0.025407197798890162
    },
    "harness|hendrycksTest-professional_accounting|5": {
        "acc": 0.44680851063829785,
        "acc_stderr": 0.029658235097666907,
        "acc_norm": 0.44680851063829785,
        "acc_norm_stderr": 0.029658235097666907
    },
    "harness|hendrycksTest-professional_law|5": {
        "acc": 0.4217731421121252,
        "acc_stderr": 0.012612974369390984,
        "acc_norm": 0.4217731421121252,
        "acc_norm_stderr": 0.012612974369390984
    },
    "harness|hendrycksTest-professional_medicine|5": {
        "acc": 0.6911764705882353,
        "acc_stderr": 0.028064998167040094,
        "acc_norm": 0.6911764705882353,
        "acc_norm_stderr": 0.028064998167040094
    },
    "harness|hendrycksTest-professional_psychology|5": {
        "acc": 0.6388888888888888,
        "acc_stderr": 0.01943177567703731,
        "acc_norm": 0.6388888888888888,
        "acc_norm_stderr": 0.01943177567703731
    },
    "harness|hendrycksTest-public_relations|5": {
        "acc": 0.6818181818181818,
        "acc_stderr": 0.04461272175910509,
        "acc_norm": 0.6818181818181818,
        "acc_norm_stderr": 0.04461272175910509
    },
    "harness|hendrycksTest-security_studies|5": {
        "acc": 0.7142857142857143,
        "acc_stderr": 0.0289205832206756,
        "acc_norm": 0.7142857142857143,
        "acc_norm_stderr": 0.0289205832206756
    },
    "harness|hendrycksTest-sociology|5": {
        "acc": 0.8109452736318408,
        "acc_stderr": 0.02768691358801302,
        "acc_norm": 0.8109452736318408,
        "acc_norm_stderr": 0.02768691358801302
    },
    "harness|hendrycksTest-us_foreign_policy|5": {
        "acc": 0.8,
        "acc_stderr": 0.04020151261036846,
        "acc_norm": 0.8,
        "acc_norm_stderr": 0.04020151261036846
    },
    "harness|hendrycksTest-virology|5": {
        "acc": 0.5120481927710844,
        "acc_stderr": 0.03891364495835817,
        "acc_norm": 0.5120481927710844,
        "acc_norm_stderr": 0.03891364495835817
    },
    "harness|hendrycksTest-world_religions|5": {
        "acc": 0.8011695906432749,
        "acc_stderr": 0.030611116557432528,
        "acc_norm": 0.8011695906432749,
        "acc_norm_stderr": 0.030611116557432528
    },
    "harness|truthfulqa:mc|0": {
        "mc1": 0.4749082007343941,
        "mc1_stderr": 0.017481446804104007,
        "mc2": 0.6394319602785546,
        "mc2_stderr": 0.01516560925754018
    },
    "harness|winogrande|5": {
        "acc": 0.7813733228097869,
        "acc_stderr": 0.011616198215773223
    },
    "harness|gsm8k|5": {
        "acc": 0.46853677028051555,
        "acc_stderr": 0.013745189948450417
    }
}
```

## Dataset Details

### Dataset Description

<!-- Provide a longer summary of what this dataset is. -->

- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]

### Dataset Sources [optional]

<!-- Provide the basic links for the dataset. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the dataset is intended to be used. -->

### Direct Use

<!-- This section describes suitable use cases for the dataset.
-->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->

[More Information Needed]

## Dataset Structure

<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->

[More Information Needed]

## Dataset Creation

### Curation Rationale

<!-- Motivation for the creation of this dataset. -->

[More Information Needed]

### Source Data

<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->

#### Data Collection and Processing

<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->

[More Information Needed]

#### Who are the source data producers?

<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->

[More Information Needed]

### Annotations [optional]

<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->

#### Annotation process

<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->

[More Information Needed]

#### Who are the annotators?

<!-- This section describes the people or systems who created the annotations.
-->

[More Information Needed]

#### Personal and Sensitive Information

<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.

## Citation [optional]

<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Dataset Card Authors [optional]

[More Information Needed]

## Dataset Card Contact

[More Information Needed]
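The per-task entries in the results above all follow a `harness|<task>|<n_shot>` key convention. As a small illustrative sketch (the `get_metric` helper below is an assumption for illustration, not part of the generated card or the harness tooling), one way to pull a single metric out of such a results payload:

```python
# Minimal sketch: look up one metric from a results payload whose keys follow
# the "harness|<task>|<n_shot>" convention shown in the results JSON above.
results = {
    "harness|winogrande|5": {"acc": 0.7813733228097869, "acc_stderr": 0.011616198215773223},
    "harness|gsm8k|5": {"acc": 0.46853677028051555, "acc_stderr": 0.013745189948450417},
}

def get_metric(results: dict, task: str, metric: str = "acc") -> float:
    """Return `metric` for `task`, matching the task name inside the harness key."""
    for key, metrics in results.items():
        # Keys look like "harness|gsm8k|5"; the middle field is the task name.
        if key.split("|")[1] == task:
            return metrics[metric]
    raise KeyError(task)

print(get_metric(results, "gsm8k"))  # 0.46853677028051555
```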
open-llm-leaderboard/details_jondurbin__bagel-dpo-7b-v0.4
[ "region:us" ]
2024-02-09T12:28:25+00:00
{"pretty_name": "Evaluation run of jondurbin/bagel-dpo-7b-v0.4", "dataset_summary": "Dataset automatically created during the evaluation run of model [jondurbin/bagel-dpo-7b-v0.4](https://huggingface.co/jondurbin/bagel-dpo-7b-v0.4) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_jondurbin__bagel-dpo-7b-v0.4\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2024-02-09T12:26:08.289563](https://huggingface.co/datasets/open-llm-leaderboard/details_jondurbin__bagel-dpo-7b-v0.4/blob/main/results_2024-02-09T12-26-08.289563.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6206724451364786,\n \"acc_stderr\": 0.0329664063441869,\n \"acc_norm\": 0.6242837256806741,\n \"acc_norm_stderr\": 0.03363029941343461,\n \"mc1\": 0.4749082007343941,\n \"mc1_stderr\": 0.017481446804104007,\n \"mc2\": 0.6394319602785546,\n \"mc2_stderr\": 0.01516560925754018\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.6493174061433447,\n \"acc_stderr\": 0.013944635930726096,\n \"acc_norm\": 0.6757679180887372,\n \"acc_norm_stderr\": 0.013678810399518822\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6477793268273252,\n \"acc_stderr\": 0.00476686090717154,\n \"acc_norm\": 0.8429595698068114,\n \"acc_norm_stderr\": 0.0036309529998437306\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.23,\n \"acc_stderr\": 0.04229525846816506,\n \"acc_norm\": 0.23,\n \"acc_norm_stderr\": 0.04229525846816506\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.5851851851851851,\n \"acc_stderr\": 0.04256193767901408,\n \"acc_norm\": 0.5851851851851851,\n \"acc_norm_stderr\": 0.04256193767901408\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.6644736842105263,\n \"acc_stderr\": 0.03842498559395268,\n \"acc_norm\": 0.6644736842105263,\n \"acc_norm_stderr\": 0.03842498559395268\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.56,\n \"acc_stderr\": 0.04988876515698589,\n \"acc_norm\": 0.56,\n \"acc_norm_stderr\": 0.04988876515698589\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.7018867924528301,\n \"acc_stderr\": 0.028152837942493857,\n \"acc_norm\": 0.7018867924528301,\n \"acc_norm_stderr\": 0.028152837942493857\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.6597222222222222,\n \"acc_stderr\": 0.039621355734862175,\n \"acc_norm\": 0.6597222222222222,\n \"acc_norm_stderr\": 0.039621355734862175\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.44,\n 
\"acc_stderr\": 0.04988876515698589,\n \"acc_norm\": 0.44,\n \"acc_norm_stderr\": 0.04988876515698589\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.49,\n \"acc_stderr\": 0.05024183937956913,\n \"acc_norm\": 0.49,\n \"acc_norm_stderr\": 0.05024183937956913\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.38,\n \"acc_stderr\": 0.048783173121456344,\n \"acc_norm\": 0.38,\n \"acc_norm_stderr\": 0.048783173121456344\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6184971098265896,\n \"acc_stderr\": 0.03703851193099521,\n \"acc_norm\": 0.6184971098265896,\n \"acc_norm_stderr\": 0.03703851193099521\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.43137254901960786,\n \"acc_stderr\": 0.04928099597287534,\n \"acc_norm\": 0.43137254901960786,\n \"acc_norm_stderr\": 0.04928099597287534\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.76,\n \"acc_stderr\": 0.042923469599092816,\n \"acc_norm\": 0.76,\n \"acc_norm_stderr\": 0.042923469599092816\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.5276595744680851,\n \"acc_stderr\": 0.03263597118409769,\n \"acc_norm\": 0.5276595744680851,\n \"acc_norm_stderr\": 0.03263597118409769\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.5087719298245614,\n \"acc_stderr\": 0.04702880432049615,\n \"acc_norm\": 0.5087719298245614,\n \"acc_norm_stderr\": 0.04702880432049615\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.5586206896551724,\n \"acc_stderr\": 0.04137931034482758,\n \"acc_norm\": 0.5586206896551724,\n \"acc_norm_stderr\": 0.04137931034482758\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.4021164021164021,\n \"acc_stderr\": 0.025253032554997692,\n \"acc_norm\": 0.4021164021164021,\n \"acc_norm_stderr\": 0.025253032554997692\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.48412698412698413,\n \"acc_stderr\": 
0.04469881854072606,\n \"acc_norm\": 0.48412698412698413,\n \"acc_norm_stderr\": 0.04469881854072606\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.42,\n \"acc_stderr\": 0.049604496374885836,\n \"acc_norm\": 0.42,\n \"acc_norm_stderr\": 0.049604496374885836\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7548387096774194,\n \"acc_stderr\": 0.02447224384089554,\n \"acc_norm\": 0.7548387096774194,\n \"acc_norm_stderr\": 0.02447224384089554\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.5073891625615764,\n \"acc_stderr\": 0.035176035403610105,\n \"acc_norm\": 0.5073891625615764,\n \"acc_norm_stderr\": 0.035176035403610105\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.6,\n \"acc_stderr\": 0.04923659639173309,\n \"acc_norm\": 0.6,\n \"acc_norm_stderr\": 0.04923659639173309\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.7454545454545455,\n \"acc_stderr\": 0.03401506715249039,\n \"acc_norm\": 0.7454545454545455,\n \"acc_norm_stderr\": 0.03401506715249039\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.7929292929292929,\n \"acc_stderr\": 0.02886977846026704,\n \"acc_norm\": 0.7929292929292929,\n \"acc_norm_stderr\": 0.02886977846026704\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.8341968911917098,\n \"acc_stderr\": 0.026839845022314415,\n \"acc_norm\": 0.8341968911917098,\n \"acc_norm_stderr\": 0.026839845022314415\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.6205128205128205,\n \"acc_stderr\": 0.024603626924097417,\n \"acc_norm\": 0.6205128205128205,\n \"acc_norm_stderr\": 0.024603626924097417\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.34814814814814815,\n \"acc_stderr\": 0.029045600290616258,\n \"acc_norm\": 0.34814814814814815,\n \"acc_norm_stderr\": 0.029045600290616258\n },\n 
\"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.6470588235294118,\n \"acc_stderr\": 0.031041941304059278,\n \"acc_norm\": 0.6470588235294118,\n \"acc_norm_stderr\": 0.031041941304059278\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.3841059602649007,\n \"acc_stderr\": 0.03971301814719197,\n \"acc_norm\": 0.3841059602649007,\n \"acc_norm_stderr\": 0.03971301814719197\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.8348623853211009,\n \"acc_stderr\": 0.01591955782997606,\n \"acc_norm\": 0.8348623853211009,\n \"acc_norm_stderr\": 0.01591955782997606\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.5648148148148148,\n \"acc_stderr\": 0.03381200005643525,\n \"acc_norm\": 0.5648148148148148,\n \"acc_norm_stderr\": 0.03381200005643525\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.7745098039215687,\n \"acc_stderr\": 0.02933116229425174,\n \"acc_norm\": 0.7745098039215687,\n \"acc_norm_stderr\": 0.02933116229425174\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.7932489451476793,\n \"acc_stderr\": 0.026361651668389094,\n \"acc_norm\": 0.7932489451476793,\n \"acc_norm_stderr\": 0.026361651668389094\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6681614349775785,\n \"acc_stderr\": 0.03160295143776679,\n \"acc_norm\": 0.6681614349775785,\n \"acc_norm_stderr\": 0.03160295143776679\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.7251908396946565,\n \"acc_stderr\": 0.03915345408847835,\n \"acc_norm\": 0.7251908396946565,\n \"acc_norm_stderr\": 0.03915345408847835\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.7355371900826446,\n \"acc_stderr\": 0.04026187527591205,\n \"acc_norm\": 0.7355371900826446,\n \"acc_norm_stderr\": 0.04026187527591205\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.75,\n \"acc_stderr\": 0.04186091791394607,\n \"acc_norm\": 0.75,\n 
\"acc_norm_stderr\": 0.04186091791394607\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.7300613496932515,\n \"acc_stderr\": 0.03487825168497892,\n \"acc_norm\": 0.7300613496932515,\n \"acc_norm_stderr\": 0.03487825168497892\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.41964285714285715,\n \"acc_stderr\": 0.046840993210771065,\n \"acc_norm\": 0.41964285714285715,\n \"acc_norm_stderr\": 0.046840993210771065\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.8058252427184466,\n \"acc_stderr\": 0.03916667762822585,\n \"acc_norm\": 0.8058252427184466,\n \"acc_norm_stderr\": 0.03916667762822585\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8803418803418803,\n \"acc_stderr\": 0.021262719400406957,\n \"acc_norm\": 0.8803418803418803,\n \"acc_norm_stderr\": 0.021262719400406957\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.64,\n \"acc_stderr\": 0.04824181513244218,\n \"acc_norm\": 0.64,\n \"acc_norm_stderr\": 0.04824181513244218\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.80970625798212,\n \"acc_stderr\": 0.014036945850381394,\n \"acc_norm\": 0.80970625798212,\n \"acc_norm_stderr\": 0.014036945850381394\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.6416184971098265,\n \"acc_stderr\": 0.025816756791584204,\n \"acc_norm\": 0.6416184971098265,\n \"acc_norm_stderr\": 0.025816756791584204\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.32849162011173183,\n \"acc_stderr\": 0.01570793539849645,\n \"acc_norm\": 0.32849162011173183,\n \"acc_norm_stderr\": 0.01570793539849645\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.6699346405228758,\n \"acc_stderr\": 0.026925654653615693,\n \"acc_norm\": 0.6699346405228758,\n \"acc_norm_stderr\": 0.026925654653615693\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.662379421221865,\n \"acc_stderr\": 0.026858825879488533,\n \"acc_norm\": 0.662379421221865,\n 
\"acc_norm_stderr\": 0.026858825879488533\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.7037037037037037,\n \"acc_stderr\": 0.025407197798890162,\n \"acc_norm\": 0.7037037037037037,\n \"acc_norm_stderr\": 0.025407197798890162\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.44680851063829785,\n \"acc_stderr\": 0.029658235097666907,\n \"acc_norm\": 0.44680851063829785,\n \"acc_norm_stderr\": 0.029658235097666907\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4217731421121252,\n \"acc_stderr\": 0.012612974369390984,\n \"acc_norm\": 0.4217731421121252,\n \"acc_norm_stderr\": 0.012612974369390984\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.6911764705882353,\n \"acc_stderr\": 0.028064998167040094,\n \"acc_norm\": 0.6911764705882353,\n \"acc_norm_stderr\": 0.028064998167040094\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.6388888888888888,\n \"acc_stderr\": 0.01943177567703731,\n \"acc_norm\": 0.6388888888888888,\n \"acc_norm_stderr\": 0.01943177567703731\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6818181818181818,\n \"acc_stderr\": 0.04461272175910509,\n \"acc_norm\": 0.6818181818181818,\n \"acc_norm_stderr\": 0.04461272175910509\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.7142857142857143,\n \"acc_stderr\": 0.0289205832206756,\n \"acc_norm\": 0.7142857142857143,\n \"acc_norm_stderr\": 0.0289205832206756\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8109452736318408,\n \"acc_stderr\": 0.02768691358801302,\n \"acc_norm\": 0.8109452736318408,\n \"acc_norm_stderr\": 0.02768691358801302\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.8,\n \"acc_stderr\": 0.04020151261036846,\n \"acc_norm\": 0.8,\n \"acc_norm_stderr\": 0.04020151261036846\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5120481927710844,\n \"acc_stderr\": 0.03891364495835817,\n \"acc_norm\": 
0.5120481927710844,\n \"acc_norm_stderr\": 0.03891364495835817\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.8011695906432749,\n \"acc_stderr\": 0.030611116557432528,\n \"acc_norm\": 0.8011695906432749,\n \"acc_norm_stderr\": 0.030611116557432528\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.4749082007343941,\n \"mc1_stderr\": 0.017481446804104007,\n \"mc2\": 0.6394319602785546,\n \"mc2_stderr\": 0.01516560925754018\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7813733228097869,\n \"acc_stderr\": 0.011616198215773223\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.46853677028051555,\n \"acc_stderr\": 0.013745189948450417\n }\n}\n```", "repo_url": "https://huggingface.co/jondurbin/bagel-dpo-7b-v0.4", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2024_02_09T12_26_08.289563", "path": ["**/details_harness|arc:challenge|25_2024-02-09T12-26-08.289563.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2024-02-09T12-26-08.289563.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2024_02_09T12_26_08.289563", "path": ["**/details_harness|gsm8k|5_2024-02-09T12-26-08.289563.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2024-02-09T12-26-08.289563.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2024_02_09T12_26_08.289563", "path": ["**/details_harness|hellaswag|10_2024-02-09T12-26-08.289563.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2024-02-09T12-26-08.289563.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2024_02_09T12_26_08.289563", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-09T12-26-08.289563.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-02-09T12-26-08.289563.parquet", 
"**/details_harness|hendrycksTest-astronomy|5_2024-02-09T12-26-08.289563.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-02-09T12-26-08.289563.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-09T12-26-08.289563.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-02-09T12-26-08.289563.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-02-09T12-26-08.289563.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-02-09T12-26-08.289563.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-02-09T12-26-08.289563.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-02-09T12-26-08.289563.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-02-09T12-26-08.289563.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-02-09T12-26-08.289563.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-09T12-26-08.289563.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-02-09T12-26-08.289563.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-09T12-26-08.289563.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-09T12-26-08.289563.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-02-09T12-26-08.289563.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-02-09T12-26-08.289563.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-02-09T12-26-08.289563.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-09T12-26-08.289563.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-09T12-26-08.289563.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-09T12-26-08.289563.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-02-09T12-26-08.289563.parquet", 
"**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-09T12-26-08.289563.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-09T12-26-08.289563.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-09T12-26-08.289563.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-09T12-26-08.289563.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-02-09T12-26-08.289563.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-09T12-26-08.289563.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-09T12-26-08.289563.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-09T12-26-08.289563.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-09T12-26-08.289563.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-02-09T12-26-08.289563.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-02-09T12-26-08.289563.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-02-09T12-26-08.289563.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-02-09T12-26-08.289563.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-09T12-26-08.289563.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-02-09T12-26-08.289563.parquet", "**/details_harness|hendrycksTest-management|5_2024-02-09T12-26-08.289563.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-02-09T12-26-08.289563.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-02-09T12-26-08.289563.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-02-09T12-26-08.289563.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-02-09T12-26-08.289563.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-09T12-26-08.289563.parquet", 
"**/details_harness|hendrycksTest-nutrition|5_2024-02-09T12-26-08.289563.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-02-09T12-26-08.289563.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-02-09T12-26-08.289563.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-02-09T12-26-08.289563.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-02-09T12-26-08.289563.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-02-09T12-26-08.289563.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-02-09T12-26-08.289563.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-02-09T12-26-08.289563.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-02-09T12-26-08.289563.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-02-09T12-26-08.289563.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-09T12-26-08.289563.parquet", "**/details_harness|hendrycksTest-virology|5_2024-02-09T12-26-08.289563.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-02-09T12-26-08.289563.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-09T12-26-08.289563.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-02-09T12-26-08.289563.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-02-09T12-26-08.289563.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-02-09T12-26-08.289563.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-09T12-26-08.289563.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-02-09T12-26-08.289563.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-02-09T12-26-08.289563.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-02-09T12-26-08.289563.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-02-09T12-26-08.289563.parquet", 
"**/details_harness|hendrycksTest-college_medicine|5_2024-02-09T12-26-08.289563.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-02-09T12-26-08.289563.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-02-09T12-26-08.289563.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-09T12-26-08.289563.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-02-09T12-26-08.289563.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-09T12-26-08.289563.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-09T12-26-08.289563.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-02-09T12-26-08.289563.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-02-09T12-26-08.289563.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-02-09T12-26-08.289563.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-09T12-26-08.289563.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-09T12-26-08.289563.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-09T12-26-08.289563.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-02-09T12-26-08.289563.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-09T12-26-08.289563.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-09T12-26-08.289563.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-09T12-26-08.289563.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-09T12-26-08.289563.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-02-09T12-26-08.289563.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-09T12-26-08.289563.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-09T12-26-08.289563.parquet", 
"**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-09T12-26-08.289563.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-09T12-26-08.289563.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-02-09T12-26-08.289563.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-02-09T12-26-08.289563.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-02-09T12-26-08.289563.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-02-09T12-26-08.289563.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-09T12-26-08.289563.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-02-09T12-26-08.289563.parquet", "**/details_harness|hendrycksTest-management|5_2024-02-09T12-26-08.289563.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-02-09T12-26-08.289563.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-02-09T12-26-08.289563.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-02-09T12-26-08.289563.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-02-09T12-26-08.289563.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-09T12-26-08.289563.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-02-09T12-26-08.289563.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-02-09T12-26-08.289563.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-02-09T12-26-08.289563.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-02-09T12-26-08.289563.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-02-09T12-26-08.289563.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-02-09T12-26-08.289563.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-02-09T12-26-08.289563.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-02-09T12-26-08.289563.parquet", 
"**/details_harness|hendrycksTest-security_studies|5_2024-02-09T12-26-08.289563.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-02-09T12-26-08.289563.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-09T12-26-08.289563.parquet", "**/details_harness|hendrycksTest-virology|5_2024-02-09T12-26-08.289563.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-02-09T12-26-08.289563.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2024_02_09T12_26_08.289563", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-09T12-26-08.289563.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-09T12-26-08.289563.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2024_02_09T12_26_08.289563", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-02-09T12-26-08.289563.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-02-09T12-26-08.289563.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2024_02_09T12_26_08.289563", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-02-09T12-26-08.289563.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-02-09T12-26-08.289563.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2024_02_09T12_26_08.289563", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-02-09T12-26-08.289563.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-02-09T12-26-08.289563.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2024_02_09T12_26_08.289563", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-09T12-26-08.289563.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-09T12-26-08.289563.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2024_02_09T12_26_08.289563", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-02-09T12-26-08.289563.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-02-09T12-26-08.289563.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2024_02_09T12_26_08.289563", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-02-09T12-26-08.289563.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-02-09T12-26-08.289563.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2024_02_09T12_26_08.289563", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-02-09T12-26-08.289563.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-02-09T12-26-08.289563.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2024_02_09T12_26_08.289563", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-02-09T12-26-08.289563.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-02-09T12-26-08.289563.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2024_02_09T12_26_08.289563", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-02-09T12-26-08.289563.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-02-09T12-26-08.289563.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2024_02_09T12_26_08.289563", "path": 
["**/details_harness|hendrycksTest-college_physics|5_2024-02-09T12-26-08.289563.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-02-09T12-26-08.289563.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2024_02_09T12_26_08.289563", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-02-09T12-26-08.289563.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-02-09T12-26-08.289563.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2024_02_09T12_26_08.289563", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-09T12-26-08.289563.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-09T12-26-08.289563.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2024_02_09T12_26_08.289563", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-02-09T12-26-08.289563.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-02-09T12-26-08.289563.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2024_02_09T12_26_08.289563", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-09T12-26-08.289563.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-09T12-26-08.289563.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2024_02_09T12_26_08.289563", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-09T12-26-08.289563.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-09T12-26-08.289563.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", 
"data_files": [{"split": "2024_02_09T12_26_08.289563", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-02-09T12-26-08.289563.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-02-09T12-26-08.289563.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2024_02_09T12_26_08.289563", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-02-09T12-26-08.289563.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-02-09T12-26-08.289563.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2024_02_09T12_26_08.289563", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-02-09T12-26-08.289563.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-02-09T12-26-08.289563.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2024_02_09T12_26_08.289563", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-09T12-26-08.289563.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-09T12-26-08.289563.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2024_02_09T12_26_08.289563", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-09T12-26-08.289563.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-09T12-26-08.289563.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2024_02_09T12_26_08.289563", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-09T12-26-08.289563.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-09T12-26-08.289563.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2024_02_09T12_26_08.289563", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-02-09T12-26-08.289563.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-02-09T12-26-08.289563.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2024_02_09T12_26_08.289563", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-09T12-26-08.289563.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-09T12-26-08.289563.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2024_02_09T12_26_08.289563", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-09T12-26-08.289563.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-09T12-26-08.289563.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2024_02_09T12_26_08.289563", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-09T12-26-08.289563.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-09T12-26-08.289563.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2024_02_09T12_26_08.289563", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-09T12-26-08.289563.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-09T12-26-08.289563.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2024_02_09T12_26_08.289563", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-02-09T12-26-08.289563.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-02-09T12-26-08.289563.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2024_02_09T12_26_08.289563", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-09T12-26-08.289563.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-09T12-26-08.289563.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2024_02_09T12_26_08.289563", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-09T12-26-08.289563.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-09T12-26-08.289563.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2024_02_09T12_26_08.289563", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-09T12-26-08.289563.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-09T12-26-08.289563.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2024_02_09T12_26_08.289563", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-09T12-26-08.289563.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-09T12-26-08.289563.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2024_02_09T12_26_08.289563", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-02-09T12-26-08.289563.parquet"]}, {"split": "latest", 
"path": ["**/details_harness|hendrycksTest-human_aging|5_2024-02-09T12-26-08.289563.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2024_02_09T12_26_08.289563", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-02-09T12-26-08.289563.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-02-09T12-26-08.289563.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2024_02_09T12_26_08.289563", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-02-09T12-26-08.289563.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-02-09T12-26-08.289563.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2024_02_09T12_26_08.289563", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-02-09T12-26-08.289563.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-02-09T12-26-08.289563.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2024_02_09T12_26_08.289563", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-09T12-26-08.289563.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-09T12-26-08.289563.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2024_02_09T12_26_08.289563", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-02-09T12-26-08.289563.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-02-09T12-26-08.289563.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2024_02_09T12_26_08.289563", "path": ["**/details_harness|hendrycksTest-management|5_2024-02-09T12-26-08.289563.parquet"]}, 
{"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2024-02-09T12-26-08.289563.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2024_02_09T12_26_08.289563", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-02-09T12-26-08.289563.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-02-09T12-26-08.289563.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2024_02_09T12_26_08.289563", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-02-09T12-26-08.289563.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-02-09T12-26-08.289563.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2024_02_09T12_26_08.289563", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-02-09T12-26-08.289563.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-02-09T12-26-08.289563.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2024_02_09T12_26_08.289563", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-02-09T12-26-08.289563.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-02-09T12-26-08.289563.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2024_02_09T12_26_08.289563", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-09T12-26-08.289563.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-09T12-26-08.289563.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2024_02_09T12_26_08.289563", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-02-09T12-26-08.289563.parquet"]}, {"split": "latest", 
"path": ["**/details_harness|hendrycksTest-nutrition|5_2024-02-09T12-26-08.289563.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2024_02_09T12_26_08.289563", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-02-09T12-26-08.289563.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-02-09T12-26-08.289563.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2024_02_09T12_26_08.289563", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-02-09T12-26-08.289563.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-02-09T12-26-08.289563.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2024_02_09T12_26_08.289563", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-02-09T12-26-08.289563.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-02-09T12-26-08.289563.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2024_02_09T12_26_08.289563", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-02-09T12-26-08.289563.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-02-09T12-26-08.289563.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2024_02_09T12_26_08.289563", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-02-09T12-26-08.289563.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-02-09T12-26-08.289563.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2024_02_09T12_26_08.289563", "path": 
["**/details_harness|hendrycksTest-professional_psychology|5_2024-02-09T12-26-08.289563.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-02-09T12-26-08.289563.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2024_02_09T12_26_08.289563", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-02-09T12-26-08.289563.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-02-09T12-26-08.289563.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2024_02_09T12_26_08.289563", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-02-09T12-26-08.289563.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-02-09T12-26-08.289563.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2024_02_09T12_26_08.289563", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-02-09T12-26-08.289563.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-02-09T12-26-08.289563.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2024_02_09T12_26_08.289563", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-09T12-26-08.289563.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-09T12-26-08.289563.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2024_02_09T12_26_08.289563", "path": ["**/details_harness|hendrycksTest-virology|5_2024-02-09T12-26-08.289563.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2024-02-09T12-26-08.289563.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2024_02_09T12_26_08.289563", 
"path": ["**/details_harness|hendrycksTest-world_religions|5_2024-02-09T12-26-08.289563.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-02-09T12-26-08.289563.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2024_02_09T12_26_08.289563", "path": ["**/details_harness|truthfulqa:mc|0_2024-02-09T12-26-08.289563.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2024-02-09T12-26-08.289563.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2024_02_09T12_26_08.289563", "path": ["**/details_harness|winogrande|5_2024-02-09T12-26-08.289563.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2024-02-09T12-26-08.289563.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2024_02_09T12_26_08.289563", "path": ["results_2024-02-09T12-26-08.289563.parquet"]}, {"split": "latest", "path": ["results_2024-02-09T12-26-08.289563.parquet"]}]}]}
2024-02-09T12:28:46+00:00
[]
[]
TAGS #region-us
# Dataset Card for Evaluation run of jondurbin/bagel-dpo-7b-v0.4 Dataset automatically created during the evaluation run of model jondurbin/bagel-dpo-7b-v0.4 on the Open LLM Leaderboard. The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard). To load the details from a run, you can for instance do the following: ## Latest results These are the latest results from run 2024-02-09T12:26:08.289563 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval): ## Dataset Details ### Dataset Description - Curated by: - Funded by [optional]: - Shared by [optional]: - Language(s) (NLP): - License: ### Dataset Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Out-of-Scope Use ## Dataset Structure ## Dataset Creation ### Curation Rationale ### Source Data #### Data Collection and Processing #### Who are the source data producers? ### Annotations [optional] #### Annotation process #### Who are the annotators? #### Personal and Sensitive Information ## Bias, Risks, and Limitations ### Recommendations Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Dataset Card Authors [optional] ## Dataset Card Contact
[ "# Dataset Card for Evaluation run of jondurbin/bagel-dpo-7b-v0.4\n\n\n\nDataset automatically created during the evaluation run of model jondurbin/bagel-dpo-7b-v0.4 on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2024-02-09T12:26:08.289563(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. 
More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Evaluation run of jondurbin/bagel-dpo-7b-v0.4\n\n\n\nDataset automatically created during the evaluation run of model jondurbin/bagel-dpo-7b-v0.4 on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2024-02-09T12:26:08.289563(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. 
More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
b127778caae6638338873ab5d0913c7b37c91a7b
# Dataset Card for Evaluation run of ericpolewski/Palworld-SME-13b

<!-- Provide a quick summary of the dataset. -->

Dataset automatically created during the evaluation run of model [ericpolewski/Palworld-SME-13b](https://huggingface.co/ericpolewski/Palworld-SME-13b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).

The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.

The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).

To load the details from a run, you can for instance do the following:

```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_ericpolewski__Palworld-SME-13b",
	"harness_winogrande_5",
	split="train")
```

## Latest results

These are the [latest results from run 2024-02-09T12:30:34.834503](https://huggingface.co/datasets/open-llm-leaderboard/details_ericpolewski__Palworld-SME-13b/blob/main/results_2024-02-09T12-30-34.834503.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the "latest" split for each eval): ```python { "all": { "acc": 0.532296003908677, "acc_stderr": 0.033825002823228846, "acc_norm": 0.5413466673673525, "acc_norm_stderr": 0.034679022812202726, "mc1": 0.3243574051407589, "mc1_stderr": 0.016387976779647935, "mc2": 0.4666625095183999, "mc2_stderr": 0.015175138209414976 }, "harness|arc:challenge|25": { "acc": 0.5162116040955631, "acc_stderr": 0.014603708567414945, "acc_norm": 0.5554607508532423, "acc_norm_stderr": 0.014521226405627075 }, "harness|hellaswag|10": { "acc": 0.6077474606652061, "acc_stderr": 0.004872546302641848, "acc_norm": 0.808105954989046, "acc_norm_stderr": 0.003929854025801025 }, "harness|hendrycksTest-abstract_algebra|5": { "acc": 0.3, "acc_stderr": 0.04605661864718381, "acc_norm": 0.3, "acc_norm_stderr": 0.04605661864718381 }, "harness|hendrycksTest-anatomy|5": { "acc": 0.4666666666666667, "acc_stderr": 0.043097329010363554, "acc_norm": 0.4666666666666667, "acc_norm_stderr": 0.043097329010363554 }, "harness|hendrycksTest-astronomy|5": { "acc": 0.5, "acc_stderr": 0.04068942293855797, "acc_norm": 0.5, "acc_norm_stderr": 0.04068942293855797 }, "harness|hendrycksTest-business_ethics|5": { "acc": 0.56, "acc_stderr": 0.04988876515698589, "acc_norm": 0.56, "acc_norm_stderr": 0.04988876515698589 }, "harness|hendrycksTest-clinical_knowledge|5": { "acc": 0.5924528301886792, "acc_stderr": 0.030242233800854498, "acc_norm": 0.5924528301886792, "acc_norm_stderr": 0.030242233800854498 }, "harness|hendrycksTest-college_biology|5": { "acc": 0.5486111111111112, "acc_stderr": 0.04161402398403279, "acc_norm": 0.5486111111111112, "acc_norm_stderr": 0.04161402398403279 }, "harness|hendrycksTest-college_chemistry|5": { "acc": 0.35, "acc_stderr": 0.04793724854411019, "acc_norm": 0.35, "acc_norm_stderr": 0.04793724854411019 }, "harness|hendrycksTest-college_computer_science|5": { "acc": 0.32, "acc_stderr": 0.04688261722621504, "acc_norm": 0.32, "acc_norm_stderr": 0.04688261722621504 }, 
"harness|hendrycksTest-college_mathematics|5": { "acc": 0.31, "acc_stderr": 0.04648231987117316, "acc_norm": 0.31, "acc_norm_stderr": 0.04648231987117316 }, "harness|hendrycksTest-college_medicine|5": { "acc": 0.4277456647398844, "acc_stderr": 0.037724468575180255, "acc_norm": 0.4277456647398844, "acc_norm_stderr": 0.037724468575180255 }, "harness|hendrycksTest-college_physics|5": { "acc": 0.3137254901960784, "acc_stderr": 0.04617034827006716, "acc_norm": 0.3137254901960784, "acc_norm_stderr": 0.04617034827006716 }, "harness|hendrycksTest-computer_security|5": { "acc": 0.62, "acc_stderr": 0.048783173121456316, "acc_norm": 0.62, "acc_norm_stderr": 0.048783173121456316 }, "harness|hendrycksTest-conceptual_physics|5": { "acc": 0.39574468085106385, "acc_stderr": 0.031967586978353627, "acc_norm": 0.39574468085106385, "acc_norm_stderr": 0.031967586978353627 }, "harness|hendrycksTest-econometrics|5": { "acc": 0.2631578947368421, "acc_stderr": 0.04142439719489361, "acc_norm": 0.2631578947368421, "acc_norm_stderr": 0.04142439719489361 }, "harness|hendrycksTest-electrical_engineering|5": { "acc": 0.4689655172413793, "acc_stderr": 0.04158632762097828, "acc_norm": 0.4689655172413793, "acc_norm_stderr": 0.04158632762097828 }, "harness|hendrycksTest-elementary_mathematics|5": { "acc": 0.30687830687830686, "acc_stderr": 0.023752928712112133, "acc_norm": 0.30687830687830686, "acc_norm_stderr": 0.023752928712112133 }, "harness|hendrycksTest-formal_logic|5": { "acc": 0.36507936507936506, "acc_stderr": 0.04306241259127153, "acc_norm": 0.36507936507936506, "acc_norm_stderr": 0.04306241259127153 }, "harness|hendrycksTest-global_facts|5": { "acc": 0.39, "acc_stderr": 0.049020713000019756, "acc_norm": 0.39, "acc_norm_stderr": 0.049020713000019756 }, "harness|hendrycksTest-high_school_biology|5": { "acc": 0.6258064516129033, "acc_stderr": 0.0275289042998457, "acc_norm": 0.6258064516129033, "acc_norm_stderr": 0.0275289042998457 }, "harness|hendrycksTest-high_school_chemistry|5": { "acc": 
0.3694581280788177, "acc_stderr": 0.03395970381998574, "acc_norm": 0.3694581280788177, "acc_norm_stderr": 0.03395970381998574 }, "harness|hendrycksTest-high_school_computer_science|5": { "acc": 0.55, "acc_stderr": 0.049999999999999996, "acc_norm": 0.55, "acc_norm_stderr": 0.049999999999999996 }, "harness|hendrycksTest-high_school_european_history|5": { "acc": 0.6545454545454545, "acc_stderr": 0.03713158067481913, "acc_norm": 0.6545454545454545, "acc_norm_stderr": 0.03713158067481913 }, "harness|hendrycksTest-high_school_geography|5": { "acc": 0.6666666666666666, "acc_stderr": 0.033586181457325226, "acc_norm": 0.6666666666666666, "acc_norm_stderr": 0.033586181457325226 }, "harness|hendrycksTest-high_school_government_and_politics|5": { "acc": 0.7875647668393783, "acc_stderr": 0.029519282616817234, "acc_norm": 0.7875647668393783, "acc_norm_stderr": 0.029519282616817234 }, "harness|hendrycksTest-high_school_macroeconomics|5": { "acc": 0.5, "acc_stderr": 0.02535100632816969, "acc_norm": 0.5, "acc_norm_stderr": 0.02535100632816969 }, "harness|hendrycksTest-high_school_mathematics|5": { "acc": 0.3074074074074074, "acc_stderr": 0.028133252578815635, "acc_norm": 0.3074074074074074, "acc_norm_stderr": 0.028133252578815635 }, "harness|hendrycksTest-high_school_microeconomics|5": { "acc": 0.6008403361344538, "acc_stderr": 0.03181110032413925, "acc_norm": 0.6008403361344538, "acc_norm_stderr": 0.03181110032413925 }, "harness|hendrycksTest-high_school_physics|5": { "acc": 0.39072847682119205, "acc_stderr": 0.03983798306659809, "acc_norm": 0.39072847682119205, "acc_norm_stderr": 0.03983798306659809 }, "harness|hendrycksTest-high_school_psychology|5": { "acc": 0.7376146788990826, "acc_stderr": 0.018861885021534734, "acc_norm": 0.7376146788990826, "acc_norm_stderr": 0.018861885021534734 }, "harness|hendrycksTest-high_school_statistics|5": { "acc": 0.4074074074074074, "acc_stderr": 0.03350991604696043, "acc_norm": 0.4074074074074074, "acc_norm_stderr": 0.03350991604696043 }, 
"harness|hendrycksTest-high_school_us_history|5": { "acc": 0.7696078431372549, "acc_stderr": 0.029554292605695066, "acc_norm": 0.7696078431372549, "acc_norm_stderr": 0.029554292605695066 }, "harness|hendrycksTest-high_school_world_history|5": { "acc": 0.7552742616033755, "acc_stderr": 0.027985699387036416, "acc_norm": 0.7552742616033755, "acc_norm_stderr": 0.027985699387036416 }, "harness|hendrycksTest-human_aging|5": { "acc": 0.6188340807174888, "acc_stderr": 0.03259625118416828, "acc_norm": 0.6188340807174888, "acc_norm_stderr": 0.03259625118416828 }, "harness|hendrycksTest-human_sexuality|5": { "acc": 0.6183206106870229, "acc_stderr": 0.042607351576445594, "acc_norm": 0.6183206106870229, "acc_norm_stderr": 0.042607351576445594 }, "harness|hendrycksTest-international_law|5": { "acc": 0.7024793388429752, "acc_stderr": 0.04173349148083499, "acc_norm": 0.7024793388429752, "acc_norm_stderr": 0.04173349148083499 }, "harness|hendrycksTest-jurisprudence|5": { "acc": 0.7222222222222222, "acc_stderr": 0.04330043749650742, "acc_norm": 0.7222222222222222, "acc_norm_stderr": 0.04330043749650742 }, "harness|hendrycksTest-logical_fallacies|5": { "acc": 0.6441717791411042, "acc_stderr": 0.03761521380046734, "acc_norm": 0.6441717791411042, "acc_norm_stderr": 0.03761521380046734 }, "harness|hendrycksTest-machine_learning|5": { "acc": 0.39285714285714285, "acc_stderr": 0.04635550135609976, "acc_norm": 0.39285714285714285, "acc_norm_stderr": 0.04635550135609976 }, "harness|hendrycksTest-management|5": { "acc": 0.7184466019417476, "acc_stderr": 0.044532548363264673, "acc_norm": 0.7184466019417476, "acc_norm_stderr": 0.044532548363264673 }, "harness|hendrycksTest-marketing|5": { "acc": 0.7863247863247863, "acc_stderr": 0.02685345037700917, "acc_norm": 0.7863247863247863, "acc_norm_stderr": 0.02685345037700917 }, "harness|hendrycksTest-medical_genetics|5": { "acc": 0.58, "acc_stderr": 0.049604496374885836, "acc_norm": 0.58, "acc_norm_stderr": 0.049604496374885836 }, 
"harness|hendrycksTest-miscellaneous|5": { "acc": 0.719029374201788, "acc_stderr": 0.01607312785122122, "acc_norm": 0.719029374201788, "acc_norm_stderr": 0.01607312785122122 }, "harness|hendrycksTest-moral_disputes|5": { "acc": 0.5982658959537572, "acc_stderr": 0.026394104177643634, "acc_norm": 0.5982658959537572, "acc_norm_stderr": 0.026394104177643634 }, "harness|hendrycksTest-moral_scenarios|5": { "acc": 0.3094972067039106, "acc_stderr": 0.01546116900237154, "acc_norm": 0.3094972067039106, "acc_norm_stderr": 0.01546116900237154 }, "harness|hendrycksTest-nutrition|5": { "acc": 0.5620915032679739, "acc_stderr": 0.02840830202033269, "acc_norm": 0.5620915032679739, "acc_norm_stderr": 0.02840830202033269 }, "harness|hendrycksTest-philosophy|5": { "acc": 0.6302250803858521, "acc_stderr": 0.027417996705630995, "acc_norm": 0.6302250803858521, "acc_norm_stderr": 0.027417996705630995 }, "harness|hendrycksTest-prehistory|5": { "acc": 0.6450617283950617, "acc_stderr": 0.02662415247884585, "acc_norm": 0.6450617283950617, "acc_norm_stderr": 0.02662415247884585 }, "harness|hendrycksTest-professional_accounting|5": { "acc": 0.40070921985815605, "acc_stderr": 0.029233465745573086, "acc_norm": 0.40070921985815605, "acc_norm_stderr": 0.029233465745573086 }, "harness|hendrycksTest-professional_law|5": { "acc": 0.4426336375488918, "acc_stderr": 0.01268590653820624, "acc_norm": 0.4426336375488918, "acc_norm_stderr": 0.01268590653820624 }, "harness|hendrycksTest-professional_medicine|5": { "acc": 0.5147058823529411, "acc_stderr": 0.03035969707904612, "acc_norm": 0.5147058823529411, "acc_norm_stderr": 0.03035969707904612 }, "harness|hendrycksTest-professional_psychology|5": { "acc": 0.5490196078431373, "acc_stderr": 0.020130388312904524, "acc_norm": 0.5490196078431373, "acc_norm_stderr": 0.020130388312904524 }, "harness|hendrycksTest-public_relations|5": { "acc": 0.6181818181818182, "acc_stderr": 0.046534298079135075, "acc_norm": 0.6181818181818182, "acc_norm_stderr": 
0.046534298079135075 }, "harness|hendrycksTest-security_studies|5": { "acc": 0.5510204081632653, "acc_stderr": 0.03184213866687579, "acc_norm": 0.5510204081632653, "acc_norm_stderr": 0.03184213866687579 }, "harness|hendrycksTest-sociology|5": { "acc": 0.6915422885572139, "acc_stderr": 0.03265819588512697, "acc_norm": 0.6915422885572139, "acc_norm_stderr": 0.03265819588512697 }, "harness|hendrycksTest-us_foreign_policy|5": { "acc": 0.75, "acc_stderr": 0.04351941398892446, "acc_norm": 0.75, "acc_norm_stderr": 0.04351941398892446 }, "harness|hendrycksTest-virology|5": { "acc": 0.4397590361445783, "acc_stderr": 0.03864139923699122, "acc_norm": 0.4397590361445783, "acc_norm_stderr": 0.03864139923699122 }, "harness|hendrycksTest-world_religions|5": { "acc": 0.7485380116959064, "acc_stderr": 0.033275044238468436, "acc_norm": 0.7485380116959064, "acc_norm_stderr": 0.033275044238468436 }, "harness|truthfulqa:mc|0": { "mc1": 0.3243574051407589, "mc1_stderr": 0.016387976779647935, "mc2": 0.4666625095183999, "mc2_stderr": 0.015175138209414976 }, "harness|winogrande|5": { "acc": 0.7482241515390686, "acc_stderr": 0.012198489100259781 }, "harness|gsm8k|5": { "acc": 0.021986353297952996, "acc_stderr": 0.004039162758110039 } } ``` ## Dataset Details ### Dataset Description <!-- Provide a longer summary of what this dataset is. --> - **Curated by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] ### Dataset Sources [optional] <!-- Provide the basic links for the dataset. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the dataset is intended to be used. --> ### Direct Use <!-- This section describes suitable use cases for the dataset. 
--> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. --> [More Information Needed] ## Dataset Structure <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> [More Information Needed] ## Dataset Creation ### Curation Rationale <!-- Motivation for the creation of this dataset. --> [More Information Needed] ### Source Data <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). --> #### Data Collection and Processing <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. --> [More Information Needed] #### Who are the source data producers? <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. --> [More Information Needed] ### Annotations [optional] <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. --> #### Annotation process <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. --> [More Information Needed] #### Who are the annotators? <!-- This section describes the people or systems who created the annotations. 
--> [More Information Needed] #### Personal and Sensitive Information <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. ## Citation [optional] <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Dataset Card Authors [optional] [More Information Needed] ## Dataset Card Contact [More Information Needed]
open-llm-leaderboard/details_ericpolewski__Palworld-SME-13b
[ "region:us" ]
2024-02-09T12:32:56+00:00
{"pretty_name": "Evaluation run of ericpolewski/Palworld-SME-13b", "dataset_summary": "Dataset automatically created during the evaluation run of model [ericpolewski/Palworld-SME-13b](https://huggingface.co/ericpolewski/Palworld-SME-13b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_ericpolewski__Palworld-SME-13b\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2024-02-09T12:30:34.834503](https://huggingface.co/datasets/open-llm-leaderboard/details_ericpolewski__Palworld-SME-13b/blob/main/results_2024-02-09T12-30-34.834503.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.532296003908677,\n \"acc_stderr\": 0.033825002823228846,\n \"acc_norm\": 0.5413466673673525,\n \"acc_norm_stderr\": 0.034679022812202726,\n \"mc1\": 0.3243574051407589,\n \"mc1_stderr\": 0.016387976779647935,\n \"mc2\": 0.4666625095183999,\n \"mc2_stderr\": 0.015175138209414976\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.5162116040955631,\n \"acc_stderr\": 0.014603708567414945,\n \"acc_norm\": 0.5554607508532423,\n \"acc_norm_stderr\": 0.014521226405627075\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6077474606652061,\n \"acc_stderr\": 0.004872546302641848,\n \"acc_norm\": 0.808105954989046,\n \"acc_norm_stderr\": 0.003929854025801025\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.3,\n \"acc_stderr\": 0.04605661864718381,\n \"acc_norm\": 0.3,\n \"acc_norm_stderr\": 0.04605661864718381\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.4666666666666667,\n \"acc_stderr\": 0.043097329010363554,\n \"acc_norm\": 0.4666666666666667,\n \"acc_norm_stderr\": 0.043097329010363554\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.5,\n \"acc_stderr\": 0.04068942293855797,\n \"acc_norm\": 0.5,\n \"acc_norm_stderr\": 0.04068942293855797\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.56,\n \"acc_stderr\": 0.04988876515698589,\n \"acc_norm\": 0.56,\n \"acc_norm_stderr\": 0.04988876515698589\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.5924528301886792,\n \"acc_stderr\": 0.030242233800854498,\n \"acc_norm\": 0.5924528301886792,\n \"acc_norm_stderr\": 0.030242233800854498\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.5486111111111112,\n \"acc_stderr\": 0.04161402398403279,\n \"acc_norm\": 0.5486111111111112,\n \"acc_norm_stderr\": 0.04161402398403279\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.35,\n \"acc_stderr\": 
0.04793724854411019,\n \"acc_norm\": 0.35,\n \"acc_norm_stderr\": 0.04793724854411019\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.32,\n \"acc_stderr\": 0.04688261722621504,\n \"acc_norm\": 0.32,\n \"acc_norm_stderr\": 0.04688261722621504\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.31,\n \"acc_stderr\": 0.04648231987117316,\n \"acc_norm\": 0.31,\n \"acc_norm_stderr\": 0.04648231987117316\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.4277456647398844,\n \"acc_stderr\": 0.037724468575180255,\n \"acc_norm\": 0.4277456647398844,\n \"acc_norm_stderr\": 0.037724468575180255\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.3137254901960784,\n \"acc_stderr\": 0.04617034827006716,\n \"acc_norm\": 0.3137254901960784,\n \"acc_norm_stderr\": 0.04617034827006716\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.62,\n \"acc_stderr\": 0.048783173121456316,\n \"acc_norm\": 0.62,\n \"acc_norm_stderr\": 0.048783173121456316\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.39574468085106385,\n \"acc_stderr\": 0.031967586978353627,\n \"acc_norm\": 0.39574468085106385,\n \"acc_norm_stderr\": 0.031967586978353627\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.2631578947368421,\n \"acc_stderr\": 0.04142439719489361,\n \"acc_norm\": 0.2631578947368421,\n \"acc_norm_stderr\": 0.04142439719489361\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.4689655172413793,\n \"acc_stderr\": 0.04158632762097828,\n \"acc_norm\": 0.4689655172413793,\n \"acc_norm_stderr\": 0.04158632762097828\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.30687830687830686,\n \"acc_stderr\": 0.023752928712112133,\n \"acc_norm\": 0.30687830687830686,\n \"acc_norm_stderr\": 0.023752928712112133\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.36507936507936506,\n \"acc_stderr\": 0.04306241259127153,\n 
\"acc_norm\": 0.36507936507936506,\n \"acc_norm_stderr\": 0.04306241259127153\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.39,\n \"acc_stderr\": 0.049020713000019756,\n \"acc_norm\": 0.39,\n \"acc_norm_stderr\": 0.049020713000019756\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.6258064516129033,\n \"acc_stderr\": 0.0275289042998457,\n \"acc_norm\": 0.6258064516129033,\n \"acc_norm_stderr\": 0.0275289042998457\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.3694581280788177,\n \"acc_stderr\": 0.03395970381998574,\n \"acc_norm\": 0.3694581280788177,\n \"acc_norm_stderr\": 0.03395970381998574\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.55,\n \"acc_stderr\": 0.049999999999999996,\n \"acc_norm\": 0.55,\n \"acc_norm_stderr\": 0.049999999999999996\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.6545454545454545,\n \"acc_stderr\": 0.03713158067481913,\n \"acc_norm\": 0.6545454545454545,\n \"acc_norm_stderr\": 0.03713158067481913\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.6666666666666666,\n \"acc_stderr\": 0.033586181457325226,\n \"acc_norm\": 0.6666666666666666,\n \"acc_norm_stderr\": 0.033586181457325226\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.7875647668393783,\n \"acc_stderr\": 0.029519282616817234,\n \"acc_norm\": 0.7875647668393783,\n \"acc_norm_stderr\": 0.029519282616817234\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.5,\n \"acc_stderr\": 0.02535100632816969,\n \"acc_norm\": 0.5,\n \"acc_norm_stderr\": 0.02535100632816969\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.3074074074074074,\n \"acc_stderr\": 0.028133252578815635,\n \"acc_norm\": 0.3074074074074074,\n \"acc_norm_stderr\": 0.028133252578815635\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 
0.6008403361344538,\n \"acc_stderr\": 0.03181110032413925,\n \"acc_norm\": 0.6008403361344538,\n \"acc_norm_stderr\": 0.03181110032413925\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.39072847682119205,\n \"acc_stderr\": 0.03983798306659809,\n \"acc_norm\": 0.39072847682119205,\n \"acc_norm_stderr\": 0.03983798306659809\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.7376146788990826,\n \"acc_stderr\": 0.018861885021534734,\n \"acc_norm\": 0.7376146788990826,\n \"acc_norm_stderr\": 0.018861885021534734\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.4074074074074074,\n \"acc_stderr\": 0.03350991604696043,\n \"acc_norm\": 0.4074074074074074,\n \"acc_norm_stderr\": 0.03350991604696043\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.7696078431372549,\n \"acc_stderr\": 0.029554292605695066,\n \"acc_norm\": 0.7696078431372549,\n \"acc_norm_stderr\": 0.029554292605695066\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.7552742616033755,\n \"acc_stderr\": 0.027985699387036416,\n \"acc_norm\": 0.7552742616033755,\n \"acc_norm_stderr\": 0.027985699387036416\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6188340807174888,\n \"acc_stderr\": 0.03259625118416828,\n \"acc_norm\": 0.6188340807174888,\n \"acc_norm_stderr\": 0.03259625118416828\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.6183206106870229,\n \"acc_stderr\": 0.042607351576445594,\n \"acc_norm\": 0.6183206106870229,\n \"acc_norm_stderr\": 0.042607351576445594\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.7024793388429752,\n \"acc_stderr\": 0.04173349148083499,\n \"acc_norm\": 0.7024793388429752,\n \"acc_norm_stderr\": 0.04173349148083499\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7222222222222222,\n \"acc_stderr\": 0.04330043749650742,\n \"acc_norm\": 0.7222222222222222,\n \"acc_norm_stderr\": 
0.04330043749650742\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.6441717791411042,\n \"acc_stderr\": 0.03761521380046734,\n \"acc_norm\": 0.6441717791411042,\n \"acc_norm_stderr\": 0.03761521380046734\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.39285714285714285,\n \"acc_stderr\": 0.04635550135609976,\n \"acc_norm\": 0.39285714285714285,\n \"acc_norm_stderr\": 0.04635550135609976\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.7184466019417476,\n \"acc_stderr\": 0.044532548363264673,\n \"acc_norm\": 0.7184466019417476,\n \"acc_norm_stderr\": 0.044532548363264673\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.7863247863247863,\n \"acc_stderr\": 0.02685345037700917,\n \"acc_norm\": 0.7863247863247863,\n \"acc_norm_stderr\": 0.02685345037700917\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.58,\n \"acc_stderr\": 0.049604496374885836,\n \"acc_norm\": 0.58,\n \"acc_norm_stderr\": 0.049604496374885836\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.719029374201788,\n \"acc_stderr\": 0.01607312785122122,\n \"acc_norm\": 0.719029374201788,\n \"acc_norm_stderr\": 0.01607312785122122\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.5982658959537572,\n \"acc_stderr\": 0.026394104177643634,\n \"acc_norm\": 0.5982658959537572,\n \"acc_norm_stderr\": 0.026394104177643634\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.3094972067039106,\n \"acc_stderr\": 0.01546116900237154,\n \"acc_norm\": 0.3094972067039106,\n \"acc_norm_stderr\": 0.01546116900237154\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.5620915032679739,\n \"acc_stderr\": 0.02840830202033269,\n \"acc_norm\": 0.5620915032679739,\n \"acc_norm_stderr\": 0.02840830202033269\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.6302250803858521,\n \"acc_stderr\": 0.027417996705630995,\n \"acc_norm\": 0.6302250803858521,\n \"acc_norm_stderr\": 
0.027417996705630995\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.6450617283950617,\n \"acc_stderr\": 0.02662415247884585,\n \"acc_norm\": 0.6450617283950617,\n \"acc_norm_stderr\": 0.02662415247884585\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.40070921985815605,\n \"acc_stderr\": 0.029233465745573086,\n \"acc_norm\": 0.40070921985815605,\n \"acc_norm_stderr\": 0.029233465745573086\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4426336375488918,\n \"acc_stderr\": 0.01268590653820624,\n \"acc_norm\": 0.4426336375488918,\n \"acc_norm_stderr\": 0.01268590653820624\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.5147058823529411,\n \"acc_stderr\": 0.03035969707904612,\n \"acc_norm\": 0.5147058823529411,\n \"acc_norm_stderr\": 0.03035969707904612\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.5490196078431373,\n \"acc_stderr\": 0.020130388312904524,\n \"acc_norm\": 0.5490196078431373,\n \"acc_norm_stderr\": 0.020130388312904524\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6181818181818182,\n \"acc_stderr\": 0.046534298079135075,\n \"acc_norm\": 0.6181818181818182,\n \"acc_norm_stderr\": 0.046534298079135075\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.5510204081632653,\n \"acc_stderr\": 0.03184213866687579,\n \"acc_norm\": 0.5510204081632653,\n \"acc_norm_stderr\": 0.03184213866687579\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.6915422885572139,\n \"acc_stderr\": 0.03265819588512697,\n \"acc_norm\": 0.6915422885572139,\n \"acc_norm_stderr\": 0.03265819588512697\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.75,\n \"acc_stderr\": 0.04351941398892446,\n \"acc_norm\": 0.75,\n \"acc_norm_stderr\": 0.04351941398892446\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.4397590361445783,\n \"acc_stderr\": 0.03864139923699122,\n \"acc_norm\": 0.4397590361445783,\n 
\"acc_norm_stderr\": 0.03864139923699122\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.7485380116959064,\n \"acc_stderr\": 0.033275044238468436,\n \"acc_norm\": 0.7485380116959064,\n \"acc_norm_stderr\": 0.033275044238468436\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.3243574051407589,\n \"mc1_stderr\": 0.016387976779647935,\n \"mc2\": 0.4666625095183999,\n \"mc2_stderr\": 0.015175138209414976\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7482241515390686,\n \"acc_stderr\": 0.012198489100259781\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.021986353297952996,\n \"acc_stderr\": 0.004039162758110039\n }\n}\n```", "repo_url": "https://huggingface.co/ericpolewski/Palworld-SME-13b", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2024_02_09T12_30_34.834503", "path": ["**/details_harness|arc:challenge|25_2024-02-09T12-30-34.834503.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2024-02-09T12-30-34.834503.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2024_02_09T12_30_34.834503", "path": ["**/details_harness|gsm8k|5_2024-02-09T12-30-34.834503.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2024-02-09T12-30-34.834503.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2024_02_09T12_30_34.834503", "path": ["**/details_harness|hellaswag|10_2024-02-09T12-30-34.834503.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2024-02-09T12-30-34.834503.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2024_02_09T12_30_34.834503", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-09T12-30-34.834503.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-02-09T12-30-34.834503.parquet", 
"**/details_harness|hendrycksTest-astronomy|5_2024-02-09T12-30-34.834503.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-02-09T12-30-34.834503.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-09T12-30-34.834503.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-02-09T12-30-34.834503.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-02-09T12-30-34.834503.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-02-09T12-30-34.834503.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-02-09T12-30-34.834503.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-02-09T12-30-34.834503.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-02-09T12-30-34.834503.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-02-09T12-30-34.834503.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-09T12-30-34.834503.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-02-09T12-30-34.834503.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-09T12-30-34.834503.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-09T12-30-34.834503.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-02-09T12-30-34.834503.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-02-09T12-30-34.834503.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-02-09T12-30-34.834503.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-09T12-30-34.834503.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-09T12-30-34.834503.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-09T12-30-34.834503.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-02-09T12-30-34.834503.parquet", 
"**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-09T12-30-34.834503.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-09T12-30-34.834503.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-09T12-30-34.834503.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-09T12-30-34.834503.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-02-09T12-30-34.834503.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-09T12-30-34.834503.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-09T12-30-34.834503.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-09T12-30-34.834503.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-09T12-30-34.834503.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-02-09T12-30-34.834503.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-02-09T12-30-34.834503.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-02-09T12-30-34.834503.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-02-09T12-30-34.834503.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-09T12-30-34.834503.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-02-09T12-30-34.834503.parquet", "**/details_harness|hendrycksTest-management|5_2024-02-09T12-30-34.834503.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-02-09T12-30-34.834503.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-02-09T12-30-34.834503.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-02-09T12-30-34.834503.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-02-09T12-30-34.834503.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-09T12-30-34.834503.parquet", 
"**/details_harness|hendrycksTest-nutrition|5_2024-02-09T12-30-34.834503.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-02-09T12-30-34.834503.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-02-09T12-30-34.834503.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-02-09T12-30-34.834503.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-02-09T12-30-34.834503.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-02-09T12-30-34.834503.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-02-09T12-30-34.834503.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-02-09T12-30-34.834503.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-02-09T12-30-34.834503.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-02-09T12-30-34.834503.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-09T12-30-34.834503.parquet", "**/details_harness|hendrycksTest-virology|5_2024-02-09T12-30-34.834503.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-02-09T12-30-34.834503.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-09T12-30-34.834503.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-02-09T12-30-34.834503.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-02-09T12-30-34.834503.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-02-09T12-30-34.834503.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-09T12-30-34.834503.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-02-09T12-30-34.834503.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-02-09T12-30-34.834503.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-02-09T12-30-34.834503.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-02-09T12-30-34.834503.parquet", 
"**/details_harness|hendrycksTest-college_medicine|5_2024-02-09T12-30-34.834503.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-02-09T12-30-34.834503.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-02-09T12-30-34.834503.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-09T12-30-34.834503.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-02-09T12-30-34.834503.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-09T12-30-34.834503.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-09T12-30-34.834503.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-02-09T12-30-34.834503.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-02-09T12-30-34.834503.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-02-09T12-30-34.834503.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-09T12-30-34.834503.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-09T12-30-34.834503.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-09T12-30-34.834503.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-02-09T12-30-34.834503.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-09T12-30-34.834503.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-09T12-30-34.834503.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-09T12-30-34.834503.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-09T12-30-34.834503.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-02-09T12-30-34.834503.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-09T12-30-34.834503.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-09T12-30-34.834503.parquet", 
"**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-09T12-30-34.834503.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-09T12-30-34.834503.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-02-09T12-30-34.834503.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-02-09T12-30-34.834503.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-02-09T12-30-34.834503.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-02-09T12-30-34.834503.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-09T12-30-34.834503.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-02-09T12-30-34.834503.parquet", "**/details_harness|hendrycksTest-management|5_2024-02-09T12-30-34.834503.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-02-09T12-30-34.834503.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-02-09T12-30-34.834503.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-02-09T12-30-34.834503.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-02-09T12-30-34.834503.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-09T12-30-34.834503.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-02-09T12-30-34.834503.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-02-09T12-30-34.834503.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-02-09T12-30-34.834503.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-02-09T12-30-34.834503.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-02-09T12-30-34.834503.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-02-09T12-30-34.834503.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-02-09T12-30-34.834503.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-02-09T12-30-34.834503.parquet", 
"**/details_harness|hendrycksTest-security_studies|5_2024-02-09T12-30-34.834503.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-02-09T12-30-34.834503.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-09T12-30-34.834503.parquet", "**/details_harness|hendrycksTest-virology|5_2024-02-09T12-30-34.834503.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-02-09T12-30-34.834503.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2024_02_09T12_30_34.834503", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-09T12-30-34.834503.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-09T12-30-34.834503.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2024_02_09T12_30_34.834503", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-02-09T12-30-34.834503.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-02-09T12-30-34.834503.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2024_02_09T12_30_34.834503", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-02-09T12-30-34.834503.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-02-09T12-30-34.834503.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2024_02_09T12_30_34.834503", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-02-09T12-30-34.834503.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-02-09T12-30-34.834503.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2024_02_09T12_30_34.834503", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-09T12-30-34.834503.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-09T12-30-34.834503.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2024_02_09T12_30_34.834503", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-02-09T12-30-34.834503.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-02-09T12-30-34.834503.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2024_02_09T12_30_34.834503", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-02-09T12-30-34.834503.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-02-09T12-30-34.834503.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2024_02_09T12_30_34.834503", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-02-09T12-30-34.834503.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-02-09T12-30-34.834503.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2024_02_09T12_30_34.834503", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-02-09T12-30-34.834503.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-02-09T12-30-34.834503.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2024_02_09T12_30_34.834503", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-02-09T12-30-34.834503.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-02-09T12-30-34.834503.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2024_02_09T12_30_34.834503", "path": 
["**/details_harness|hendrycksTest-college_physics|5_2024-02-09T12-30-34.834503.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-02-09T12-30-34.834503.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2024_02_09T12_30_34.834503", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-02-09T12-30-34.834503.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-02-09T12-30-34.834503.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2024_02_09T12_30_34.834503", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-09T12-30-34.834503.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-09T12-30-34.834503.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2024_02_09T12_30_34.834503", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-02-09T12-30-34.834503.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-02-09T12-30-34.834503.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2024_02_09T12_30_34.834503", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-09T12-30-34.834503.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-09T12-30-34.834503.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2024_02_09T12_30_34.834503", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-09T12-30-34.834503.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-09T12-30-34.834503.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", 
"data_files": [{"split": "2024_02_09T12_30_34.834503", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-02-09T12-30-34.834503.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-02-09T12-30-34.834503.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2024_02_09T12_30_34.834503", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-02-09T12-30-34.834503.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-02-09T12-30-34.834503.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2024_02_09T12_30_34.834503", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-02-09T12-30-34.834503.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-02-09T12-30-34.834503.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2024_02_09T12_30_34.834503", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-09T12-30-34.834503.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-09T12-30-34.834503.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2024_02_09T12_30_34.834503", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-09T12-30-34.834503.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-09T12-30-34.834503.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2024_02_09T12_30_34.834503", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-09T12-30-34.834503.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-09T12-30-34.834503.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2024_02_09T12_30_34.834503", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-02-09T12-30-34.834503.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-02-09T12-30-34.834503.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2024_02_09T12_30_34.834503", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-09T12-30-34.834503.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-09T12-30-34.834503.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2024_02_09T12_30_34.834503", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-09T12-30-34.834503.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-09T12-30-34.834503.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2024_02_09T12_30_34.834503", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-09T12-30-34.834503.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-09T12-30-34.834503.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2024_02_09T12_30_34.834503", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-09T12-30-34.834503.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-09T12-30-34.834503.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2024_02_09T12_30_34.834503", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-02-09T12-30-34.834503.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-02-09T12-30-34.834503.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2024_02_09T12_30_34.834503", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-09T12-30-34.834503.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-09T12-30-34.834503.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2024_02_09T12_30_34.834503", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-09T12-30-34.834503.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-09T12-30-34.834503.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2024_02_09T12_30_34.834503", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-09T12-30-34.834503.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-09T12-30-34.834503.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2024_02_09T12_30_34.834503", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-09T12-30-34.834503.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-09T12-30-34.834503.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2024_02_09T12_30_34.834503", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-02-09T12-30-34.834503.parquet"]}, {"split": "latest", 
"path": ["**/details_harness|hendrycksTest-human_aging|5_2024-02-09T12-30-34.834503.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2024_02_09T12_30_34.834503", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-02-09T12-30-34.834503.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-02-09T12-30-34.834503.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2024_02_09T12_30_34.834503", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-02-09T12-30-34.834503.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-02-09T12-30-34.834503.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2024_02_09T12_30_34.834503", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-02-09T12-30-34.834503.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-02-09T12-30-34.834503.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2024_02_09T12_30_34.834503", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-09T12-30-34.834503.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-09T12-30-34.834503.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2024_02_09T12_30_34.834503", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-02-09T12-30-34.834503.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-02-09T12-30-34.834503.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2024_02_09T12_30_34.834503", "path": ["**/details_harness|hendrycksTest-management|5_2024-02-09T12-30-34.834503.parquet"]}, 
{"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2024-02-09T12-30-34.834503.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2024_02_09T12_30_34.834503", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-02-09T12-30-34.834503.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-02-09T12-30-34.834503.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2024_02_09T12_30_34.834503", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-02-09T12-30-34.834503.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-02-09T12-30-34.834503.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2024_02_09T12_30_34.834503", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-02-09T12-30-34.834503.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-02-09T12-30-34.834503.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2024_02_09T12_30_34.834503", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-02-09T12-30-34.834503.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-02-09T12-30-34.834503.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2024_02_09T12_30_34.834503", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-09T12-30-34.834503.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-09T12-30-34.834503.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2024_02_09T12_30_34.834503", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-02-09T12-30-34.834503.parquet"]}, {"split": "latest", 
"path": ["**/details_harness|hendrycksTest-nutrition|5_2024-02-09T12-30-34.834503.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2024_02_09T12_30_34.834503", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-02-09T12-30-34.834503.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-02-09T12-30-34.834503.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2024_02_09T12_30_34.834503", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-02-09T12-30-34.834503.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-02-09T12-30-34.834503.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2024_02_09T12_30_34.834503", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-02-09T12-30-34.834503.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-02-09T12-30-34.834503.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2024_02_09T12_30_34.834503", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-02-09T12-30-34.834503.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-02-09T12-30-34.834503.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2024_02_09T12_30_34.834503", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-02-09T12-30-34.834503.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-02-09T12-30-34.834503.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2024_02_09T12_30_34.834503", "path": 
["**/details_harness|hendrycksTest-professional_psychology|5_2024-02-09T12-30-34.834503.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-02-09T12-30-34.834503.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2024_02_09T12_30_34.834503", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-02-09T12-30-34.834503.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-02-09T12-30-34.834503.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2024_02_09T12_30_34.834503", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-02-09T12-30-34.834503.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-02-09T12-30-34.834503.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2024_02_09T12_30_34.834503", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-02-09T12-30-34.834503.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-02-09T12-30-34.834503.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2024_02_09T12_30_34.834503", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-09T12-30-34.834503.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-09T12-30-34.834503.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2024_02_09T12_30_34.834503", "path": ["**/details_harness|hendrycksTest-virology|5_2024-02-09T12-30-34.834503.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2024-02-09T12-30-34.834503.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2024_02_09T12_30_34.834503", 
"path": ["**/details_harness|hendrycksTest-world_religions|5_2024-02-09T12-30-34.834503.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-02-09T12-30-34.834503.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2024_02_09T12_30_34.834503", "path": ["**/details_harness|truthfulqa:mc|0_2024-02-09T12-30-34.834503.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2024-02-09T12-30-34.834503.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2024_02_09T12_30_34.834503", "path": ["**/details_harness|winogrande|5_2024-02-09T12-30-34.834503.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2024-02-09T12-30-34.834503.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2024_02_09T12_30_34.834503", "path": ["results_2024-02-09T12-30-34.834503.parquet"]}, {"split": "latest", "path": ["results_2024-02-09T12-30-34.834503.parquet"]}]}]}
2024-02-09T12:33:19+00:00
TAGS #region-us
# Dataset Card for Evaluation run of ericpolewski/Palworld-SME-13b

Dataset automatically created during the evaluation run of model ericpolewski/Palworld-SME-13b on the Open LLM Leaderboard.

The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.

The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).

To load the details from a run, you can for instance do the following:

## Latest results

These are the latest results from run 2024-02-09T12:30:34.834503 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):

## Dataset Details

### Dataset Description

- Curated by:
- Funded by [optional]:
- Shared by [optional]:
- Language(s) (NLP):
- License:

### Dataset Sources [optional]

- Repository:
- Paper [optional]:
- Demo [optional]:

## Uses

### Direct Use

### Out-of-Scope Use

## Dataset Structure

## Dataset Creation

### Curation Rationale

### Source Data

#### Data Collection and Processing

#### Who are the source data producers?

### Annotations [optional]

#### Annotation process

#### Who are the annotators?

#### Personal and Sensitive Information

## Bias, Risks, and Limitations

### Recommendations

Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.

## Citation [optional]

BibTeX:

APA:

## Glossary [optional]

## More Information [optional]

## Dataset Card Authors [optional]

## Dataset Card Contact
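The loading step mentioned in the card can be sketched as follows. This is a minimal sketch: the repository name is an assumption based on the leaderboard's usual `details_<org>__<model>` naming convention, and `harness_winogrande_5` is one of the 63 configurations listed in the metadata above.

```python
def load_details(config: str = "harness_winogrande_5", split: str = "latest"):
    """Load one configuration of the evaluation-details dataset.

    The repository name below is an assumption based on the Open LLM
    Leaderboard's ``details_<org>__<model>`` naming convention.
    """
    from datasets import load_dataset  # Hugging Face `datasets` library

    repo = "open-llm-leaderboard/details_ericpolewski__Palworld-SME-13b"
    return load_dataset(repo, config, split=split)
```

For example, `load_details("harness_gsm8k_5")` would return the per-sample GSM8K details for the latest run, while the "results" configuration holds the aggregated metrics.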
[ "# Dataset Card for Evaluation run of ericpolewski/Palworld-SME-13b\n\n\n\nDataset automatically created during the evaluation run of model ericpolewski/Palworld-SME-13b on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2024-02-09T12:30:34.834503(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. 
More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Evaluation run of ericpolewski/Palworld-SME-13b\n\n\n\nDataset automatically created during the evaluation run of model ericpolewski/Palworld-SME-13b on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2024-02-09T12:30:34.834503(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. 
More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
9a1c69429728bf5471edd91755f634ab1383d8c1
# Dataset Card for Evaluation run of manishiitg/open-aditi-hi-v2 <!-- Provide a quick summary of the dataset. --> Dataset automatically created during the evaluation run of model [manishiitg/open-aditi-hi-v2](https://huggingface.co/manishiitg/open-aditi-hi-v2) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)). To load the details from a run, you can for instance do the following: ```python from datasets import load_dataset data = load_dataset("open-llm-leaderboard/details_manishiitg__open-aditi-hi-v2", "harness_winogrande_5", split="train") ``` ## Latest results These are the [latest results from run 2024-02-09T12:31:28.234042](https://huggingface.co/datasets/open-llm-leaderboard/details_manishiitg__open-aditi-hi-v2/blob/main/results_2024-02-09T12-31-28.234042.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. 
You find each in the results and the "latest" split for each eval): ```python { "all": { "acc": 0.6107466762705897, "acc_stderr": 0.032739376568698436, "acc_norm": 0.6172844425941606, "acc_norm_stderr": 0.0334154029479238, "mc1": 0.2974296205630355, "mc1_stderr": 0.01600265148736101, "mc2": 0.45839650469314347, "mc2_stderr": 0.014589308437993127 }, "harness|arc:challenge|25": { "acc": 0.5554607508532423, "acc_stderr": 0.01452122640562708, "acc_norm": 0.5938566552901023, "acc_norm_stderr": 0.014351656690097863 }, "harness|hellaswag|10": { "acc": 0.622087233618801, "acc_stderr": 0.004838747305783345, "acc_norm": 0.8200557657837084, "acc_norm_stderr": 0.0038335592281586663 }, "harness|hendrycksTest-abstract_algebra|5": { "acc": 0.28, "acc_stderr": 0.04512608598542128, "acc_norm": 0.28, "acc_norm_stderr": 0.04512608598542128 }, "harness|hendrycksTest-anatomy|5": { "acc": 0.562962962962963, "acc_stderr": 0.042849586397534015, "acc_norm": 0.562962962962963, "acc_norm_stderr": 0.042849586397534015 }, "harness|hendrycksTest-astronomy|5": { "acc": 0.6578947368421053, "acc_stderr": 0.03860731599316092, "acc_norm": 0.6578947368421053, "acc_norm_stderr": 0.03860731599316092 }, "harness|hendrycksTest-business_ethics|5": { "acc": 0.55, "acc_stderr": 0.05, "acc_norm": 0.55, "acc_norm_stderr": 0.05 }, "harness|hendrycksTest-clinical_knowledge|5": { "acc": 0.6452830188679245, "acc_stderr": 0.029445175328199593, "acc_norm": 0.6452830188679245, "acc_norm_stderr": 0.029445175328199593 }, "harness|hendrycksTest-college_biology|5": { "acc": 0.7361111111111112, "acc_stderr": 0.03685651095897532, "acc_norm": 0.7361111111111112, "acc_norm_stderr": 0.03685651095897532 }, "harness|hendrycksTest-college_chemistry|5": { "acc": 0.46, "acc_stderr": 0.05009082659620333, "acc_norm": 0.46, "acc_norm_stderr": 0.05009082659620333 }, "harness|hendrycksTest-college_computer_science|5": { "acc": 0.48, "acc_stderr": 0.050211673156867795, "acc_norm": 0.48, "acc_norm_stderr": 0.050211673156867795 }, 
"harness|hendrycksTest-college_mathematics|5": { "acc": 0.3, "acc_stderr": 0.046056618647183814, "acc_norm": 0.3, "acc_norm_stderr": 0.046056618647183814 }, "harness|hendrycksTest-college_medicine|5": { "acc": 0.6069364161849711, "acc_stderr": 0.0372424959581773, "acc_norm": 0.6069364161849711, "acc_norm_stderr": 0.0372424959581773 }, "harness|hendrycksTest-college_physics|5": { "acc": 0.43137254901960786, "acc_stderr": 0.04928099597287534, "acc_norm": 0.43137254901960786, "acc_norm_stderr": 0.04928099597287534 }, "harness|hendrycksTest-computer_security|5": { "acc": 0.74, "acc_stderr": 0.044084400227680794, "acc_norm": 0.74, "acc_norm_stderr": 0.044084400227680794 }, "harness|hendrycksTest-conceptual_physics|5": { "acc": 0.5148936170212766, "acc_stderr": 0.03267151848924777, "acc_norm": 0.5148936170212766, "acc_norm_stderr": 0.03267151848924777 }, "harness|hendrycksTest-econometrics|5": { "acc": 0.5, "acc_stderr": 0.047036043419179864, "acc_norm": 0.5, "acc_norm_stderr": 0.047036043419179864 }, "harness|hendrycksTest-electrical_engineering|5": { "acc": 0.5379310344827586, "acc_stderr": 0.04154659671707548, "acc_norm": 0.5379310344827586, "acc_norm_stderr": 0.04154659671707548 }, "harness|hendrycksTest-elementary_mathematics|5": { "acc": 0.4021164021164021, "acc_stderr": 0.025253032554997692, "acc_norm": 0.4021164021164021, "acc_norm_stderr": 0.025253032554997692 }, "harness|hendrycksTest-formal_logic|5": { "acc": 0.40476190476190477, "acc_stderr": 0.04390259265377562, "acc_norm": 0.40476190476190477, "acc_norm_stderr": 0.04390259265377562 }, "harness|hendrycksTest-global_facts|5": { "acc": 0.32, "acc_stderr": 0.046882617226215034, "acc_norm": 0.32, "acc_norm_stderr": 0.046882617226215034 }, "harness|hendrycksTest-high_school_biology|5": { "acc": 0.7677419354838709, "acc_stderr": 0.024022256130308235, "acc_norm": 0.7677419354838709, "acc_norm_stderr": 0.024022256130308235 }, "harness|hendrycksTest-high_school_chemistry|5": { "acc": 0.5221674876847291, "acc_stderr": 
0.03514528562175008, "acc_norm": 0.5221674876847291, "acc_norm_stderr": 0.03514528562175008 }, "harness|hendrycksTest-high_school_computer_science|5": { "acc": 0.63, "acc_stderr": 0.04852365870939099, "acc_norm": 0.63, "acc_norm_stderr": 0.04852365870939099 }, "harness|hendrycksTest-high_school_european_history|5": { "acc": 0.7636363636363637, "acc_stderr": 0.033175059300091805, "acc_norm": 0.7636363636363637, "acc_norm_stderr": 0.033175059300091805 }, "harness|hendrycksTest-high_school_geography|5": { "acc": 0.7676767676767676, "acc_stderr": 0.030088629490217487, "acc_norm": 0.7676767676767676, "acc_norm_stderr": 0.030088629490217487 }, "harness|hendrycksTest-high_school_government_and_politics|5": { "acc": 0.8808290155440415, "acc_stderr": 0.023381935348121434, "acc_norm": 0.8808290155440415, "acc_norm_stderr": 0.023381935348121434 }, "harness|hendrycksTest-high_school_macroeconomics|5": { "acc": 0.6102564102564103, "acc_stderr": 0.024726967886647074, "acc_norm": 0.6102564102564103, "acc_norm_stderr": 0.024726967886647074 }, "harness|hendrycksTest-high_school_mathematics|5": { "acc": 0.3296296296296296, "acc_stderr": 0.028661201116524586, "acc_norm": 0.3296296296296296, "acc_norm_stderr": 0.028661201116524586 }, "harness|hendrycksTest-high_school_microeconomics|5": { "acc": 0.7016806722689075, "acc_stderr": 0.029719142876342856, "acc_norm": 0.7016806722689075, "acc_norm_stderr": 0.029719142876342856 }, "harness|hendrycksTest-high_school_physics|5": { "acc": 0.32450331125827814, "acc_stderr": 0.038227469376587525, "acc_norm": 0.32450331125827814, "acc_norm_stderr": 0.038227469376587525 }, "harness|hendrycksTest-high_school_psychology|5": { "acc": 0.8275229357798165, "acc_stderr": 0.016197807956848047, "acc_norm": 0.8275229357798165, "acc_norm_stderr": 0.016197807956848047 }, "harness|hendrycksTest-high_school_statistics|5": { "acc": 0.4583333333333333, "acc_stderr": 0.03398110890294636, "acc_norm": 0.4583333333333333, "acc_norm_stderr": 0.03398110890294636 }, 
"harness|hendrycksTest-high_school_us_history|5": { "acc": 0.7990196078431373, "acc_stderr": 0.028125972265654373, "acc_norm": 0.7990196078431373, "acc_norm_stderr": 0.028125972265654373 }, "harness|hendrycksTest-high_school_world_history|5": { "acc": 0.7848101265822784, "acc_stderr": 0.026750826994676166, "acc_norm": 0.7848101265822784, "acc_norm_stderr": 0.026750826994676166 }, "harness|hendrycksTest-human_aging|5": { "acc": 0.6457399103139013, "acc_stderr": 0.032100621541349864, "acc_norm": 0.6457399103139013, "acc_norm_stderr": 0.032100621541349864 }, "harness|hendrycksTest-human_sexuality|5": { "acc": 0.7099236641221374, "acc_stderr": 0.03980066246467765, "acc_norm": 0.7099236641221374, "acc_norm_stderr": 0.03980066246467765 }, "harness|hendrycksTest-international_law|5": { "acc": 0.7603305785123967, "acc_stderr": 0.03896878985070416, "acc_norm": 0.7603305785123967, "acc_norm_stderr": 0.03896878985070416 }, "harness|hendrycksTest-jurisprudence|5": { "acc": 0.75, "acc_stderr": 0.04186091791394607, "acc_norm": 0.75, "acc_norm_stderr": 0.04186091791394607 }, "harness|hendrycksTest-logical_fallacies|5": { "acc": 0.7914110429447853, "acc_stderr": 0.03192193448934724, "acc_norm": 0.7914110429447853, "acc_norm_stderr": 0.03192193448934724 }, "harness|hendrycksTest-machine_learning|5": { "acc": 0.4017857142857143, "acc_stderr": 0.04653333146973646, "acc_norm": 0.4017857142857143, "acc_norm_stderr": 0.04653333146973646 }, "harness|hendrycksTest-management|5": { "acc": 0.7184466019417476, "acc_stderr": 0.04453254836326466, "acc_norm": 0.7184466019417476, "acc_norm_stderr": 0.04453254836326466 }, "harness|hendrycksTest-marketing|5": { "acc": 0.8675213675213675, "acc_stderr": 0.022209309073165616, "acc_norm": 0.8675213675213675, "acc_norm_stderr": 0.022209309073165616 }, "harness|hendrycksTest-medical_genetics|5": { "acc": 0.65, "acc_stderr": 0.047937248544110196, "acc_norm": 0.65, "acc_norm_stderr": 0.047937248544110196 }, "harness|hendrycksTest-miscellaneous|5": { 
"acc": 0.7956577266922095, "acc_stderr": 0.0144191239809319, "acc_norm": 0.7956577266922095, "acc_norm_stderr": 0.0144191239809319 }, "harness|hendrycksTest-moral_disputes|5": { "acc": 0.7138728323699421, "acc_stderr": 0.02433214677913413, "acc_norm": 0.7138728323699421, "acc_norm_stderr": 0.02433214677913413 }, "harness|hendrycksTest-moral_scenarios|5": { "acc": 0.27039106145251396, "acc_stderr": 0.014854993938010071, "acc_norm": 0.27039106145251396, "acc_norm_stderr": 0.014854993938010071 }, "harness|hendrycksTest-nutrition|5": { "acc": 0.7222222222222222, "acc_stderr": 0.025646863097137908, "acc_norm": 0.7222222222222222, "acc_norm_stderr": 0.025646863097137908 }, "harness|hendrycksTest-philosophy|5": { "acc": 0.6816720257234726, "acc_stderr": 0.026457225067811025, "acc_norm": 0.6816720257234726, "acc_norm_stderr": 0.026457225067811025 }, "harness|hendrycksTest-prehistory|5": { "acc": 0.7098765432098766, "acc_stderr": 0.025251173936495036, "acc_norm": 0.7098765432098766, "acc_norm_stderr": 0.025251173936495036 }, "harness|hendrycksTest-professional_accounting|5": { "acc": 0.4574468085106383, "acc_stderr": 0.029719281272236837, "acc_norm": 0.4574468085106383, "acc_norm_stderr": 0.029719281272236837 }, "harness|hendrycksTest-professional_law|5": { "acc": 0.46153846153846156, "acc_stderr": 0.012732398286190442, "acc_norm": 0.46153846153846156, "acc_norm_stderr": 0.012732398286190442 }, "harness|hendrycksTest-professional_medicine|5": { "acc": 0.6544117647058824, "acc_stderr": 0.028888193103988633, "acc_norm": 0.6544117647058824, "acc_norm_stderr": 0.028888193103988633 }, "harness|hendrycksTest-professional_psychology|5": { "acc": 0.6356209150326797, "acc_stderr": 0.0194695182215737, "acc_norm": 0.6356209150326797, "acc_norm_stderr": 0.0194695182215737 }, "harness|hendrycksTest-public_relations|5": { "acc": 0.6363636363636364, "acc_stderr": 0.04607582090719976, "acc_norm": 0.6363636363636364, "acc_norm_stderr": 0.04607582090719976 }, 
"harness|hendrycksTest-security_studies|5": { "acc": 0.6816326530612244, "acc_stderr": 0.029822533793982066, "acc_norm": 0.6816326530612244, "acc_norm_stderr": 0.029822533793982066 }, "harness|hendrycksTest-sociology|5": { "acc": 0.8009950248756219, "acc_stderr": 0.028231365092758406, "acc_norm": 0.8009950248756219, "acc_norm_stderr": 0.028231365092758406 }, "harness|hendrycksTest-us_foreign_policy|5": { "acc": 0.86, "acc_stderr": 0.0348735088019777, "acc_norm": 0.86, "acc_norm_stderr": 0.0348735088019777 }, "harness|hendrycksTest-virology|5": { "acc": 0.5180722891566265, "acc_stderr": 0.03889951252827216, "acc_norm": 0.5180722891566265, "acc_norm_stderr": 0.03889951252827216 }, "harness|hendrycksTest-world_religions|5": { "acc": 0.8128654970760234, "acc_stderr": 0.02991312723236804, "acc_norm": 0.8128654970760234, "acc_norm_stderr": 0.02991312723236804 }, "harness|truthfulqa:mc|0": { "mc1": 0.2974296205630355, "mc1_stderr": 0.01600265148736101, "mc2": 0.45839650469314347, "mc2_stderr": 0.014589308437993127 }, "harness|winogrande|5": { "acc": 0.7719021310181531, "acc_stderr": 0.011793015817663595 }, "harness|gsm8k|5": { "acc": 0.30022744503411675, "acc_stderr": 0.012625423152283034 } } ``` ## Dataset Details ### Dataset Description <!-- Provide a longer summary of what this dataset is. --> - **Curated by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] ### Dataset Sources [optional] <!-- Provide the basic links for the dataset. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the dataset is intended to be used. --> ### Direct Use <!-- This section describes suitable use cases for the dataset. 
--> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. --> [More Information Needed] ## Dataset Structure <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> [More Information Needed] ## Dataset Creation ### Curation Rationale <!-- Motivation for the creation of this dataset. --> [More Information Needed] ### Source Data <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). --> #### Data Collection and Processing <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. --> [More Information Needed] #### Who are the source data producers? <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. --> [More Information Needed] ### Annotations [optional] <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. --> #### Annotation process <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. --> [More Information Needed] #### Who are the annotators? <!-- This section describes the people or systems who created the annotations. 
--> [More Information Needed] #### Personal and Sensitive Information <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. ## Citation [optional] <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Dataset Card Authors [optional] [More Information Needed] ## Dataset Card Contact [More Information Needed]
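The per-task entries in the "Latest results" JSON earlier in this card all share the `harness|<task>|<n_shots>` naming scheme, so the MMLU-style `hendrycksTest` subtasks can be picked out of a loaded results dict by key prefix. A minimal sketch (with three values from the JSON above hard-coded so it runs without downloading anything):

```python
# Illustrative subset of the "Latest results" dict above; the real dict
# holds one entry per evaluated task, keyed as harness|<task>|<n_shots>.
results = {
    "harness|arc:challenge|25": {"acc_norm": 0.5938566552901023},
    "harness|hendrycksTest-anatomy|5": {"acc_norm": 0.562962962962963},
    "harness|hendrycksTest-astronomy|5": {"acc_norm": 0.6578947368421053},
}

# Select the MMLU (hendrycksTest) subtasks by key prefix and average them.
mmlu = {k: v for k, v in results.items() if k.startswith("harness|hendrycksTest-")}
mmlu_mean = sum(v["acc_norm"] for v in mmlu.values()) / len(mmlu)
print(f"{len(mmlu)} MMLU subtasks, mean acc_norm = {mmlu_mean:.4f}")
```

On the full results dict this would give the across-subtask MMLU average; for the `hendrycksTest` entries in this run, `acc` and `acc_norm` happen to coincide.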
open-llm-leaderboard/details_manishiitg__open-aditi-hi-v2
[ "region:us" ]
2024-02-09T12:33:41+00:00
{"pretty_name": "Evaluation run of manishiitg/open-aditi-hi-v2", "dataset_summary": "Dataset automatically created during the evaluation run of model [manishiitg/open-aditi-hi-v2](https://huggingface.co/manishiitg/open-aditi-hi-v2) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_manishiitg__open-aditi-hi-v2\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2024-02-09T12:31:28.234042](https://huggingface.co/datasets/open-llm-leaderboard/details_manishiitg__open-aditi-hi-v2/blob/main/results_2024-02-09T12-31-28.234042.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6107466762705897,\n \"acc_stderr\": 0.032739376568698436,\n \"acc_norm\": 0.6172844425941606,\n \"acc_norm_stderr\": 0.0334154029479238,\n \"mc1\": 0.2974296205630355,\n \"mc1_stderr\": 0.01600265148736101,\n \"mc2\": 0.45839650469314347,\n \"mc2_stderr\": 0.014589308437993127\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.5554607508532423,\n \"acc_stderr\": 0.01452122640562708,\n \"acc_norm\": 0.5938566552901023,\n \"acc_norm_stderr\": 0.014351656690097863\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.622087233618801,\n \"acc_stderr\": 0.004838747305783345,\n \"acc_norm\": 0.8200557657837084,\n \"acc_norm_stderr\": 0.0038335592281586663\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.28,\n \"acc_stderr\": 0.04512608598542128,\n \"acc_norm\": 0.28,\n \"acc_norm_stderr\": 0.04512608598542128\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.562962962962963,\n \"acc_stderr\": 0.042849586397534015,\n \"acc_norm\": 0.562962962962963,\n \"acc_norm_stderr\": 0.042849586397534015\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.6578947368421053,\n \"acc_stderr\": 0.03860731599316092,\n \"acc_norm\": 0.6578947368421053,\n \"acc_norm_stderr\": 0.03860731599316092\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.55,\n \"acc_stderr\": 0.05,\n \"acc_norm\": 0.55,\n \"acc_norm_stderr\": 0.05\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.6452830188679245,\n \"acc_stderr\": 0.029445175328199593,\n \"acc_norm\": 0.6452830188679245,\n \"acc_norm_stderr\": 0.029445175328199593\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7361111111111112,\n \"acc_stderr\": 0.03685651095897532,\n \"acc_norm\": 0.7361111111111112,\n \"acc_norm_stderr\": 0.03685651095897532\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.46,\n \"acc_stderr\": 
0.05009082659620333,\n \"acc_norm\": 0.46,\n \"acc_norm_stderr\": 0.05009082659620333\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.48,\n \"acc_stderr\": 0.050211673156867795,\n \"acc_norm\": 0.48,\n \"acc_norm_stderr\": 0.050211673156867795\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.3,\n \"acc_stderr\": 0.046056618647183814,\n \"acc_norm\": 0.3,\n \"acc_norm_stderr\": 0.046056618647183814\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6069364161849711,\n \"acc_stderr\": 0.0372424959581773,\n \"acc_norm\": 0.6069364161849711,\n \"acc_norm_stderr\": 0.0372424959581773\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.43137254901960786,\n \"acc_stderr\": 0.04928099597287534,\n \"acc_norm\": 0.43137254901960786,\n \"acc_norm_stderr\": 0.04928099597287534\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.74,\n \"acc_stderr\": 0.044084400227680794,\n \"acc_norm\": 0.74,\n \"acc_norm_stderr\": 0.044084400227680794\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.5148936170212766,\n \"acc_stderr\": 0.03267151848924777,\n \"acc_norm\": 0.5148936170212766,\n \"acc_norm_stderr\": 0.03267151848924777\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.5,\n \"acc_stderr\": 0.047036043419179864,\n \"acc_norm\": 0.5,\n \"acc_norm_stderr\": 0.047036043419179864\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.5379310344827586,\n \"acc_stderr\": 0.04154659671707548,\n \"acc_norm\": 0.5379310344827586,\n \"acc_norm_stderr\": 0.04154659671707548\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.4021164021164021,\n \"acc_stderr\": 0.025253032554997692,\n \"acc_norm\": 0.4021164021164021,\n \"acc_norm_stderr\": 0.025253032554997692\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.40476190476190477,\n \"acc_stderr\": 0.04390259265377562,\n \"acc_norm\": 0.40476190476190477,\n 
\"acc_norm_stderr\": 0.04390259265377562\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.32,\n \"acc_stderr\": 0.046882617226215034,\n \"acc_norm\": 0.32,\n \"acc_norm_stderr\": 0.046882617226215034\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7677419354838709,\n \"acc_stderr\": 0.024022256130308235,\n \"acc_norm\": 0.7677419354838709,\n \"acc_norm_stderr\": 0.024022256130308235\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.5221674876847291,\n \"acc_stderr\": 0.03514528562175008,\n \"acc_norm\": 0.5221674876847291,\n \"acc_norm_stderr\": 0.03514528562175008\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.63,\n \"acc_stderr\": 0.04852365870939099,\n \"acc_norm\": 0.63,\n \"acc_norm_stderr\": 0.04852365870939099\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.7636363636363637,\n \"acc_stderr\": 0.033175059300091805,\n \"acc_norm\": 0.7636363636363637,\n \"acc_norm_stderr\": 0.033175059300091805\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.7676767676767676,\n \"acc_stderr\": 0.030088629490217487,\n \"acc_norm\": 0.7676767676767676,\n \"acc_norm_stderr\": 0.030088629490217487\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.8808290155440415,\n \"acc_stderr\": 0.023381935348121434,\n \"acc_norm\": 0.8808290155440415,\n \"acc_norm_stderr\": 0.023381935348121434\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.6102564102564103,\n \"acc_stderr\": 0.024726967886647074,\n \"acc_norm\": 0.6102564102564103,\n \"acc_norm_stderr\": 0.024726967886647074\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.3296296296296296,\n \"acc_stderr\": 0.028661201116524586,\n \"acc_norm\": 0.3296296296296296,\n \"acc_norm_stderr\": 0.028661201116524586\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 
0.7016806722689075,\n \"acc_stderr\": 0.029719142876342856,\n \"acc_norm\": 0.7016806722689075,\n \"acc_norm_stderr\": 0.029719142876342856\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.32450331125827814,\n \"acc_stderr\": 0.038227469376587525,\n \"acc_norm\": 0.32450331125827814,\n \"acc_norm_stderr\": 0.038227469376587525\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.8275229357798165,\n \"acc_stderr\": 0.016197807956848047,\n \"acc_norm\": 0.8275229357798165,\n \"acc_norm_stderr\": 0.016197807956848047\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.4583333333333333,\n \"acc_stderr\": 0.03398110890294636,\n \"acc_norm\": 0.4583333333333333,\n \"acc_norm_stderr\": 0.03398110890294636\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.7990196078431373,\n \"acc_stderr\": 0.028125972265654373,\n \"acc_norm\": 0.7990196078431373,\n \"acc_norm_stderr\": 0.028125972265654373\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.7848101265822784,\n \"acc_stderr\": 0.026750826994676166,\n \"acc_norm\": 0.7848101265822784,\n \"acc_norm_stderr\": 0.026750826994676166\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6457399103139013,\n \"acc_stderr\": 0.032100621541349864,\n \"acc_norm\": 0.6457399103139013,\n \"acc_norm_stderr\": 0.032100621541349864\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.7099236641221374,\n \"acc_stderr\": 0.03980066246467765,\n \"acc_norm\": 0.7099236641221374,\n \"acc_norm_stderr\": 0.03980066246467765\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.7603305785123967,\n \"acc_stderr\": 0.03896878985070416,\n \"acc_norm\": 0.7603305785123967,\n \"acc_norm_stderr\": 0.03896878985070416\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.75,\n \"acc_stderr\": 0.04186091791394607,\n \"acc_norm\": 0.75,\n \"acc_norm_stderr\": 0.04186091791394607\n },\n 
\"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.7914110429447853,\n \"acc_stderr\": 0.03192193448934724,\n \"acc_norm\": 0.7914110429447853,\n \"acc_norm_stderr\": 0.03192193448934724\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.4017857142857143,\n \"acc_stderr\": 0.04653333146973646,\n \"acc_norm\": 0.4017857142857143,\n \"acc_norm_stderr\": 0.04653333146973646\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.7184466019417476,\n \"acc_stderr\": 0.04453254836326466,\n \"acc_norm\": 0.7184466019417476,\n \"acc_norm_stderr\": 0.04453254836326466\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8675213675213675,\n \"acc_stderr\": 0.022209309073165616,\n \"acc_norm\": 0.8675213675213675,\n \"acc_norm_stderr\": 0.022209309073165616\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.65,\n \"acc_stderr\": 0.047937248544110196,\n \"acc_norm\": 0.65,\n \"acc_norm_stderr\": 0.047937248544110196\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.7956577266922095,\n \"acc_stderr\": 0.0144191239809319,\n \"acc_norm\": 0.7956577266922095,\n \"acc_norm_stderr\": 0.0144191239809319\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.7138728323699421,\n \"acc_stderr\": 0.02433214677913413,\n \"acc_norm\": 0.7138728323699421,\n \"acc_norm_stderr\": 0.02433214677913413\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.27039106145251396,\n \"acc_stderr\": 0.014854993938010071,\n \"acc_norm\": 0.27039106145251396,\n \"acc_norm_stderr\": 0.014854993938010071\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.7222222222222222,\n \"acc_stderr\": 0.025646863097137908,\n \"acc_norm\": 0.7222222222222222,\n \"acc_norm_stderr\": 0.025646863097137908\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.6816720257234726,\n \"acc_stderr\": 0.026457225067811025,\n \"acc_norm\": 0.6816720257234726,\n \"acc_norm_stderr\": 0.026457225067811025\n },\n 
\"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.7098765432098766,\n \"acc_stderr\": 0.025251173936495036,\n \"acc_norm\": 0.7098765432098766,\n \"acc_norm_stderr\": 0.025251173936495036\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.4574468085106383,\n \"acc_stderr\": 0.029719281272236837,\n \"acc_norm\": 0.4574468085106383,\n \"acc_norm_stderr\": 0.029719281272236837\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.46153846153846156,\n \"acc_stderr\": 0.012732398286190442,\n \"acc_norm\": 0.46153846153846156,\n \"acc_norm_stderr\": 0.012732398286190442\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.6544117647058824,\n \"acc_stderr\": 0.028888193103988633,\n \"acc_norm\": 0.6544117647058824,\n \"acc_norm_stderr\": 0.028888193103988633\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.6356209150326797,\n \"acc_stderr\": 0.0194695182215737,\n \"acc_norm\": 0.6356209150326797,\n \"acc_norm_stderr\": 0.0194695182215737\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6363636363636364,\n \"acc_stderr\": 0.04607582090719976,\n \"acc_norm\": 0.6363636363636364,\n \"acc_norm_stderr\": 0.04607582090719976\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.6816326530612244,\n \"acc_stderr\": 0.029822533793982066,\n \"acc_norm\": 0.6816326530612244,\n \"acc_norm_stderr\": 0.029822533793982066\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8009950248756219,\n \"acc_stderr\": 0.028231365092758406,\n \"acc_norm\": 0.8009950248756219,\n \"acc_norm_stderr\": 0.028231365092758406\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.86,\n \"acc_stderr\": 0.0348735088019777,\n \"acc_norm\": 0.86,\n \"acc_norm_stderr\": 0.0348735088019777\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5180722891566265,\n \"acc_stderr\": 0.03889951252827216,\n \"acc_norm\": 0.5180722891566265,\n \"acc_norm_stderr\": 
0.03889951252827216\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.8128654970760234,\n \"acc_stderr\": 0.02991312723236804,\n \"acc_norm\": 0.8128654970760234,\n \"acc_norm_stderr\": 0.02991312723236804\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.2974296205630355,\n \"mc1_stderr\": 0.01600265148736101,\n \"mc2\": 0.45839650469314347,\n \"mc2_stderr\": 0.014589308437993127\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7719021310181531,\n \"acc_stderr\": 0.011793015817663595\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.30022744503411675,\n \"acc_stderr\": 0.012625423152283034\n }\n}\n```", "repo_url": "https://huggingface.co/manishiitg/open-aditi-hi-v2", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2024_02_09T12_31_28.234042", "path": ["**/details_harness|arc:challenge|25_2024-02-09T12-31-28.234042.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2024-02-09T12-31-28.234042.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2024_02_09T12_31_28.234042", "path": ["**/details_harness|gsm8k|5_2024-02-09T12-31-28.234042.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2024-02-09T12-31-28.234042.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2024_02_09T12_31_28.234042", "path": ["**/details_harness|hellaswag|10_2024-02-09T12-31-28.234042.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2024-02-09T12-31-28.234042.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2024_02_09T12_31_28.234042", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-09T12-31-28.234042.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-02-09T12-31-28.234042.parquet", 
"**/details_harness|hendrycksTest-astronomy|5_2024-02-09T12-31-28.234042.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-02-09T12-31-28.234042.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-09T12-31-28.234042.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-02-09T12-31-28.234042.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-02-09T12-31-28.234042.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-02-09T12-31-28.234042.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-02-09T12-31-28.234042.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-02-09T12-31-28.234042.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-02-09T12-31-28.234042.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-02-09T12-31-28.234042.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-09T12-31-28.234042.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-02-09T12-31-28.234042.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-09T12-31-28.234042.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-09T12-31-28.234042.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-02-09T12-31-28.234042.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-02-09T12-31-28.234042.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-02-09T12-31-28.234042.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-09T12-31-28.234042.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-09T12-31-28.234042.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-09T12-31-28.234042.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-02-09T12-31-28.234042.parquet", 
"**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-09T12-31-28.234042.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-09T12-31-28.234042.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-09T12-31-28.234042.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-09T12-31-28.234042.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-02-09T12-31-28.234042.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-09T12-31-28.234042.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-09T12-31-28.234042.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-09T12-31-28.234042.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-09T12-31-28.234042.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-02-09T12-31-28.234042.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-02-09T12-31-28.234042.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-02-09T12-31-28.234042.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-02-09T12-31-28.234042.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-09T12-31-28.234042.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-02-09T12-31-28.234042.parquet", "**/details_harness|hendrycksTest-management|5_2024-02-09T12-31-28.234042.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-02-09T12-31-28.234042.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-02-09T12-31-28.234042.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-02-09T12-31-28.234042.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-02-09T12-31-28.234042.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-09T12-31-28.234042.parquet", 
"**/details_harness|hendrycksTest-nutrition|5_2024-02-09T12-31-28.234042.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-02-09T12-31-28.234042.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-02-09T12-31-28.234042.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-02-09T12-31-28.234042.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-02-09T12-31-28.234042.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-02-09T12-31-28.234042.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-02-09T12-31-28.234042.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-02-09T12-31-28.234042.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-02-09T12-31-28.234042.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-02-09T12-31-28.234042.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-09T12-31-28.234042.parquet", "**/details_harness|hendrycksTest-virology|5_2024-02-09T12-31-28.234042.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-02-09T12-31-28.234042.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-09T12-31-28.234042.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-02-09T12-31-28.234042.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-02-09T12-31-28.234042.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-02-09T12-31-28.234042.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-09T12-31-28.234042.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-02-09T12-31-28.234042.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-02-09T12-31-28.234042.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-02-09T12-31-28.234042.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-02-09T12-31-28.234042.parquet", 
"**/details_harness|hendrycksTest-college_medicine|5_2024-02-09T12-31-28.234042.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-02-09T12-31-28.234042.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-02-09T12-31-28.234042.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-09T12-31-28.234042.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-02-09T12-31-28.234042.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-09T12-31-28.234042.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-09T12-31-28.234042.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-02-09T12-31-28.234042.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-02-09T12-31-28.234042.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-02-09T12-31-28.234042.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-09T12-31-28.234042.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-09T12-31-28.234042.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-09T12-31-28.234042.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-02-09T12-31-28.234042.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-09T12-31-28.234042.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-09T12-31-28.234042.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-09T12-31-28.234042.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-09T12-31-28.234042.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-02-09T12-31-28.234042.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-09T12-31-28.234042.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-09T12-31-28.234042.parquet", 
"**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-09T12-31-28.234042.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-09T12-31-28.234042.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-02-09T12-31-28.234042.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-02-09T12-31-28.234042.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-02-09T12-31-28.234042.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-02-09T12-31-28.234042.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-09T12-31-28.234042.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-02-09T12-31-28.234042.parquet", "**/details_harness|hendrycksTest-management|5_2024-02-09T12-31-28.234042.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-02-09T12-31-28.234042.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-02-09T12-31-28.234042.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-02-09T12-31-28.234042.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-02-09T12-31-28.234042.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-09T12-31-28.234042.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-02-09T12-31-28.234042.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-02-09T12-31-28.234042.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-02-09T12-31-28.234042.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-02-09T12-31-28.234042.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-02-09T12-31-28.234042.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-02-09T12-31-28.234042.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-02-09T12-31-28.234042.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-02-09T12-31-28.234042.parquet", 
"**/details_harness|hendrycksTest-security_studies|5_2024-02-09T12-31-28.234042.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-02-09T12-31-28.234042.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-09T12-31-28.234042.parquet", "**/details_harness|hendrycksTest-virology|5_2024-02-09T12-31-28.234042.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-02-09T12-31-28.234042.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2024_02_09T12_31_28.234042", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-09T12-31-28.234042.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-09T12-31-28.234042.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2024_02_09T12_31_28.234042", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-02-09T12-31-28.234042.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-02-09T12-31-28.234042.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2024_02_09T12_31_28.234042", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-02-09T12-31-28.234042.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-02-09T12-31-28.234042.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2024_02_09T12_31_28.234042", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-02-09T12-31-28.234042.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-02-09T12-31-28.234042.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2024_02_09T12_31_28.234042", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-09T12-31-28.234042.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-09T12-31-28.234042.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2024_02_09T12_31_28.234042", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-02-09T12-31-28.234042.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-02-09T12-31-28.234042.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2024_02_09T12_31_28.234042", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-02-09T12-31-28.234042.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-02-09T12-31-28.234042.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2024_02_09T12_31_28.234042", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-02-09T12-31-28.234042.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-02-09T12-31-28.234042.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2024_02_09T12_31_28.234042", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-02-09T12-31-28.234042.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-02-09T12-31-28.234042.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2024_02_09T12_31_28.234042", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-02-09T12-31-28.234042.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-02-09T12-31-28.234042.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2024_02_09T12_31_28.234042", "path": 
["**/details_harness|hendrycksTest-college_physics|5_2024-02-09T12-31-28.234042.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-02-09T12-31-28.234042.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2024_02_09T12_31_28.234042", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-02-09T12-31-28.234042.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-02-09T12-31-28.234042.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2024_02_09T12_31_28.234042", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-09T12-31-28.234042.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-09T12-31-28.234042.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2024_02_09T12_31_28.234042", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-02-09T12-31-28.234042.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-02-09T12-31-28.234042.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2024_02_09T12_31_28.234042", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-09T12-31-28.234042.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-09T12-31-28.234042.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2024_02_09T12_31_28.234042", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-09T12-31-28.234042.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-09T12-31-28.234042.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", 
"data_files": [{"split": "2024_02_09T12_31_28.234042", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-02-09T12-31-28.234042.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-02-09T12-31-28.234042.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2024_02_09T12_31_28.234042", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-02-09T12-31-28.234042.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-02-09T12-31-28.234042.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2024_02_09T12_31_28.234042", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-02-09T12-31-28.234042.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-02-09T12-31-28.234042.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2024_02_09T12_31_28.234042", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-09T12-31-28.234042.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-09T12-31-28.234042.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2024_02_09T12_31_28.234042", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-09T12-31-28.234042.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-09T12-31-28.234042.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2024_02_09T12_31_28.234042", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-09T12-31-28.234042.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-09T12-31-28.234042.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2024_02_09T12_31_28.234042", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-02-09T12-31-28.234042.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-02-09T12-31-28.234042.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2024_02_09T12_31_28.234042", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-09T12-31-28.234042.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-09T12-31-28.234042.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2024_02_09T12_31_28.234042", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-09T12-31-28.234042.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-09T12-31-28.234042.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2024_02_09T12_31_28.234042", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-09T12-31-28.234042.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-09T12-31-28.234042.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2024_02_09T12_31_28.234042", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-09T12-31-28.234042.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-09T12-31-28.234042.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2024_02_09T12_31_28.234042", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-02-09T12-31-28.234042.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-02-09T12-31-28.234042.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2024_02_09T12_31_28.234042", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-09T12-31-28.234042.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-09T12-31-28.234042.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2024_02_09T12_31_28.234042", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-09T12-31-28.234042.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-09T12-31-28.234042.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2024_02_09T12_31_28.234042", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-09T12-31-28.234042.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-09T12-31-28.234042.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2024_02_09T12_31_28.234042", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-09T12-31-28.234042.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-09T12-31-28.234042.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2024_02_09T12_31_28.234042", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-02-09T12-31-28.234042.parquet"]}, {"split": "latest", 
"path": ["**/details_harness|hendrycksTest-human_aging|5_2024-02-09T12-31-28.234042.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2024_02_09T12_31_28.234042", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-02-09T12-31-28.234042.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-02-09T12-31-28.234042.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2024_02_09T12_31_28.234042", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-02-09T12-31-28.234042.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-02-09T12-31-28.234042.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2024_02_09T12_31_28.234042", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-02-09T12-31-28.234042.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-02-09T12-31-28.234042.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2024_02_09T12_31_28.234042", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-09T12-31-28.234042.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-09T12-31-28.234042.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2024_02_09T12_31_28.234042", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-02-09T12-31-28.234042.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-02-09T12-31-28.234042.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2024_02_09T12_31_28.234042", "path": ["**/details_harness|hendrycksTest-management|5_2024-02-09T12-31-28.234042.parquet"]}, 
{"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2024-02-09T12-31-28.234042.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2024_02_09T12_31_28.234042", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-02-09T12-31-28.234042.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-02-09T12-31-28.234042.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2024_02_09T12_31_28.234042", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-02-09T12-31-28.234042.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-02-09T12-31-28.234042.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2024_02_09T12_31_28.234042", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-02-09T12-31-28.234042.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-02-09T12-31-28.234042.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2024_02_09T12_31_28.234042", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-02-09T12-31-28.234042.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-02-09T12-31-28.234042.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2024_02_09T12_31_28.234042", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-09T12-31-28.234042.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-09T12-31-28.234042.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2024_02_09T12_31_28.234042", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-02-09T12-31-28.234042.parquet"]}, {"split": "latest", 
"path": ["**/details_harness|hendrycksTest-nutrition|5_2024-02-09T12-31-28.234042.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2024_02_09T12_31_28.234042", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-02-09T12-31-28.234042.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-02-09T12-31-28.234042.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2024_02_09T12_31_28.234042", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-02-09T12-31-28.234042.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-02-09T12-31-28.234042.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2024_02_09T12_31_28.234042", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-02-09T12-31-28.234042.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-02-09T12-31-28.234042.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2024_02_09T12_31_28.234042", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-02-09T12-31-28.234042.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-02-09T12-31-28.234042.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2024_02_09T12_31_28.234042", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-02-09T12-31-28.234042.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-02-09T12-31-28.234042.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2024_02_09T12_31_28.234042", "path": 
["**/details_harness|hendrycksTest-professional_psychology|5_2024-02-09T12-31-28.234042.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-02-09T12-31-28.234042.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2024_02_09T12_31_28.234042", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-02-09T12-31-28.234042.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-02-09T12-31-28.234042.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2024_02_09T12_31_28.234042", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-02-09T12-31-28.234042.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-02-09T12-31-28.234042.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2024_02_09T12_31_28.234042", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-02-09T12-31-28.234042.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-02-09T12-31-28.234042.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2024_02_09T12_31_28.234042", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-09T12-31-28.234042.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-09T12-31-28.234042.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2024_02_09T12_31_28.234042", "path": ["**/details_harness|hendrycksTest-virology|5_2024-02-09T12-31-28.234042.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2024-02-09T12-31-28.234042.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2024_02_09T12_31_28.234042", 
"path": ["**/details_harness|hendrycksTest-world_religions|5_2024-02-09T12-31-28.234042.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-02-09T12-31-28.234042.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2024_02_09T12_31_28.234042", "path": ["**/details_harness|truthfulqa:mc|0_2024-02-09T12-31-28.234042.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2024-02-09T12-31-28.234042.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2024_02_09T12_31_28.234042", "path": ["**/details_harness|winogrande|5_2024-02-09T12-31-28.234042.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2024-02-09T12-31-28.234042.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2024_02_09T12_31_28.234042", "path": ["results_2024-02-09T12-31-28.234042.parquet"]}, {"split": "latest", "path": ["results_2024-02-09T12-31-28.234042.parquet"]}]}]}
2024-02-09T12:34:03+00:00
[]
[]
TAGS #region-us
# Dataset Card for Evaluation run of manishiitg/open-aditi-hi-v2 Dataset automatically created during the evaluation run of model manishiitg/open-aditi-hi-v2 on the Open LLM Leaderboard. The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard). To load the details from a run, you can for instance do the following: ## Latest results These are the latest results from run 2024-02-09T12:31:28.234042 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You find each in the results and the "latest" split for each eval): ## Dataset Details ### Dataset Description - Curated by: - Funded by [optional]: - Shared by [optional]: - Language(s) (NLP): - License: ### Dataset Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Out-of-Scope Use ## Dataset Structure ## Dataset Creation ### Curation Rationale ### Source Data #### Data Collection and Processing #### Who are the source data producers? ### Annotations [optional] #### Annotation process #### Who are the annotators? #### Personal and Sensitive Information ## Bias, Risks, and Limitations ### Recommendations Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Dataset Card Authors [optional] ## Dataset Card Contact
[ "# Dataset Card for Evaluation run of manishiitg/open-aditi-hi-v2\n\n\n\nDataset automatically created during the evaluation run of model manishiitg/open-aditi-hi-v2 on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2024-02-09T12:31:28.234042(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. 
More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Evaluation run of manishiitg/open-aditi-hi-v2\n\n\n\nDataset automatically created during the evaluation run of model manishiitg/open-aditi-hi-v2 on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2024-02-09T12:31:28.234042(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. 
More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
ce5efafce97e3fb05b4383b7b823997441748877
# Dataset Card for Evaluation run of Josephgflowers/3BigReasonCinder <!-- Provide a quick summary of the dataset. --> Dataset automatically created during the evaluation run of model [Josephgflowers/3BigReasonCinder](https://huggingface.co/Josephgflowers/3BigReasonCinder) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)). To load the details from a run, you can for instance do the following: ```python from datasets import load_dataset data = load_dataset("open-llm-leaderboard/details_Josephgflowers__3BigReasonCinder", "harness_winogrande_5", split="train") ``` ## Latest results These are the [latest results from run 2024-02-09T12:31:38.090504](https://huggingface.co/datasets/open-llm-leaderboard/details_Josephgflowers__3BigReasonCinder/blob/main/results_2024-02-09T12-31-38.090504.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. 
You find each in the results and the "latest" split for each eval): ```python { "all": { "acc": 0.44794266994933396, "acc_stderr": 0.03464503770381712, "acc_norm": 0.4507932379135084, "acc_norm_stderr": 0.03538213666564797, "mc1": 0.2876376988984088, "mc1_stderr": 0.015846315101394816, "mc2": 0.44764087589469737, "mc2_stderr": 0.014703779857331185 }, "harness|arc:challenge|25": { "acc": 0.39078498293515357, "acc_stderr": 0.014258563880513778, "acc_norm": 0.41723549488054607, "acc_norm_stderr": 0.014409825518403082 }, "harness|hellaswag|10": { "acc": 0.4801832304321848, "acc_stderr": 0.004985860853427632, "acc_norm": 0.6515634335789683, "acc_norm_stderr": 0.004755013243022131 }, "harness|hendrycksTest-abstract_algebra|5": { "acc": 0.34, "acc_stderr": 0.04760952285695236, "acc_norm": 0.34, "acc_norm_stderr": 0.04760952285695236 }, "harness|hendrycksTest-anatomy|5": { "acc": 0.4222222222222222, "acc_stderr": 0.04266763404099582, "acc_norm": 0.4222222222222222, "acc_norm_stderr": 0.04266763404099582 }, "harness|hendrycksTest-astronomy|5": { "acc": 0.4342105263157895, "acc_stderr": 0.040335656678483184, "acc_norm": 0.4342105263157895, "acc_norm_stderr": 0.040335656678483184 }, "harness|hendrycksTest-business_ethics|5": { "acc": 0.45, "acc_stderr": 0.05, "acc_norm": 0.45, "acc_norm_stderr": 0.05 }, "harness|hendrycksTest-clinical_knowledge|5": { "acc": 0.5094339622641509, "acc_stderr": 0.030767394707808093, "acc_norm": 0.5094339622641509, "acc_norm_stderr": 0.030767394707808093 }, "harness|hendrycksTest-college_biology|5": { "acc": 0.4722222222222222, "acc_stderr": 0.04174752578923185, "acc_norm": 0.4722222222222222, "acc_norm_stderr": 0.04174752578923185 }, "harness|hendrycksTest-college_chemistry|5": { "acc": 0.31, "acc_stderr": 0.046482319871173156, "acc_norm": 0.31, "acc_norm_stderr": 0.046482319871173156 }, "harness|hendrycksTest-college_computer_science|5": { "acc": 0.36, "acc_stderr": 0.04824181513244218, "acc_norm": 0.36, "acc_norm_stderr": 0.04824181513244218 }, 
"harness|hendrycksTest-college_mathematics|5": { "acc": 0.33, "acc_stderr": 0.047258156262526045, "acc_norm": 0.33, "acc_norm_stderr": 0.047258156262526045 }, "harness|hendrycksTest-college_medicine|5": { "acc": 0.3930635838150289, "acc_stderr": 0.03724249595817729, "acc_norm": 0.3930635838150289, "acc_norm_stderr": 0.03724249595817729 }, "harness|hendrycksTest-college_physics|5": { "acc": 0.21568627450980393, "acc_stderr": 0.04092563958237654, "acc_norm": 0.21568627450980393, "acc_norm_stderr": 0.04092563958237654 }, "harness|hendrycksTest-computer_security|5": { "acc": 0.5, "acc_stderr": 0.050251890762960605, "acc_norm": 0.5, "acc_norm_stderr": 0.050251890762960605 }, "harness|hendrycksTest-conceptual_physics|5": { "acc": 0.37446808510638296, "acc_stderr": 0.03163910665367291, "acc_norm": 0.37446808510638296, "acc_norm_stderr": 0.03163910665367291 }, "harness|hendrycksTest-econometrics|5": { "acc": 0.23684210526315788, "acc_stderr": 0.039994238792813344, "acc_norm": 0.23684210526315788, "acc_norm_stderr": 0.039994238792813344 }, "harness|hendrycksTest-electrical_engineering|5": { "acc": 0.4689655172413793, "acc_stderr": 0.04158632762097828, "acc_norm": 0.4689655172413793, "acc_norm_stderr": 0.04158632762097828 }, "harness|hendrycksTest-elementary_mathematics|5": { "acc": 0.30158730158730157, "acc_stderr": 0.0236369759961018, "acc_norm": 0.30158730158730157, "acc_norm_stderr": 0.0236369759961018 }, "harness|hendrycksTest-formal_logic|5": { "acc": 0.2619047619047619, "acc_stderr": 0.0393253768039287, "acc_norm": 0.2619047619047619, "acc_norm_stderr": 0.0393253768039287 }, "harness|hendrycksTest-global_facts|5": { "acc": 0.34, "acc_stderr": 0.04760952285695235, "acc_norm": 0.34, "acc_norm_stderr": 0.04760952285695235 }, "harness|hendrycksTest-high_school_biology|5": { "acc": 0.5096774193548387, "acc_stderr": 0.02843867799890955, "acc_norm": 0.5096774193548387, "acc_norm_stderr": 0.02843867799890955 }, "harness|hendrycksTest-high_school_chemistry|5": { "acc": 
0.3793103448275862, "acc_stderr": 0.03413963805906235, "acc_norm": 0.3793103448275862, "acc_norm_stderr": 0.03413963805906235 }, "harness|hendrycksTest-high_school_computer_science|5": { "acc": 0.46, "acc_stderr": 0.05009082659620332, "acc_norm": 0.46, "acc_norm_stderr": 0.05009082659620332 }, "harness|hendrycksTest-high_school_european_history|5": { "acc": 0.5454545454545454, "acc_stderr": 0.03888176921674101, "acc_norm": 0.5454545454545454, "acc_norm_stderr": 0.03888176921674101 }, "harness|hendrycksTest-high_school_geography|5": { "acc": 0.5858585858585859, "acc_stderr": 0.03509438348879629, "acc_norm": 0.5858585858585859, "acc_norm_stderr": 0.03509438348879629 }, "harness|hendrycksTest-high_school_government_and_politics|5": { "acc": 0.5699481865284974, "acc_stderr": 0.03572954333144808, "acc_norm": 0.5699481865284974, "acc_norm_stderr": 0.03572954333144808 }, "harness|hendrycksTest-high_school_macroeconomics|5": { "acc": 0.3974358974358974, "acc_stderr": 0.024811920017903836, "acc_norm": 0.3974358974358974, "acc_norm_stderr": 0.024811920017903836 }, "harness|hendrycksTest-high_school_mathematics|5": { "acc": 0.23333333333333334, "acc_stderr": 0.02578787422095931, "acc_norm": 0.23333333333333334, "acc_norm_stderr": 0.02578787422095931 }, "harness|hendrycksTest-high_school_microeconomics|5": { "acc": 0.4411764705882353, "acc_stderr": 0.0322529423239964, "acc_norm": 0.4411764705882353, "acc_norm_stderr": 0.0322529423239964 }, "harness|hendrycksTest-high_school_physics|5": { "acc": 0.33112582781456956, "acc_stderr": 0.038425817186598696, "acc_norm": 0.33112582781456956, "acc_norm_stderr": 0.038425817186598696 }, "harness|hendrycksTest-high_school_psychology|5": { "acc": 0.6036697247706422, "acc_stderr": 0.02097146994790053, "acc_norm": 0.6036697247706422, "acc_norm_stderr": 0.02097146994790053 }, "harness|hendrycksTest-high_school_statistics|5": { "acc": 0.28703703703703703, "acc_stderr": 0.030851992993257013, "acc_norm": 0.28703703703703703, "acc_norm_stderr": 
0.030851992993257013 }, "harness|hendrycksTest-high_school_us_history|5": { "acc": 0.5490196078431373, "acc_stderr": 0.03492406104163613, "acc_norm": 0.5490196078431373, "acc_norm_stderr": 0.03492406104163613 }, "harness|hendrycksTest-high_school_world_history|5": { "acc": 0.6075949367088608, "acc_stderr": 0.03178471874564729, "acc_norm": 0.6075949367088608, "acc_norm_stderr": 0.03178471874564729 }, "harness|hendrycksTest-human_aging|5": { "acc": 0.47085201793721976, "acc_stderr": 0.03350073248773404, "acc_norm": 0.47085201793721976, "acc_norm_stderr": 0.03350073248773404 }, "harness|hendrycksTest-human_sexuality|5": { "acc": 0.5648854961832062, "acc_stderr": 0.043482080516448585, "acc_norm": 0.5648854961832062, "acc_norm_stderr": 0.043482080516448585 }, "harness|hendrycksTest-international_law|5": { "acc": 0.628099173553719, "acc_stderr": 0.04412015806624505, "acc_norm": 0.628099173553719, "acc_norm_stderr": 0.04412015806624505 }, "harness|hendrycksTest-jurisprudence|5": { "acc": 0.5555555555555556, "acc_stderr": 0.04803752235190192, "acc_norm": 0.5555555555555556, "acc_norm_stderr": 0.04803752235190192 }, "harness|hendrycksTest-logical_fallacies|5": { "acc": 0.5276073619631901, "acc_stderr": 0.0392237829061099, "acc_norm": 0.5276073619631901, "acc_norm_stderr": 0.0392237829061099 }, "harness|hendrycksTest-machine_learning|5": { "acc": 0.4107142857142857, "acc_stderr": 0.046695106638751906, "acc_norm": 0.4107142857142857, "acc_norm_stderr": 0.046695106638751906 }, "harness|hendrycksTest-management|5": { "acc": 0.5631067961165048, "acc_stderr": 0.049111471073657764, "acc_norm": 0.5631067961165048, "acc_norm_stderr": 0.049111471073657764 }, "harness|hendrycksTest-marketing|5": { "acc": 0.6623931623931624, "acc_stderr": 0.030980296992618558, "acc_norm": 0.6623931623931624, "acc_norm_stderr": 0.030980296992618558 }, "harness|hendrycksTest-medical_genetics|5": { "acc": 0.46, "acc_stderr": 0.05009082659620333, "acc_norm": 0.46, "acc_norm_stderr": 0.05009082659620333 }, 
"harness|hendrycksTest-miscellaneous|5": { "acc": 0.5389527458492975, "acc_stderr": 0.017825621793239012, "acc_norm": 0.5389527458492975, "acc_norm_stderr": 0.017825621793239012 }, "harness|hendrycksTest-moral_disputes|5": { "acc": 0.4884393063583815, "acc_stderr": 0.026911898686377913, "acc_norm": 0.4884393063583815, "acc_norm_stderr": 0.026911898686377913 }, "harness|hendrycksTest-moral_scenarios|5": { "acc": 0.28044692737430166, "acc_stderr": 0.015024083883322891, "acc_norm": 0.28044692737430166, "acc_norm_stderr": 0.015024083883322891 }, "harness|hendrycksTest-nutrition|5": { "acc": 0.4803921568627451, "acc_stderr": 0.028607893699576066, "acc_norm": 0.4803921568627451, "acc_norm_stderr": 0.028607893699576066 }, "harness|hendrycksTest-philosophy|5": { "acc": 0.49517684887459806, "acc_stderr": 0.028396770444111298, "acc_norm": 0.49517684887459806, "acc_norm_stderr": 0.028396770444111298 }, "harness|hendrycksTest-prehistory|5": { "acc": 0.4444444444444444, "acc_stderr": 0.027648477877413327, "acc_norm": 0.4444444444444444, "acc_norm_stderr": 0.027648477877413327 }, "harness|hendrycksTest-professional_accounting|5": { "acc": 0.3829787234042553, "acc_stderr": 0.028999080904806178, "acc_norm": 0.3829787234042553, "acc_norm_stderr": 0.028999080904806178 }, "harness|hendrycksTest-professional_law|5": { "acc": 0.3272490221642764, "acc_stderr": 0.01198381980646473, "acc_norm": 0.3272490221642764, "acc_norm_stderr": 0.01198381980646473 }, "harness|hendrycksTest-professional_medicine|5": { "acc": 0.33455882352941174, "acc_stderr": 0.028661996202335307, "acc_norm": 0.33455882352941174, "acc_norm_stderr": 0.028661996202335307 }, "harness|hendrycksTest-professional_psychology|5": { "acc": 0.41830065359477125, "acc_stderr": 0.019955975145835542, "acc_norm": 0.41830065359477125, "acc_norm_stderr": 0.019955975145835542 }, "harness|hendrycksTest-public_relations|5": { "acc": 0.509090909090909, "acc_stderr": 0.04788339768702861, "acc_norm": 0.509090909090909, "acc_norm_stderr": 
0.04788339768702861 }, "harness|hendrycksTest-security_studies|5": { "acc": 0.5346938775510204, "acc_stderr": 0.03193207024425314, "acc_norm": 0.5346938775510204, "acc_norm_stderr": 0.03193207024425314 }, "harness|hendrycksTest-sociology|5": { "acc": 0.6567164179104478, "acc_stderr": 0.03357379665433431, "acc_norm": 0.6567164179104478, "acc_norm_stderr": 0.03357379665433431 }, "harness|hendrycksTest-us_foreign_policy|5": { "acc": 0.6, "acc_stderr": 0.049236596391733084, "acc_norm": 0.6, "acc_norm_stderr": 0.049236596391733084 }, "harness|hendrycksTest-virology|5": { "acc": 0.4457831325301205, "acc_stderr": 0.03869543323472101, "acc_norm": 0.4457831325301205, "acc_norm_stderr": 0.03869543323472101 }, "harness|hendrycksTest-world_religions|5": { "acc": 0.5263157894736842, "acc_stderr": 0.038295098689947266, "acc_norm": 0.5263157894736842, "acc_norm_stderr": 0.038295098689947266 }, "harness|truthfulqa:mc|0": { "mc1": 0.2876376988984088, "mc1_stderr": 0.015846315101394816, "mc2": 0.44764087589469737, "mc2_stderr": 0.014703779857331185 }, "harness|winogrande|5": { "acc": 0.6495659037095501, "acc_stderr": 0.01340904767667018 }, "harness|gsm8k|5": { "acc": 0.2759666413949962, "acc_stderr": 0.012312603010427352 } } ``` ## Dataset Details ### Dataset Description <!-- Provide a longer summary of what this dataset is. --> - **Curated by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] ### Dataset Sources [optional] <!-- Provide the basic links for the dataset. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the dataset is intended to be used. --> ### Direct Use <!-- This section describes suitable use cases for the dataset. 
--> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. --> [More Information Needed] ## Dataset Structure <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> [More Information Needed] ## Dataset Creation ### Curation Rationale <!-- Motivation for the creation of this dataset. --> [More Information Needed] ### Source Data <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). --> #### Data Collection and Processing <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. --> [More Information Needed] #### Who are the source data producers? <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. --> [More Information Needed] ### Annotations [optional] <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. --> #### Annotation process <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. --> [More Information Needed] #### Who are the annotators? <!-- This section describes the people or systems who created the annotations. 
--> [More Information Needed] #### Personal and Sensitive Information <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. ## Citation [optional] <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Dataset Card Authors [optional] [More Information Needed] ## Dataset Card Contact [More Information Needed]
open-llm-leaderboard/details_Josephgflowers__3BigReasonCinder
[ "region:us" ]
2024-02-09T12:34:02+00:00
{"pretty_name": "Evaluation run of Josephgflowers/3BigReasonCinder", "dataset_summary": "Dataset automatically created during the evaluation run of model [Josephgflowers/3BigReasonCinder](https://huggingface.co/Josephgflowers/3BigReasonCinder) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Josephgflowers__3BigReasonCinder\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2024-02-09T12:31:38.090504](https://huggingface.co/datasets/open-llm-leaderboard/details_Josephgflowers__3BigReasonCinder/blob/main/results_2024-02-09T12-31-38.090504.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.44794266994933396,\n \"acc_stderr\": 0.03464503770381712,\n \"acc_norm\": 0.4507932379135084,\n \"acc_norm_stderr\": 0.03538213666564797,\n \"mc1\": 0.2876376988984088,\n \"mc1_stderr\": 0.015846315101394816,\n \"mc2\": 0.44764087589469737,\n \"mc2_stderr\": 0.014703779857331185\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.39078498293515357,\n \"acc_stderr\": 0.014258563880513778,\n \"acc_norm\": 0.41723549488054607,\n \"acc_norm_stderr\": 0.014409825518403082\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.4801832304321848,\n \"acc_stderr\": 0.004985860853427632,\n \"acc_norm\": 0.6515634335789683,\n \"acc_norm_stderr\": 0.004755013243022131\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.34,\n \"acc_stderr\": 0.04760952285695236,\n \"acc_norm\": 0.34,\n \"acc_norm_stderr\": 0.04760952285695236\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.4222222222222222,\n \"acc_stderr\": 0.04266763404099582,\n \"acc_norm\": 0.4222222222222222,\n \"acc_norm_stderr\": 0.04266763404099582\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.4342105263157895,\n \"acc_stderr\": 0.040335656678483184,\n \"acc_norm\": 0.4342105263157895,\n \"acc_norm_stderr\": 0.040335656678483184\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.45,\n \"acc_stderr\": 0.05,\n \"acc_norm\": 0.45,\n \"acc_norm_stderr\": 0.05\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.5094339622641509,\n \"acc_stderr\": 0.030767394707808093,\n \"acc_norm\": 0.5094339622641509,\n \"acc_norm_stderr\": 0.030767394707808093\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.4722222222222222,\n \"acc_stderr\": 0.04174752578923185,\n \"acc_norm\": 0.4722222222222222,\n \"acc_norm_stderr\": 0.04174752578923185\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.31,\n \"acc_stderr\": 
0.046482319871173156,\n \"acc_norm\": 0.31,\n \"acc_norm_stderr\": 0.046482319871173156\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.36,\n \"acc_stderr\": 0.04824181513244218,\n \"acc_norm\": 0.36,\n \"acc_norm_stderr\": 0.04824181513244218\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.33,\n \"acc_stderr\": 0.047258156262526045,\n \"acc_norm\": 0.33,\n \"acc_norm_stderr\": 0.047258156262526045\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.3930635838150289,\n \"acc_stderr\": 0.03724249595817729,\n \"acc_norm\": 0.3930635838150289,\n \"acc_norm_stderr\": 0.03724249595817729\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.21568627450980393,\n \"acc_stderr\": 0.04092563958237654,\n \"acc_norm\": 0.21568627450980393,\n \"acc_norm_stderr\": 0.04092563958237654\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.5,\n \"acc_stderr\": 0.050251890762960605,\n \"acc_norm\": 0.5,\n \"acc_norm_stderr\": 0.050251890762960605\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.37446808510638296,\n \"acc_stderr\": 0.03163910665367291,\n \"acc_norm\": 0.37446808510638296,\n \"acc_norm_stderr\": 0.03163910665367291\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.23684210526315788,\n \"acc_stderr\": 0.039994238792813344,\n \"acc_norm\": 0.23684210526315788,\n \"acc_norm_stderr\": 0.039994238792813344\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.4689655172413793,\n \"acc_stderr\": 0.04158632762097828,\n \"acc_norm\": 0.4689655172413793,\n \"acc_norm_stderr\": 0.04158632762097828\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.30158730158730157,\n \"acc_stderr\": 0.0236369759961018,\n \"acc_norm\": 0.30158730158730157,\n \"acc_norm_stderr\": 0.0236369759961018\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.2619047619047619,\n \"acc_stderr\": 0.0393253768039287,\n 
\"acc_norm\": 0.2619047619047619,\n \"acc_norm_stderr\": 0.0393253768039287\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.34,\n \"acc_stderr\": 0.04760952285695235,\n \"acc_norm\": 0.34,\n \"acc_norm_stderr\": 0.04760952285695235\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.5096774193548387,\n \"acc_stderr\": 0.02843867799890955,\n \"acc_norm\": 0.5096774193548387,\n \"acc_norm_stderr\": 0.02843867799890955\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.3793103448275862,\n \"acc_stderr\": 0.03413963805906235,\n \"acc_norm\": 0.3793103448275862,\n \"acc_norm_stderr\": 0.03413963805906235\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.46,\n \"acc_stderr\": 0.05009082659620332,\n \"acc_norm\": 0.46,\n \"acc_norm_stderr\": 0.05009082659620332\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.5454545454545454,\n \"acc_stderr\": 0.03888176921674101,\n \"acc_norm\": 0.5454545454545454,\n \"acc_norm_stderr\": 0.03888176921674101\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.5858585858585859,\n \"acc_stderr\": 0.03509438348879629,\n \"acc_norm\": 0.5858585858585859,\n \"acc_norm_stderr\": 0.03509438348879629\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.5699481865284974,\n \"acc_stderr\": 0.03572954333144808,\n \"acc_norm\": 0.5699481865284974,\n \"acc_norm_stderr\": 0.03572954333144808\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.3974358974358974,\n \"acc_stderr\": 0.024811920017903836,\n \"acc_norm\": 0.3974358974358974,\n \"acc_norm_stderr\": 0.024811920017903836\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.23333333333333334,\n \"acc_stderr\": 0.02578787422095931,\n \"acc_norm\": 0.23333333333333334,\n \"acc_norm_stderr\": 0.02578787422095931\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n 
\"acc\": 0.4411764705882353,\n \"acc_stderr\": 0.0322529423239964,\n \"acc_norm\": 0.4411764705882353,\n \"acc_norm_stderr\": 0.0322529423239964\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.33112582781456956,\n \"acc_stderr\": 0.038425817186598696,\n \"acc_norm\": 0.33112582781456956,\n \"acc_norm_stderr\": 0.038425817186598696\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.6036697247706422,\n \"acc_stderr\": 0.02097146994790053,\n \"acc_norm\": 0.6036697247706422,\n \"acc_norm_stderr\": 0.02097146994790053\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.28703703703703703,\n \"acc_stderr\": 0.030851992993257013,\n \"acc_norm\": 0.28703703703703703,\n \"acc_norm_stderr\": 0.030851992993257013\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.5490196078431373,\n \"acc_stderr\": 0.03492406104163613,\n \"acc_norm\": 0.5490196078431373,\n \"acc_norm_stderr\": 0.03492406104163613\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.6075949367088608,\n \"acc_stderr\": 0.03178471874564729,\n \"acc_norm\": 0.6075949367088608,\n \"acc_norm_stderr\": 0.03178471874564729\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.47085201793721976,\n \"acc_stderr\": 0.03350073248773404,\n \"acc_norm\": 0.47085201793721976,\n \"acc_norm_stderr\": 0.03350073248773404\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.5648854961832062,\n \"acc_stderr\": 0.043482080516448585,\n \"acc_norm\": 0.5648854961832062,\n \"acc_norm_stderr\": 0.043482080516448585\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.628099173553719,\n \"acc_stderr\": 0.04412015806624505,\n \"acc_norm\": 0.628099173553719,\n \"acc_norm_stderr\": 0.04412015806624505\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.5555555555555556,\n \"acc_stderr\": 0.04803752235190192,\n \"acc_norm\": 0.5555555555555556,\n \"acc_norm_stderr\": 
0.04803752235190192\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.5276073619631901,\n \"acc_stderr\": 0.0392237829061099,\n \"acc_norm\": 0.5276073619631901,\n \"acc_norm_stderr\": 0.0392237829061099\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.4107142857142857,\n \"acc_stderr\": 0.046695106638751906,\n \"acc_norm\": 0.4107142857142857,\n \"acc_norm_stderr\": 0.046695106638751906\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.5631067961165048,\n \"acc_stderr\": 0.049111471073657764,\n \"acc_norm\": 0.5631067961165048,\n \"acc_norm_stderr\": 0.049111471073657764\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.6623931623931624,\n \"acc_stderr\": 0.030980296992618558,\n \"acc_norm\": 0.6623931623931624,\n \"acc_norm_stderr\": 0.030980296992618558\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.46,\n \"acc_stderr\": 0.05009082659620333,\n \"acc_norm\": 0.46,\n \"acc_norm_stderr\": 0.05009082659620333\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.5389527458492975,\n \"acc_stderr\": 0.017825621793239012,\n \"acc_norm\": 0.5389527458492975,\n \"acc_norm_stderr\": 0.017825621793239012\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.4884393063583815,\n \"acc_stderr\": 0.026911898686377913,\n \"acc_norm\": 0.4884393063583815,\n \"acc_norm_stderr\": 0.026911898686377913\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.28044692737430166,\n \"acc_stderr\": 0.015024083883322891,\n \"acc_norm\": 0.28044692737430166,\n \"acc_norm_stderr\": 0.015024083883322891\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.4803921568627451,\n \"acc_stderr\": 0.028607893699576066,\n \"acc_norm\": 0.4803921568627451,\n \"acc_norm_stderr\": 0.028607893699576066\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.49517684887459806,\n \"acc_stderr\": 0.028396770444111298,\n \"acc_norm\": 0.49517684887459806,\n \"acc_norm_stderr\": 
0.028396770444111298\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.4444444444444444,\n \"acc_stderr\": 0.027648477877413327,\n \"acc_norm\": 0.4444444444444444,\n \"acc_norm_stderr\": 0.027648477877413327\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.3829787234042553,\n \"acc_stderr\": 0.028999080904806178,\n \"acc_norm\": 0.3829787234042553,\n \"acc_norm_stderr\": 0.028999080904806178\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.3272490221642764,\n \"acc_stderr\": 0.01198381980646473,\n \"acc_norm\": 0.3272490221642764,\n \"acc_norm_stderr\": 0.01198381980646473\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.33455882352941174,\n \"acc_stderr\": 0.028661996202335307,\n \"acc_norm\": 0.33455882352941174,\n \"acc_norm_stderr\": 0.028661996202335307\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.41830065359477125,\n \"acc_stderr\": 0.019955975145835542,\n \"acc_norm\": 0.41830065359477125,\n \"acc_norm_stderr\": 0.019955975145835542\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.509090909090909,\n \"acc_stderr\": 0.04788339768702861,\n \"acc_norm\": 0.509090909090909,\n \"acc_norm_stderr\": 0.04788339768702861\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.5346938775510204,\n \"acc_stderr\": 0.03193207024425314,\n \"acc_norm\": 0.5346938775510204,\n \"acc_norm_stderr\": 0.03193207024425314\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.6567164179104478,\n \"acc_stderr\": 0.03357379665433431,\n \"acc_norm\": 0.6567164179104478,\n \"acc_norm_stderr\": 0.03357379665433431\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.6,\n \"acc_stderr\": 0.049236596391733084,\n \"acc_norm\": 0.6,\n \"acc_norm_stderr\": 0.049236596391733084\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.4457831325301205,\n \"acc_stderr\": 0.03869543323472101,\n \"acc_norm\": 
0.4457831325301205,\n \"acc_norm_stderr\": 0.03869543323472101\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.5263157894736842,\n \"acc_stderr\": 0.038295098689947266,\n \"acc_norm\": 0.5263157894736842,\n \"acc_norm_stderr\": 0.038295098689947266\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.2876376988984088,\n \"mc1_stderr\": 0.015846315101394816,\n \"mc2\": 0.44764087589469737,\n \"mc2_stderr\": 0.014703779857331185\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.6495659037095501,\n \"acc_stderr\": 0.01340904767667018\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.2759666413949962,\n \"acc_stderr\": 0.012312603010427352\n }\n}\n```", "repo_url": "https://huggingface.co/Josephgflowers/3BigReasonCinder", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2024_02_09T12_31_38.090504", "path": ["**/details_harness|arc:challenge|25_2024-02-09T12-31-38.090504.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2024-02-09T12-31-38.090504.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2024_02_09T12_31_38.090504", "path": ["**/details_harness|gsm8k|5_2024-02-09T12-31-38.090504.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2024-02-09T12-31-38.090504.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2024_02_09T12_31_38.090504", "path": ["**/details_harness|hellaswag|10_2024-02-09T12-31-38.090504.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2024-02-09T12-31-38.090504.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2024_02_09T12_31_38.090504", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-09T12-31-38.090504.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-02-09T12-31-38.090504.parquet", 
"**/details_harness|hendrycksTest-astronomy|5_2024-02-09T12-31-38.090504.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-02-09T12-31-38.090504.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-09T12-31-38.090504.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-02-09T12-31-38.090504.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-02-09T12-31-38.090504.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-02-09T12-31-38.090504.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-02-09T12-31-38.090504.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-02-09T12-31-38.090504.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-02-09T12-31-38.090504.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-02-09T12-31-38.090504.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-09T12-31-38.090504.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-02-09T12-31-38.090504.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-09T12-31-38.090504.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-09T12-31-38.090504.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-02-09T12-31-38.090504.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-02-09T12-31-38.090504.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-02-09T12-31-38.090504.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-09T12-31-38.090504.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-09T12-31-38.090504.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-09T12-31-38.090504.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-02-09T12-31-38.090504.parquet", 
"**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-09T12-31-38.090504.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-09T12-31-38.090504.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-09T12-31-38.090504.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-09T12-31-38.090504.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-02-09T12-31-38.090504.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-09T12-31-38.090504.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-09T12-31-38.090504.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-09T12-31-38.090504.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-09T12-31-38.090504.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-02-09T12-31-38.090504.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-02-09T12-31-38.090504.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-02-09T12-31-38.090504.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-02-09T12-31-38.090504.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-09T12-31-38.090504.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-02-09T12-31-38.090504.parquet", "**/details_harness|hendrycksTest-management|5_2024-02-09T12-31-38.090504.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-02-09T12-31-38.090504.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-02-09T12-31-38.090504.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-02-09T12-31-38.090504.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-02-09T12-31-38.090504.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-09T12-31-38.090504.parquet", 
"**/details_harness|hendrycksTest-nutrition|5_2024-02-09T12-31-38.090504.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-02-09T12-31-38.090504.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-02-09T12-31-38.090504.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-02-09T12-31-38.090504.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-02-09T12-31-38.090504.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-02-09T12-31-38.090504.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-02-09T12-31-38.090504.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-02-09T12-31-38.090504.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-02-09T12-31-38.090504.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-02-09T12-31-38.090504.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-09T12-31-38.090504.parquet", "**/details_harness|hendrycksTest-virology|5_2024-02-09T12-31-38.090504.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-02-09T12-31-38.090504.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-09T12-31-38.090504.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-02-09T12-31-38.090504.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-02-09T12-31-38.090504.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-02-09T12-31-38.090504.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-09T12-31-38.090504.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-02-09T12-31-38.090504.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-02-09T12-31-38.090504.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-02-09T12-31-38.090504.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-02-09T12-31-38.090504.parquet", 
"**/details_harness|hendrycksTest-college_medicine|5_2024-02-09T12-31-38.090504.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-02-09T12-31-38.090504.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-02-09T12-31-38.090504.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-09T12-31-38.090504.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-02-09T12-31-38.090504.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-09T12-31-38.090504.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-09T12-31-38.090504.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-02-09T12-31-38.090504.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-02-09T12-31-38.090504.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-02-09T12-31-38.090504.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-09T12-31-38.090504.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-09T12-31-38.090504.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-09T12-31-38.090504.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-02-09T12-31-38.090504.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-09T12-31-38.090504.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-09T12-31-38.090504.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-09T12-31-38.090504.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-09T12-31-38.090504.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-02-09T12-31-38.090504.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-09T12-31-38.090504.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-09T12-31-38.090504.parquet", 
"**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-09T12-31-38.090504.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-09T12-31-38.090504.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-02-09T12-31-38.090504.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-02-09T12-31-38.090504.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-02-09T12-31-38.090504.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-02-09T12-31-38.090504.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-09T12-31-38.090504.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-02-09T12-31-38.090504.parquet", "**/details_harness|hendrycksTest-management|5_2024-02-09T12-31-38.090504.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-02-09T12-31-38.090504.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-02-09T12-31-38.090504.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-02-09T12-31-38.090504.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-02-09T12-31-38.090504.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-09T12-31-38.090504.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-02-09T12-31-38.090504.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-02-09T12-31-38.090504.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-02-09T12-31-38.090504.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-02-09T12-31-38.090504.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-02-09T12-31-38.090504.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-02-09T12-31-38.090504.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-02-09T12-31-38.090504.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-02-09T12-31-38.090504.parquet", 
"**/details_harness|hendrycksTest-security_studies|5_2024-02-09T12-31-38.090504.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-02-09T12-31-38.090504.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-09T12-31-38.090504.parquet", "**/details_harness|hendrycksTest-virology|5_2024-02-09T12-31-38.090504.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-02-09T12-31-38.090504.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2024_02_09T12_31_38.090504", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-09T12-31-38.090504.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-09T12-31-38.090504.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2024_02_09T12_31_38.090504", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-02-09T12-31-38.090504.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-02-09T12-31-38.090504.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2024_02_09T12_31_38.090504", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-02-09T12-31-38.090504.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-02-09T12-31-38.090504.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2024_02_09T12_31_38.090504", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-02-09T12-31-38.090504.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-02-09T12-31-38.090504.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2024_02_09T12_31_38.090504", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-09T12-31-38.090504.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-09T12-31-38.090504.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2024_02_09T12_31_38.090504", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-02-09T12-31-38.090504.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-02-09T12-31-38.090504.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2024_02_09T12_31_38.090504", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-02-09T12-31-38.090504.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-02-09T12-31-38.090504.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2024_02_09T12_31_38.090504", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-02-09T12-31-38.090504.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-02-09T12-31-38.090504.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2024_02_09T12_31_38.090504", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-02-09T12-31-38.090504.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-02-09T12-31-38.090504.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2024_02_09T12_31_38.090504", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-02-09T12-31-38.090504.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-02-09T12-31-38.090504.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2024_02_09T12_31_38.090504", "path": 
["**/details_harness|hendrycksTest-college_physics|5_2024-02-09T12-31-38.090504.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-02-09T12-31-38.090504.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2024_02_09T12_31_38.090504", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-02-09T12-31-38.090504.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-02-09T12-31-38.090504.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2024_02_09T12_31_38.090504", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-09T12-31-38.090504.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-09T12-31-38.090504.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2024_02_09T12_31_38.090504", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-02-09T12-31-38.090504.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-02-09T12-31-38.090504.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2024_02_09T12_31_38.090504", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-09T12-31-38.090504.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-09T12-31-38.090504.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2024_02_09T12_31_38.090504", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-09T12-31-38.090504.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-09T12-31-38.090504.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", 
"data_files": [{"split": "2024_02_09T12_31_38.090504", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-02-09T12-31-38.090504.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-02-09T12-31-38.090504.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2024_02_09T12_31_38.090504", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-02-09T12-31-38.090504.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-02-09T12-31-38.090504.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2024_02_09T12_31_38.090504", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-02-09T12-31-38.090504.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-02-09T12-31-38.090504.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2024_02_09T12_31_38.090504", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-09T12-31-38.090504.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-09T12-31-38.090504.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2024_02_09T12_31_38.090504", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-09T12-31-38.090504.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-09T12-31-38.090504.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2024_02_09T12_31_38.090504", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-09T12-31-38.090504.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-09T12-31-38.090504.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2024_02_09T12_31_38.090504", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-02-09T12-31-38.090504.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-02-09T12-31-38.090504.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2024_02_09T12_31_38.090504", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-09T12-31-38.090504.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-09T12-31-38.090504.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2024_02_09T12_31_38.090504", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-09T12-31-38.090504.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-09T12-31-38.090504.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2024_02_09T12_31_38.090504", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-09T12-31-38.090504.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-09T12-31-38.090504.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2024_02_09T12_31_38.090504", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-09T12-31-38.090504.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-09T12-31-38.090504.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2024_02_09T12_31_38.090504", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-02-09T12-31-38.090504.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-02-09T12-31-38.090504.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2024_02_09T12_31_38.090504", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-09T12-31-38.090504.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-09T12-31-38.090504.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2024_02_09T12_31_38.090504", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-09T12-31-38.090504.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-09T12-31-38.090504.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2024_02_09T12_31_38.090504", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-09T12-31-38.090504.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-09T12-31-38.090504.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2024_02_09T12_31_38.090504", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-09T12-31-38.090504.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-09T12-31-38.090504.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2024_02_09T12_31_38.090504", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-02-09T12-31-38.090504.parquet"]}, {"split": "latest", 
"path": ["**/details_harness|hendrycksTest-human_aging|5_2024-02-09T12-31-38.090504.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2024_02_09T12_31_38.090504", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-02-09T12-31-38.090504.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-02-09T12-31-38.090504.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2024_02_09T12_31_38.090504", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-02-09T12-31-38.090504.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-02-09T12-31-38.090504.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2024_02_09T12_31_38.090504", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-02-09T12-31-38.090504.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-02-09T12-31-38.090504.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2024_02_09T12_31_38.090504", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-09T12-31-38.090504.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-09T12-31-38.090504.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2024_02_09T12_31_38.090504", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-02-09T12-31-38.090504.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-02-09T12-31-38.090504.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2024_02_09T12_31_38.090504", "path": ["**/details_harness|hendrycksTest-management|5_2024-02-09T12-31-38.090504.parquet"]}, 
{"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2024-02-09T12-31-38.090504.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2024_02_09T12_31_38.090504", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-02-09T12-31-38.090504.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-02-09T12-31-38.090504.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2024_02_09T12_31_38.090504", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-02-09T12-31-38.090504.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-02-09T12-31-38.090504.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2024_02_09T12_31_38.090504", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-02-09T12-31-38.090504.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-02-09T12-31-38.090504.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2024_02_09T12_31_38.090504", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-02-09T12-31-38.090504.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-02-09T12-31-38.090504.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2024_02_09T12_31_38.090504", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-09T12-31-38.090504.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-09T12-31-38.090504.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2024_02_09T12_31_38.090504", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-02-09T12-31-38.090504.parquet"]}, {"split": "latest", 
"path": ["**/details_harness|hendrycksTest-nutrition|5_2024-02-09T12-31-38.090504.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2024_02_09T12_31_38.090504", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-02-09T12-31-38.090504.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-02-09T12-31-38.090504.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2024_02_09T12_31_38.090504", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-02-09T12-31-38.090504.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-02-09T12-31-38.090504.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2024_02_09T12_31_38.090504", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-02-09T12-31-38.090504.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-02-09T12-31-38.090504.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2024_02_09T12_31_38.090504", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-02-09T12-31-38.090504.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-02-09T12-31-38.090504.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2024_02_09T12_31_38.090504", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-02-09T12-31-38.090504.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-02-09T12-31-38.090504.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2024_02_09T12_31_38.090504", "path": 
["**/details_harness|hendrycksTest-professional_psychology|5_2024-02-09T12-31-38.090504.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-02-09T12-31-38.090504.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2024_02_09T12_31_38.090504", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-02-09T12-31-38.090504.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-02-09T12-31-38.090504.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2024_02_09T12_31_38.090504", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-02-09T12-31-38.090504.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-02-09T12-31-38.090504.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2024_02_09T12_31_38.090504", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-02-09T12-31-38.090504.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-02-09T12-31-38.090504.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2024_02_09T12_31_38.090504", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-09T12-31-38.090504.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-09T12-31-38.090504.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2024_02_09T12_31_38.090504", "path": ["**/details_harness|hendrycksTest-virology|5_2024-02-09T12-31-38.090504.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2024-02-09T12-31-38.090504.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2024_02_09T12_31_38.090504", 
"path": ["**/details_harness|hendrycksTest-world_religions|5_2024-02-09T12-31-38.090504.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-02-09T12-31-38.090504.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2024_02_09T12_31_38.090504", "path": ["**/details_harness|truthfulqa:mc|0_2024-02-09T12-31-38.090504.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2024-02-09T12-31-38.090504.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2024_02_09T12_31_38.090504", "path": ["**/details_harness|winogrande|5_2024-02-09T12-31-38.090504.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2024-02-09T12-31-38.090504.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2024_02_09T12_31_38.090504", "path": ["results_2024-02-09T12-31-38.090504.parquet"]}, {"split": "latest", "path": ["results_2024-02-09T12-31-38.090504.parquet"]}]}]}
2024-02-09T12:34:24+00:00
[]
[]
TAGS #region-us
# Dataset Card for Evaluation run of Josephgflowers/3BigReasonCinder Dataset automatically created during the evaluation run of model Josephgflowers/3BigReasonCinder on the Open LLM Leaderboard. The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard). To load the details from a run, you can for instance do the following: ## Latest results These are the latest results from run 2024-02-09T12:31:38.090504 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the "latest" split for each eval): ## Dataset Details ### Dataset Description - Curated by: - Funded by [optional]: - Shared by [optional]: - Language(s) (NLP): - License: ### Dataset Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Out-of-Scope Use ## Dataset Structure ## Dataset Creation ### Curation Rationale ### Source Data #### Data Collection and Processing #### Who are the source data producers? ### Annotations [optional] #### Annotation process #### Who are the annotators? #### Personal and Sensitive Information ## Bias, Risks, and Limitations ### Recommendations Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Dataset Card Authors [optional] ## Dataset Card Contact
[ "# Dataset Card for Evaluation run of Josephgflowers/3BigReasonCinder\n\n\n\nDataset automatically created during the evaluation run of model Josephgflowers/3BigReasonCinder on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2024-02-09T12:31:38.090504(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. 
More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Evaluation run of Josephgflowers/3BigReasonCinder\n\n\n\nDataset automatically created during the evaluation run of model Josephgflowers/3BigReasonCinder on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2024-02-09T12:31:38.090504(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. 
More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
3cb40d6757520d24b21313fc65a93cbfb21f4572
# Dataset Card for Evaluation run of Technoculture/MT7Bi-alpha-dpo-v0.2 <!-- Provide a quick summary of the dataset. --> Dataset automatically created during the evaluation run of model [Technoculture/MT7Bi-alpha-dpo-v0.2](https://huggingface.co/Technoculture/MT7Bi-alpha-dpo-v0.2) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)). To load the details from a run, you can for instance do the following: ```python from datasets import load_dataset data = load_dataset("open-llm-leaderboard/details_Technoculture__MT7Bi-alpha-dpo-v0.2", "harness_winogrande_5", split="train") ``` ## Latest results These are the [latest results from run 2024-02-09T12:50:13.790724](https://huggingface.co/datasets/open-llm-leaderboard/details_Technoculture__MT7Bi-alpha-dpo-v0.2/blob/main/results_2024-02-09T12-50-13.790724.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the "latest" split for each eval): ```python { "all": { "acc": 0.5274798937736077, "acc_stderr": 0.034244329313021585, "acc_norm": 0.5324573781856667, "acc_norm_stderr": 0.034973478659411146, "mc1": 0.29008567931456547, "mc1_stderr": 0.01588623687420952, "mc2": 0.45484028768936574, "mc2_stderr": 0.015178684073869702 }, "harness|arc:challenge|25": { "acc": 0.5204778156996587, "acc_stderr": 0.01459913135303501, "acc_norm": 0.5469283276450512, "acc_norm_stderr": 0.014546892052005628 }, "harness|hellaswag|10": { "acc": 0.5714997012547302, "acc_stderr": 0.0049385003039902845, "acc_norm": 0.7589125672176857, "acc_norm_stderr": 0.004268690572638815 }, "harness|hendrycksTest-abstract_algebra|5": { "acc": 0.26, "acc_stderr": 0.0440844002276808, "acc_norm": 0.26, "acc_norm_stderr": 0.0440844002276808 }, "harness|hendrycksTest-anatomy|5": { "acc": 0.5185185185185185, "acc_stderr": 0.043163785995113245, "acc_norm": 0.5185185185185185, "acc_norm_stderr": 0.043163785995113245 }, "harness|hendrycksTest-astronomy|5": { "acc": 0.47368421052631576, "acc_stderr": 0.04063302731486671, "acc_norm": 0.47368421052631576, "acc_norm_stderr": 0.04063302731486671 }, "harness|hendrycksTest-business_ethics|5": { "acc": 0.48, "acc_stderr": 0.050211673156867795, "acc_norm": 0.48, "acc_norm_stderr": 0.050211673156867795 }, "harness|hendrycksTest-clinical_knowledge|5": { "acc": 0.6339622641509434, "acc_stderr": 0.029647813539365242, "acc_norm": 0.6339622641509434, "acc_norm_stderr": 0.029647813539365242 }, "harness|hendrycksTest-college_biology|5": { "acc": 0.5833333333333334, "acc_stderr": 0.04122728707651282, "acc_norm": 0.5833333333333334, "acc_norm_stderr": 0.04122728707651282 }, "harness|hendrycksTest-college_chemistry|5": { "acc": 0.38, "acc_stderr": 0.04878317312145633, "acc_norm": 0.38, "acc_norm_stderr": 0.04878317312145633 }, "harness|hendrycksTest-college_computer_science|5": { "acc": 0.44, "acc_stderr": 0.04988876515698589, "acc_norm": 0.44, 
"acc_norm_stderr": 0.04988876515698589 }, "harness|hendrycksTest-college_mathematics|5": { "acc": 0.32, "acc_stderr": 0.04688261722621505, "acc_norm": 0.32, "acc_norm_stderr": 0.04688261722621505 }, "harness|hendrycksTest-college_medicine|5": { "acc": 0.48554913294797686, "acc_stderr": 0.03810871630454764, "acc_norm": 0.48554913294797686, "acc_norm_stderr": 0.03810871630454764 }, "harness|hendrycksTest-college_physics|5": { "acc": 0.3235294117647059, "acc_stderr": 0.04655010411319616, "acc_norm": 0.3235294117647059, "acc_norm_stderr": 0.04655010411319616 }, "harness|hendrycksTest-computer_security|5": { "acc": 0.61, "acc_stderr": 0.04902071300001975, "acc_norm": 0.61, "acc_norm_stderr": 0.04902071300001975 }, "harness|hendrycksTest-conceptual_physics|5": { "acc": 0.4553191489361702, "acc_stderr": 0.032555253593403555, "acc_norm": 0.4553191489361702, "acc_norm_stderr": 0.032555253593403555 }, "harness|hendrycksTest-econometrics|5": { "acc": 0.35964912280701755, "acc_stderr": 0.045144961328736334, "acc_norm": 0.35964912280701755, "acc_norm_stderr": 0.045144961328736334 }, "harness|hendrycksTest-electrical_engineering|5": { "acc": 0.4689655172413793, "acc_stderr": 0.04158632762097828, "acc_norm": 0.4689655172413793, "acc_norm_stderr": 0.04158632762097828 }, "harness|hendrycksTest-elementary_mathematics|5": { "acc": 0.30423280423280424, "acc_stderr": 0.023695415009463087, "acc_norm": 0.30423280423280424, "acc_norm_stderr": 0.023695415009463087 }, "harness|hendrycksTest-formal_logic|5": { "acc": 0.31746031746031744, "acc_stderr": 0.04163453031302859, "acc_norm": 0.31746031746031744, "acc_norm_stderr": 0.04163453031302859 }, "harness|hendrycksTest-global_facts|5": { "acc": 0.35, "acc_stderr": 0.04793724854411022, "acc_norm": 0.35, "acc_norm_stderr": 0.04793724854411022 }, "harness|hendrycksTest-high_school_biology|5": { "acc": 0.5741935483870968, "acc_stderr": 0.028129112709165904, "acc_norm": 0.5741935483870968, "acc_norm_stderr": 0.028129112709165904 }, 
"harness|hendrycksTest-high_school_chemistry|5": { "acc": 0.45320197044334976, "acc_stderr": 0.03502544650845872, "acc_norm": 0.45320197044334976, "acc_norm_stderr": 0.03502544650845872 }, "harness|hendrycksTest-high_school_computer_science|5": { "acc": 0.49, "acc_stderr": 0.05024183937956912, "acc_norm": 0.49, "acc_norm_stderr": 0.05024183937956912 }, "harness|hendrycksTest-high_school_european_history|5": { "acc": 0.7272727272727273, "acc_stderr": 0.0347769116216366, "acc_norm": 0.7272727272727273, "acc_norm_stderr": 0.0347769116216366 }, "harness|hendrycksTest-high_school_geography|5": { "acc": 0.6565656565656566, "acc_stderr": 0.03383201223244441, "acc_norm": 0.6565656565656566, "acc_norm_stderr": 0.03383201223244441 }, "harness|hendrycksTest-high_school_government_and_politics|5": { "acc": 0.7202072538860104, "acc_stderr": 0.032396370467357036, "acc_norm": 0.7202072538860104, "acc_norm_stderr": 0.032396370467357036 }, "harness|hendrycksTest-high_school_macroeconomics|5": { "acc": 0.4897435897435897, "acc_stderr": 0.025345672221942374, "acc_norm": 0.4897435897435897, "acc_norm_stderr": 0.025345672221942374 }, "harness|hendrycksTest-high_school_mathematics|5": { "acc": 0.2814814814814815, "acc_stderr": 0.027420019350945284, "acc_norm": 0.2814814814814815, "acc_norm_stderr": 0.027420019350945284 }, "harness|hendrycksTest-high_school_microeconomics|5": { "acc": 0.5168067226890757, "acc_stderr": 0.03246013680375308, "acc_norm": 0.5168067226890757, "acc_norm_stderr": 0.03246013680375308 }, "harness|hendrycksTest-high_school_physics|5": { "acc": 0.31125827814569534, "acc_stderr": 0.037804458505267334, "acc_norm": 0.31125827814569534, "acc_norm_stderr": 0.037804458505267334 }, "harness|hendrycksTest-high_school_psychology|5": { "acc": 0.7247706422018348, "acc_stderr": 0.019149093743155203, "acc_norm": 0.7247706422018348, "acc_norm_stderr": 0.019149093743155203 }, "harness|hendrycksTest-high_school_statistics|5": { "acc": 0.4166666666666667, "acc_stderr": 
0.03362277436608043, "acc_norm": 0.4166666666666667, "acc_norm_stderr": 0.03362277436608043 }, "harness|hendrycksTest-high_school_us_history|5": { "acc": 0.6666666666666666, "acc_stderr": 0.03308611113236436, "acc_norm": 0.6666666666666666, "acc_norm_stderr": 0.03308611113236436 }, "harness|hendrycksTest-high_school_world_history|5": { "acc": 0.7510548523206751, "acc_stderr": 0.028146970599422644, "acc_norm": 0.7510548523206751, "acc_norm_stderr": 0.028146970599422644 }, "harness|hendrycksTest-human_aging|5": { "acc": 0.5964125560538116, "acc_stderr": 0.03292802819330314, "acc_norm": 0.5964125560538116, "acc_norm_stderr": 0.03292802819330314 }, "harness|hendrycksTest-human_sexuality|5": { "acc": 0.6106870229007634, "acc_stderr": 0.04276486542814591, "acc_norm": 0.6106870229007634, "acc_norm_stderr": 0.04276486542814591 }, "harness|hendrycksTest-international_law|5": { "acc": 0.6528925619834711, "acc_stderr": 0.04345724570292534, "acc_norm": 0.6528925619834711, "acc_norm_stderr": 0.04345724570292534 }, "harness|hendrycksTest-jurisprudence|5": { "acc": 0.6203703703703703, "acc_stderr": 0.04691521224077742, "acc_norm": 0.6203703703703703, "acc_norm_stderr": 0.04691521224077742 }, "harness|hendrycksTest-logical_fallacies|5": { "acc": 0.6257668711656442, "acc_stderr": 0.03802068102899615, "acc_norm": 0.6257668711656442, "acc_norm_stderr": 0.03802068102899615 }, "harness|hendrycksTest-machine_learning|5": { "acc": 0.41964285714285715, "acc_stderr": 0.04684099321077106, "acc_norm": 0.41964285714285715, "acc_norm_stderr": 0.04684099321077106 }, "harness|hendrycksTest-management|5": { "acc": 0.7475728155339806, "acc_stderr": 0.04301250399690878, "acc_norm": 0.7475728155339806, "acc_norm_stderr": 0.04301250399690878 }, "harness|hendrycksTest-marketing|5": { "acc": 0.7735042735042735, "acc_stderr": 0.027421007295392926, "acc_norm": 0.7735042735042735, "acc_norm_stderr": 0.027421007295392926 }, "harness|hendrycksTest-medical_genetics|5": { "acc": 0.65, "acc_stderr": 
0.047937248544110196, "acc_norm": 0.65, "acc_norm_stderr": 0.047937248544110196 }, "harness|hendrycksTest-miscellaneous|5": { "acc": 0.7241379310344828, "acc_stderr": 0.01598281477469563, "acc_norm": 0.7241379310344828, "acc_norm_stderr": 0.01598281477469563 }, "harness|hendrycksTest-moral_disputes|5": { "acc": 0.6011560693641619, "acc_stderr": 0.026362437574546545, "acc_norm": 0.6011560693641619, "acc_norm_stderr": 0.026362437574546545 }, "harness|hendrycksTest-moral_scenarios|5": { "acc": 0.2558659217877095, "acc_stderr": 0.014593620923210723, "acc_norm": 0.2558659217877095, "acc_norm_stderr": 0.014593620923210723 }, "harness|hendrycksTest-nutrition|5": { "acc": 0.6078431372549019, "acc_stderr": 0.027956046165424516, "acc_norm": 0.6078431372549019, "acc_norm_stderr": 0.027956046165424516 }, "harness|hendrycksTest-philosophy|5": { "acc": 0.5627009646302251, "acc_stderr": 0.0281739177617629, "acc_norm": 0.5627009646302251, "acc_norm_stderr": 0.0281739177617629 }, "harness|hendrycksTest-prehistory|5": { "acc": 0.5648148148148148, "acc_stderr": 0.02758600622160771, "acc_norm": 0.5648148148148148, "acc_norm_stderr": 0.02758600622160771 }, "harness|hendrycksTest-professional_accounting|5": { "acc": 0.3546099290780142, "acc_stderr": 0.02853865002887864, "acc_norm": 0.3546099290780142, "acc_norm_stderr": 0.02853865002887864 }, "harness|hendrycksTest-professional_law|5": { "acc": 0.38005215123859193, "acc_stderr": 0.012397328205137809, "acc_norm": 0.38005215123859193, "acc_norm_stderr": 0.012397328205137809 }, "harness|hendrycksTest-professional_medicine|5": { "acc": 0.6102941176470589, "acc_stderr": 0.029624663581159703, "acc_norm": 0.6102941176470589, "acc_norm_stderr": 0.029624663581159703 }, "harness|hendrycksTest-professional_psychology|5": { "acc": 0.5261437908496732, "acc_stderr": 0.020200164564804588, "acc_norm": 0.5261437908496732, "acc_norm_stderr": 0.020200164564804588 }, "harness|hendrycksTest-public_relations|5": { "acc": 0.6090909090909091, "acc_stderr": 
0.046737523336702384, "acc_norm": 0.6090909090909091, "acc_norm_stderr": 0.046737523336702384 }, "harness|hendrycksTest-security_studies|5": { "acc": 0.6285714285714286, "acc_stderr": 0.03093285879278986, "acc_norm": 0.6285714285714286, "acc_norm_stderr": 0.03093285879278986 }, "harness|hendrycksTest-sociology|5": { "acc": 0.6019900497512438, "acc_stderr": 0.03461199429040013, "acc_norm": 0.6019900497512438, "acc_norm_stderr": 0.03461199429040013 }, "harness|hendrycksTest-us_foreign_policy|5": { "acc": 0.74, "acc_stderr": 0.0440844002276808, "acc_norm": 0.74, "acc_norm_stderr": 0.0440844002276808 }, "harness|hendrycksTest-virology|5": { "acc": 0.45180722891566266, "acc_stderr": 0.038743715565879536, "acc_norm": 0.45180722891566266, "acc_norm_stderr": 0.038743715565879536 }, "harness|hendrycksTest-world_religions|5": { "acc": 0.6491228070175439, "acc_stderr": 0.03660298834049163, "acc_norm": 0.6491228070175439, "acc_norm_stderr": 0.03660298834049163 }, "harness|truthfulqa:mc|0": { "mc1": 0.29008567931456547, "mc1_stderr": 0.01588623687420952, "mc2": 0.45484028768936574, "mc2_stderr": 0.015178684073869702 }, "harness|winogrande|5": { "acc": 0.7158642462509865, "acc_stderr": 0.01267539278677272 }, "harness|gsm8k|5": { "acc": 0.25928733889310085, "acc_stderr": 0.012071405369905506 } } ``` ## Dataset Details ### Dataset Description <!-- Provide a longer summary of what this dataset is. --> - **Curated by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] ### Dataset Sources [optional] <!-- Provide the basic links for the dataset. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the dataset is intended to be used. 
--> ### Direct Use <!-- This section describes suitable use cases for the dataset. --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. --> [More Information Needed] ## Dataset Structure <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> [More Information Needed] ## Dataset Creation ### Curation Rationale <!-- Motivation for the creation of this dataset. --> [More Information Needed] ### Source Data <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). --> #### Data Collection and Processing <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. --> [More Information Needed] #### Who are the source data producers? <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. --> [More Information Needed] ### Annotations [optional] <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. --> #### Annotation process <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. --> [More Information Needed] #### Who are the annotators? <!-- This section describes the people or systems who created the annotations. 
--> [More Information Needed] #### Personal and Sensitive Information <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. ## Citation [optional] <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Dataset Card Authors [optional] [More Information Needed] ## Dataset Card Contact [More Information Needed]
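The aggregate `acc` reported in the `all` block above is a simple mean over the per-task accuracies listed with it. A minimal sketch of that aggregation, recomputed over a hand-picked subset of the scores above (illustrative only, not the harness's own aggregation code; the full 0.5275 aggregate averages every task entry):

```python
# Recompute a mean accuracy the way the leaderboard's "all" block
# averages per-task scores. Illustrative subset of the results above;
# the real aggregate runs over every evaluated task.
per_task_acc = {
    "harness|hendrycksTest-anatomy|5": 0.5185185185185185,
    "harness|hendrycksTest-astronomy|5": 0.47368421052631576,
    "harness|hendrycksTest-computer_security|5": 0.61,
}
mean_acc = sum(per_task_acc.values()) / len(per_task_acc)
print(f"mean acc over {len(per_task_acc)} tasks: {mean_acc:.4f}")
```

The same pattern applies to `acc_norm`; swap in the normalized scores from the results block.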
open-llm-leaderboard/details_Technoculture__MT7Bi-alpha-dpo-v0.2
[ "region:us" ]
2024-02-09T12:52:01+00:00
{"pretty_name": "Evaluation run of Technoculture/MT7Bi-alpha-dpo-v0.2", "dataset_summary": "Dataset automatically created during the evaluation run of model [Technoculture/MT7Bi-alpha-dpo-v0.2](https://huggingface.co/Technoculture/MT7Bi-alpha-dpo-v0.2) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Technoculture__MT7Bi-alpha-dpo-v0.2\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2024-02-09T12:50:13.790724](https://huggingface.co/datasets/open-llm-leaderboard/details_Technoculture__MT7Bi-alpha-dpo-v0.2/blob/main/results_2024-02-09T12-50-13.790724.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.5274798937736077,\n \"acc_stderr\": 0.034244329313021585,\n \"acc_norm\": 0.5324573781856667,\n \"acc_norm_stderr\": 0.034973478659411146,\n \"mc1\": 0.29008567931456547,\n \"mc1_stderr\": 0.01588623687420952,\n \"mc2\": 0.45484028768936574,\n \"mc2_stderr\": 0.015178684073869702\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.5204778156996587,\n \"acc_stderr\": 0.01459913135303501,\n \"acc_norm\": 0.5469283276450512,\n \"acc_norm_stderr\": 0.014546892052005628\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.5714997012547302,\n \"acc_stderr\": 0.0049385003039902845,\n \"acc_norm\": 0.7589125672176857,\n \"acc_norm_stderr\": 0.004268690572638815\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.26,\n \"acc_stderr\": 0.0440844002276808,\n \"acc_norm\": 0.26,\n \"acc_norm_stderr\": 0.0440844002276808\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.5185185185185185,\n \"acc_stderr\": 0.043163785995113245,\n \"acc_norm\": 0.5185185185185185,\n \"acc_norm_stderr\": 0.043163785995113245\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.47368421052631576,\n \"acc_stderr\": 0.04063302731486671,\n \"acc_norm\": 0.47368421052631576,\n \"acc_norm_stderr\": 0.04063302731486671\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.48,\n \"acc_stderr\": 0.050211673156867795,\n \"acc_norm\": 0.48,\n \"acc_norm_stderr\": 0.050211673156867795\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.6339622641509434,\n \"acc_stderr\": 0.029647813539365242,\n \"acc_norm\": 0.6339622641509434,\n \"acc_norm_stderr\": 0.029647813539365242\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.5833333333333334,\n \"acc_stderr\": 0.04122728707651282,\n \"acc_norm\": 0.5833333333333334,\n \"acc_norm_stderr\": 0.04122728707651282\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 
0.38,\n \"acc_stderr\": 0.04878317312145633,\n \"acc_norm\": 0.38,\n \"acc_norm_stderr\": 0.04878317312145633\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.44,\n \"acc_stderr\": 0.04988876515698589,\n \"acc_norm\": 0.44,\n \"acc_norm_stderr\": 0.04988876515698589\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.32,\n \"acc_stderr\": 0.04688261722621505,\n \"acc_norm\": 0.32,\n \"acc_norm_stderr\": 0.04688261722621505\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.48554913294797686,\n \"acc_stderr\": 0.03810871630454764,\n \"acc_norm\": 0.48554913294797686,\n \"acc_norm_stderr\": 0.03810871630454764\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.3235294117647059,\n \"acc_stderr\": 0.04655010411319616,\n \"acc_norm\": 0.3235294117647059,\n \"acc_norm_stderr\": 0.04655010411319616\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.61,\n \"acc_stderr\": 0.04902071300001975,\n \"acc_norm\": 0.61,\n \"acc_norm_stderr\": 0.04902071300001975\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.4553191489361702,\n \"acc_stderr\": 0.032555253593403555,\n \"acc_norm\": 0.4553191489361702,\n \"acc_norm_stderr\": 0.032555253593403555\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.35964912280701755,\n \"acc_stderr\": 0.045144961328736334,\n \"acc_norm\": 0.35964912280701755,\n \"acc_norm_stderr\": 0.045144961328736334\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.4689655172413793,\n \"acc_stderr\": 0.04158632762097828,\n \"acc_norm\": 0.4689655172413793,\n \"acc_norm_stderr\": 0.04158632762097828\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.30423280423280424,\n \"acc_stderr\": 0.023695415009463087,\n \"acc_norm\": 0.30423280423280424,\n \"acc_norm_stderr\": 0.023695415009463087\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.31746031746031744,\n \"acc_stderr\": 
0.04163453031302859,\n \"acc_norm\": 0.31746031746031744,\n \"acc_norm_stderr\": 0.04163453031302859\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.35,\n \"acc_stderr\": 0.04793724854411022,\n \"acc_norm\": 0.35,\n \"acc_norm_stderr\": 0.04793724854411022\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.5741935483870968,\n \"acc_stderr\": 0.028129112709165904,\n \"acc_norm\": 0.5741935483870968,\n \"acc_norm_stderr\": 0.028129112709165904\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.45320197044334976,\n \"acc_stderr\": 0.03502544650845872,\n \"acc_norm\": 0.45320197044334976,\n \"acc_norm_stderr\": 0.03502544650845872\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.49,\n \"acc_stderr\": 0.05024183937956912,\n \"acc_norm\": 0.49,\n \"acc_norm_stderr\": 0.05024183937956912\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.7272727272727273,\n \"acc_stderr\": 0.0347769116216366,\n \"acc_norm\": 0.7272727272727273,\n \"acc_norm_stderr\": 0.0347769116216366\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.6565656565656566,\n \"acc_stderr\": 0.03383201223244441,\n \"acc_norm\": 0.6565656565656566,\n \"acc_norm_stderr\": 0.03383201223244441\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.7202072538860104,\n \"acc_stderr\": 0.032396370467357036,\n \"acc_norm\": 0.7202072538860104,\n \"acc_norm_stderr\": 0.032396370467357036\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.4897435897435897,\n \"acc_stderr\": 0.025345672221942374,\n \"acc_norm\": 0.4897435897435897,\n \"acc_norm_stderr\": 0.025345672221942374\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.2814814814814815,\n \"acc_stderr\": 0.027420019350945284,\n \"acc_norm\": 0.2814814814814815,\n \"acc_norm_stderr\": 0.027420019350945284\n },\n 
\"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.5168067226890757,\n \"acc_stderr\": 0.03246013680375308,\n \"acc_norm\": 0.5168067226890757,\n \"acc_norm_stderr\": 0.03246013680375308\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.31125827814569534,\n \"acc_stderr\": 0.037804458505267334,\n \"acc_norm\": 0.31125827814569534,\n \"acc_norm_stderr\": 0.037804458505267334\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.7247706422018348,\n \"acc_stderr\": 0.019149093743155203,\n \"acc_norm\": 0.7247706422018348,\n \"acc_norm_stderr\": 0.019149093743155203\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.4166666666666667,\n \"acc_stderr\": 0.03362277436608043,\n \"acc_norm\": 0.4166666666666667,\n \"acc_norm_stderr\": 0.03362277436608043\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.6666666666666666,\n \"acc_stderr\": 0.03308611113236436,\n \"acc_norm\": 0.6666666666666666,\n \"acc_norm_stderr\": 0.03308611113236436\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.7510548523206751,\n \"acc_stderr\": 0.028146970599422644,\n \"acc_norm\": 0.7510548523206751,\n \"acc_norm_stderr\": 0.028146970599422644\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.5964125560538116,\n \"acc_stderr\": 0.03292802819330314,\n \"acc_norm\": 0.5964125560538116,\n \"acc_norm_stderr\": 0.03292802819330314\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.6106870229007634,\n \"acc_stderr\": 0.04276486542814591,\n \"acc_norm\": 0.6106870229007634,\n \"acc_norm_stderr\": 0.04276486542814591\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.6528925619834711,\n \"acc_stderr\": 0.04345724570292534,\n \"acc_norm\": 0.6528925619834711,\n \"acc_norm_stderr\": 0.04345724570292534\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.6203703703703703,\n \"acc_stderr\": 0.04691521224077742,\n 
\"acc_norm\": 0.6203703703703703,\n \"acc_norm_stderr\": 0.04691521224077742\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.6257668711656442,\n \"acc_stderr\": 0.03802068102899615,\n \"acc_norm\": 0.6257668711656442,\n \"acc_norm_stderr\": 0.03802068102899615\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.41964285714285715,\n \"acc_stderr\": 0.04684099321077106,\n \"acc_norm\": 0.41964285714285715,\n \"acc_norm_stderr\": 0.04684099321077106\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.7475728155339806,\n \"acc_stderr\": 0.04301250399690878,\n \"acc_norm\": 0.7475728155339806,\n \"acc_norm_stderr\": 0.04301250399690878\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.7735042735042735,\n \"acc_stderr\": 0.027421007295392926,\n \"acc_norm\": 0.7735042735042735,\n \"acc_norm_stderr\": 0.027421007295392926\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.65,\n \"acc_stderr\": 0.047937248544110196,\n \"acc_norm\": 0.65,\n \"acc_norm_stderr\": 0.047937248544110196\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.7241379310344828,\n \"acc_stderr\": 0.01598281477469563,\n \"acc_norm\": 0.7241379310344828,\n \"acc_norm_stderr\": 0.01598281477469563\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.6011560693641619,\n \"acc_stderr\": 0.026362437574546545,\n \"acc_norm\": 0.6011560693641619,\n \"acc_norm_stderr\": 0.026362437574546545\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.2558659217877095,\n \"acc_stderr\": 0.014593620923210723,\n \"acc_norm\": 0.2558659217877095,\n \"acc_norm_stderr\": 0.014593620923210723\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.6078431372549019,\n \"acc_stderr\": 0.027956046165424516,\n \"acc_norm\": 0.6078431372549019,\n \"acc_norm_stderr\": 0.027956046165424516\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.5627009646302251,\n \"acc_stderr\": 0.0281739177617629,\n 
\"acc_norm\": 0.5627009646302251,\n \"acc_norm_stderr\": 0.0281739177617629\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.5648148148148148,\n \"acc_stderr\": 0.02758600622160771,\n \"acc_norm\": 0.5648148148148148,\n \"acc_norm_stderr\": 0.02758600622160771\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.3546099290780142,\n \"acc_stderr\": 0.02853865002887864,\n \"acc_norm\": 0.3546099290780142,\n \"acc_norm_stderr\": 0.02853865002887864\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.38005215123859193,\n \"acc_stderr\": 0.012397328205137809,\n \"acc_norm\": 0.38005215123859193,\n \"acc_norm_stderr\": 0.012397328205137809\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.6102941176470589,\n \"acc_stderr\": 0.029624663581159703,\n \"acc_norm\": 0.6102941176470589,\n \"acc_norm_stderr\": 0.029624663581159703\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.5261437908496732,\n \"acc_stderr\": 0.020200164564804588,\n \"acc_norm\": 0.5261437908496732,\n \"acc_norm_stderr\": 0.020200164564804588\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6090909090909091,\n \"acc_stderr\": 0.046737523336702384,\n \"acc_norm\": 0.6090909090909091,\n \"acc_norm_stderr\": 0.046737523336702384\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.6285714285714286,\n \"acc_stderr\": 0.03093285879278986,\n \"acc_norm\": 0.6285714285714286,\n \"acc_norm_stderr\": 0.03093285879278986\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.6019900497512438,\n \"acc_stderr\": 0.03461199429040013,\n \"acc_norm\": 0.6019900497512438,\n \"acc_norm_stderr\": 0.03461199429040013\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.74,\n \"acc_stderr\": 0.0440844002276808,\n \"acc_norm\": 0.74,\n \"acc_norm_stderr\": 0.0440844002276808\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.45180722891566266,\n \"acc_stderr\": 
0.038743715565879536,\n \"acc_norm\": 0.45180722891566266,\n \"acc_norm_stderr\": 0.038743715565879536\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.6491228070175439,\n \"acc_stderr\": 0.03660298834049163,\n \"acc_norm\": 0.6491228070175439,\n \"acc_norm_stderr\": 0.03660298834049163\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.29008567931456547,\n \"mc1_stderr\": 0.01588623687420952,\n \"mc2\": 0.45484028768936574,\n \"mc2_stderr\": 0.015178684073869702\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7158642462509865,\n \"acc_stderr\": 0.01267539278677272\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.25928733889310085,\n \"acc_stderr\": 0.012071405369905506\n }\n}\n```", "repo_url": "https://huggingface.co/Technoculture/MT7Bi-alpha-dpo-v0.2", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2024_02_09T12_50_13.790724", "path": ["**/details_harness|arc:challenge|25_2024-02-09T12-50-13.790724.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2024-02-09T12-50-13.790724.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2024_02_09T12_50_13.790724", "path": ["**/details_harness|gsm8k|5_2024-02-09T12-50-13.790724.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2024-02-09T12-50-13.790724.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2024_02_09T12_50_13.790724", "path": ["**/details_harness|hellaswag|10_2024-02-09T12-50-13.790724.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2024-02-09T12-50-13.790724.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2024_02_09T12_50_13.790724", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-09T12-50-13.790724.parquet", 
"**/details_harness|hendrycksTest-anatomy|5_2024-02-09T12-50-13.790724.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-02-09T12-50-13.790724.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-02-09T12-50-13.790724.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-09T12-50-13.790724.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-02-09T12-50-13.790724.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-02-09T12-50-13.790724.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-02-09T12-50-13.790724.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-02-09T12-50-13.790724.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-02-09T12-50-13.790724.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-02-09T12-50-13.790724.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-02-09T12-50-13.790724.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-09T12-50-13.790724.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-02-09T12-50-13.790724.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-09T12-50-13.790724.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-09T12-50-13.790724.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-02-09T12-50-13.790724.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-02-09T12-50-13.790724.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-02-09T12-50-13.790724.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-09T12-50-13.790724.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-09T12-50-13.790724.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-09T12-50-13.790724.parquet", 
"**/details_harness|hendrycksTest-high_school_geography|5_2024-02-09T12-50-13.790724.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-09T12-50-13.790724.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-09T12-50-13.790724.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-09T12-50-13.790724.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-09T12-50-13.790724.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-02-09T12-50-13.790724.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-09T12-50-13.790724.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-09T12-50-13.790724.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-09T12-50-13.790724.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-09T12-50-13.790724.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-02-09T12-50-13.790724.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-02-09T12-50-13.790724.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-02-09T12-50-13.790724.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-02-09T12-50-13.790724.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-09T12-50-13.790724.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-02-09T12-50-13.790724.parquet", "**/details_harness|hendrycksTest-management|5_2024-02-09T12-50-13.790724.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-02-09T12-50-13.790724.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-02-09T12-50-13.790724.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-02-09T12-50-13.790724.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-02-09T12-50-13.790724.parquet", 
"**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-09T12-50-13.790724.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-02-09T12-50-13.790724.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-02-09T12-50-13.790724.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-02-09T12-50-13.790724.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-02-09T12-50-13.790724.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-02-09T12-50-13.790724.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-02-09T12-50-13.790724.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-02-09T12-50-13.790724.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-02-09T12-50-13.790724.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-02-09T12-50-13.790724.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-02-09T12-50-13.790724.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-09T12-50-13.790724.parquet", "**/details_harness|hendrycksTest-virology|5_2024-02-09T12-50-13.790724.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-02-09T12-50-13.790724.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-09T12-50-13.790724.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-02-09T12-50-13.790724.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-02-09T12-50-13.790724.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-02-09T12-50-13.790724.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-09T12-50-13.790724.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-02-09T12-50-13.790724.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-02-09T12-50-13.790724.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-02-09T12-50-13.790724.parquet", 
"**/details_harness|hendrycksTest-college_mathematics|5_2024-02-09T12-50-13.790724.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-02-09T12-50-13.790724.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-02-09T12-50-13.790724.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-02-09T12-50-13.790724.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-09T12-50-13.790724.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-02-09T12-50-13.790724.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-09T12-50-13.790724.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-09T12-50-13.790724.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-02-09T12-50-13.790724.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-02-09T12-50-13.790724.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-02-09T12-50-13.790724.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-09T12-50-13.790724.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-09T12-50-13.790724.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-09T12-50-13.790724.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-02-09T12-50-13.790724.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-09T12-50-13.790724.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-09T12-50-13.790724.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-09T12-50-13.790724.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-09T12-50-13.790724.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-02-09T12-50-13.790724.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-09T12-50-13.790724.parquet", 
"**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-09T12-50-13.790724.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-09T12-50-13.790724.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-09T12-50-13.790724.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-02-09T12-50-13.790724.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-02-09T12-50-13.790724.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-02-09T12-50-13.790724.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-02-09T12-50-13.790724.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-09T12-50-13.790724.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-02-09T12-50-13.790724.parquet", "**/details_harness|hendrycksTest-management|5_2024-02-09T12-50-13.790724.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-02-09T12-50-13.790724.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-02-09T12-50-13.790724.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-02-09T12-50-13.790724.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-02-09T12-50-13.790724.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-09T12-50-13.790724.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-02-09T12-50-13.790724.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-02-09T12-50-13.790724.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-02-09T12-50-13.790724.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-02-09T12-50-13.790724.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-02-09T12-50-13.790724.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-02-09T12-50-13.790724.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-02-09T12-50-13.790724.parquet", 
"**/details_harness|hendrycksTest-public_relations|5_2024-02-09T12-50-13.790724.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-02-09T12-50-13.790724.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-02-09T12-50-13.790724.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-09T12-50-13.790724.parquet", "**/details_harness|hendrycksTest-virology|5_2024-02-09T12-50-13.790724.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-02-09T12-50-13.790724.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2024_02_09T12_50_13.790724", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-09T12-50-13.790724.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-09T12-50-13.790724.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2024_02_09T12_50_13.790724", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-02-09T12-50-13.790724.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-02-09T12-50-13.790724.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2024_02_09T12_50_13.790724", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-02-09T12-50-13.790724.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-02-09T12-50-13.790724.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2024_02_09T12_50_13.790724", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-02-09T12-50-13.790724.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-02-09T12-50-13.790724.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2024_02_09T12_50_13.790724", "path": 
["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-09T12-50-13.790724.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-09T12-50-13.790724.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2024_02_09T12_50_13.790724", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-02-09T12-50-13.790724.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-02-09T12-50-13.790724.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2024_02_09T12_50_13.790724", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-02-09T12-50-13.790724.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-02-09T12-50-13.790724.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2024_02_09T12_50_13.790724", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-02-09T12-50-13.790724.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-02-09T12-50-13.790724.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2024_02_09T12_50_13.790724", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-02-09T12-50-13.790724.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-02-09T12-50-13.790724.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2024_02_09T12_50_13.790724", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-02-09T12-50-13.790724.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-02-09T12-50-13.790724.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2024_02_09T12_50_13.790724", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-02-09T12-50-13.790724.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-02-09T12-50-13.790724.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2024_02_09T12_50_13.790724", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-02-09T12-50-13.790724.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-02-09T12-50-13.790724.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2024_02_09T12_50_13.790724", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-09T12-50-13.790724.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-09T12-50-13.790724.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2024_02_09T12_50_13.790724", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-02-09T12-50-13.790724.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-02-09T12-50-13.790724.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2024_02_09T12_50_13.790724", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-09T12-50-13.790724.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-09T12-50-13.790724.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2024_02_09T12_50_13.790724", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-09T12-50-13.790724.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-09T12-50-13.790724.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2024_02_09T12_50_13.790724", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-02-09T12-50-13.790724.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-02-09T12-50-13.790724.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2024_02_09T12_50_13.790724", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-02-09T12-50-13.790724.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-02-09T12-50-13.790724.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2024_02_09T12_50_13.790724", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-02-09T12-50-13.790724.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-02-09T12-50-13.790724.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2024_02_09T12_50_13.790724", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-09T12-50-13.790724.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-09T12-50-13.790724.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2024_02_09T12_50_13.790724", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-09T12-50-13.790724.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-09T12-50-13.790724.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2024_02_09T12_50_13.790724", "path": 
["**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-09T12-50-13.790724.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-09T12-50-13.790724.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2024_02_09T12_50_13.790724", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-02-09T12-50-13.790724.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-02-09T12-50-13.790724.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2024_02_09T12_50_13.790724", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-09T12-50-13.790724.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-09T12-50-13.790724.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2024_02_09T12_50_13.790724", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-09T12-50-13.790724.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-09T12-50-13.790724.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2024_02_09T12_50_13.790724", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-09T12-50-13.790724.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-09T12-50-13.790724.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2024_02_09T12_50_13.790724", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-09T12-50-13.790724.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-09T12-50-13.790724.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2024_02_09T12_50_13.790724", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-02-09T12-50-13.790724.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-02-09T12-50-13.790724.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2024_02_09T12_50_13.790724", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-09T12-50-13.790724.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-09T12-50-13.790724.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2024_02_09T12_50_13.790724", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-09T12-50-13.790724.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-09T12-50-13.790724.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2024_02_09T12_50_13.790724", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-09T12-50-13.790724.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-09T12-50-13.790724.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2024_02_09T12_50_13.790724", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-09T12-50-13.790724.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-09T12-50-13.790724.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2024_02_09T12_50_13.790724", 
"path": ["**/details_harness|hendrycksTest-human_aging|5_2024-02-09T12-50-13.790724.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-02-09T12-50-13.790724.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2024_02_09T12_50_13.790724", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-02-09T12-50-13.790724.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-02-09T12-50-13.790724.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2024_02_09T12_50_13.790724", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-02-09T12-50-13.790724.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-02-09T12-50-13.790724.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2024_02_09T12_50_13.790724", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-02-09T12-50-13.790724.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-02-09T12-50-13.790724.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2024_02_09T12_50_13.790724", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-09T12-50-13.790724.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-09T12-50-13.790724.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2024_02_09T12_50_13.790724", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-02-09T12-50-13.790724.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-02-09T12-50-13.790724.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": 
"2024_02_09T12_50_13.790724", "path": ["**/details_harness|hendrycksTest-management|5_2024-02-09T12-50-13.790724.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2024-02-09T12-50-13.790724.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2024_02_09T12_50_13.790724", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-02-09T12-50-13.790724.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-02-09T12-50-13.790724.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2024_02_09T12_50_13.790724", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-02-09T12-50-13.790724.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-02-09T12-50-13.790724.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2024_02_09T12_50_13.790724", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-02-09T12-50-13.790724.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-02-09T12-50-13.790724.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2024_02_09T12_50_13.790724", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-02-09T12-50-13.790724.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-02-09T12-50-13.790724.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2024_02_09T12_50_13.790724", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-09T12-50-13.790724.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-09T12-50-13.790724.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": 
"2024_02_09T12_50_13.790724", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-02-09T12-50-13.790724.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-02-09T12-50-13.790724.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2024_02_09T12_50_13.790724", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-02-09T12-50-13.790724.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-02-09T12-50-13.790724.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2024_02_09T12_50_13.790724", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-02-09T12-50-13.790724.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-02-09T12-50-13.790724.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2024_02_09T12_50_13.790724", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-02-09T12-50-13.790724.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-02-09T12-50-13.790724.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2024_02_09T12_50_13.790724", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-02-09T12-50-13.790724.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-02-09T12-50-13.790724.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2024_02_09T12_50_13.790724", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-02-09T12-50-13.790724.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-02-09T12-50-13.790724.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2024_02_09T12_50_13.790724", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-02-09T12-50-13.790724.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-02-09T12-50-13.790724.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2024_02_09T12_50_13.790724", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-02-09T12-50-13.790724.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-02-09T12-50-13.790724.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2024_02_09T12_50_13.790724", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-02-09T12-50-13.790724.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-02-09T12-50-13.790724.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2024_02_09T12_50_13.790724", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-02-09T12-50-13.790724.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-02-09T12-50-13.790724.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2024_02_09T12_50_13.790724", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-09T12-50-13.790724.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-09T12-50-13.790724.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2024_02_09T12_50_13.790724", "path": ["**/details_harness|hendrycksTest-virology|5_2024-02-09T12-50-13.790724.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2024-02-09T12-50-13.790724.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2024_02_09T12_50_13.790724", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-02-09T12-50-13.790724.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-02-09T12-50-13.790724.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2024_02_09T12_50_13.790724", "path": ["**/details_harness|truthfulqa:mc|0_2024-02-09T12-50-13.790724.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2024-02-09T12-50-13.790724.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2024_02_09T12_50_13.790724", "path": ["**/details_harness|winogrande|5_2024-02-09T12-50-13.790724.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2024-02-09T12-50-13.790724.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2024_02_09T12_50_13.790724", "path": ["results_2024-02-09T12-50-13.790724.parquet"]}, {"split": "latest", "path": ["results_2024-02-09T12-50-13.790724.parquet"]}]}]}
2024-02-09T12:52:22+00:00
[]
[]
TAGS #region-us
# Dataset Card for Evaluation run of Technoculture/MT7Bi-alpha-dpo-v0.2 Dataset automatically created during the evaluation run of model Technoculture/MT7Bi-alpha-dpo-v0.2 on the Open LLM Leaderboard. The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard). To load the details from a run, you can for instance do the following: ## Latest results These are the latest results from run 2024-02-09T12:50:13.790724 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the "latest" split for each eval): ## Dataset Details ### Dataset Description - Curated by: - Funded by [optional]: - Shared by [optional]: - Language(s) (NLP): - License: ### Dataset Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Out-of-Scope Use ## Dataset Structure ## Dataset Creation ### Curation Rationale ### Source Data #### Data Collection and Processing #### Who are the source data producers? ### Annotations [optional] #### Annotation process #### Who are the annotators? #### Personal and Sensitive Information ## Bias, Risks, and Limitations ### Recommendations Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Dataset Card Authors [optional] ## Dataset Card Contact
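The card text above says "To load the details from a run, you can for instance do the following:" but the code snippet itself was stripped when this text was processed. A minimal sketch of what such a snippet would look like — assuming the `open-llm-leaderboard/details_<org>__<model>` repository naming visible in the Novocoders/Lotus-7B record later in this dump (the exact repo id is not stated in this record):

```python
# Derive the leaderboard "details" repo id from the evaluated model name.
# The "open-llm-leaderboard/details_<org>__<model>" pattern is an assumption
# inferred from other leaderboard records in this dump, not stated here.
model_name = "Technoculture/MT7Bi-alpha-dpo-v0.2"
repo_id = "open-llm-leaderboard/details_" + model_name.replace("/", "__")
print(repo_id)  # open-llm-leaderboard/details_Technoculture__MT7Bi-alpha-dpo-v0.2

# With the `datasets` library installed, one task's results could then be
# loaded like this (requires network access):
#
#   from datasets import load_dataset
#   data = load_dataset(repo_id, "harness_winogrande_5", split="train")
```

Each evaluated task maps to one config name (e.g. `harness_winogrande_5`), matching the `config_name` entries in the metadata above.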
[ "# Dataset Card for Evaluation run of Technoculture/MT7Bi-alpha-dpo-v0.2\n\n\n\nDataset automatically created during the evaluation run of model Technoculture/MT7Bi-alpha-dpo-v0.2 on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2024-02-09T12:50:13.790724 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. 
More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Evaluation run of Technoculture/MT7Bi-alpha-dpo-v0.2\n\n\n\nDataset automatically created during the evaluation run of model Technoculture/MT7Bi-alpha-dpo-v0.2 on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2024-02-09T12:50:13.790724 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. 
More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
917712f4a82e54d3cc5bfbe0eaa1189776f3e879
# Beaver Tails This is a cleansed version of [PKU-Alignment/BeaverTails](https://huggingface.co/datasets/PKU-Alignment/BeaverTails) It has two versions based on sensitivity ## Filtered ### Usage ```python from datasets import load_dataset dataset = load_dataset("Sharathhebbar24/BeaverTails_filtered", split="train") ``` ## Unfiltered ### Usage ```python from datasets import load_dataset dataset = load_dataset("Sharathhebbar24/BeaverTails_unfiltered", split="train") ```
Sharathhebbar24/BeaverTails_unfiltered
[ "task_categories:text-generation", "size_categories:10K<n<100K", "language:en", "license:apache-2.0", "region:us" ]
2024-02-09T12:59:43+00:00
{"language": ["en"], "license": "apache-2.0", "size_categories": ["10K<n<100K"], "task_categories": ["text-generation"], "dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 185218245, "num_examples": 364170}], "download_size": 97617848, "dataset_size": 185218245}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2024-02-09T13:06:59+00:00
[]
[ "en" ]
TAGS #task_categories-text-generation #size_categories-10K<n<100K #language-English #license-apache-2.0 #region-us
# Beaver Tails This is a cleansed version of PKU-Alignment/BeaverTails It has two versions based on sensitivity ## Filtered ### Usage ## Unfiltered ### Usage
[ "# Beaver Tails\n\nThis is a cleansed version of PKU-Alignment/BeaverTails\n\nIt has two versions based on sensitivity", "## Filtered", "### Usage", "## Unfiltered", "### Usage" ]
[ "TAGS\n#task_categories-text-generation #size_categories-10K<n<100K #language-English #license-apache-2.0 #region-us \n", "# Beaver Tails\n\nThis is a cleansed version of PKU-Alignment/BeaverTails\n\nIt has two versions based on sensitivity", "## Filtered", "### Usage", "## Unfiltered", "### Usage" ]
6018b1d10fcebc26defa9380b30dbdb6c868aa6d
# Beaver Tails This is a cleansed version of [PKU-Alignment/BeaverTails](https://huggingface.co/datasets/PKU-Alignment/BeaverTails) It has two versions based on sensitivity ## Filtered ### Usage ```python from datasets import load_dataset dataset = load_dataset("Sharathhebbar24/BeaverTails_filtered", split="train") ``` ## Unfiltered ### Usage ```python from datasets import load_dataset dataset = load_dataset("Sharathhebbar24/BeaverTails_unfiltered", split="train") ```
Sharathhebbar24/BeaverTails_filtered
[ "task_categories:text-generation", "size_categories:10K<n<100K", "language:en", "license:apache-2.0", "region:us" ]
2024-02-09T12:59:52+00:00
{"language": ["en"], "license": "apache-2.0", "size_categories": ["10K<n<100K"], "task_categories": ["text-generation"], "dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 78049982, "num_examples": 161784}], "download_size": 40376094, "dataset_size": 78049982}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2024-02-10T04:49:33+00:00
[]
[ "en" ]
TAGS #task_categories-text-generation #size_categories-10K<n<100K #language-English #license-apache-2.0 #region-us
# Beaver Tails This is a cleansed version of PKU-Alignment/BeaverTails It has two versions based on sensitivity ## Filtered ### Usage ## Unfiltered ### Usage
[ "# Beaver Tails\n\nThis is a cleansed version of PKU-Alignment/BeaverTails\n\nIt has two versions based on sensitivity", "## Filtered", "### Usage", "## Unfiltered", "### Usage" ]
[ "TAGS\n#task_categories-text-generation #size_categories-10K<n<100K #language-English #license-apache-2.0 #region-us \n", "# Beaver Tails\n\nThis is a cleansed version of PKU-Alignment/BeaverTails\n\nIt has two versions based on sensitivity", "## Filtered", "### Usage", "## Unfiltered", "### Usage" ]
955891a8a94080437f8c8adad17bdeefd97befa0
# Dataset Card for Evaluation run of Novocoders/Lotus-7B <!-- Provide a quick summary of the dataset. --> Dataset automatically created during the evaluation run of model [Novocoders/Lotus-7B](https://huggingface.co/Novocoders/Lotus-7B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)). To load the details from a run, you can for instance do the following: ```python from datasets import load_dataset data = load_dataset("open-llm-leaderboard/details_Novocoders__Lotus-7B", "harness_winogrande_5", split="train") ``` ## Latest results These are the [latest results from run 2024-02-09T13:03:53.418083](https://huggingface.co/datasets/open-llm-leaderboard/details_Novocoders__Lotus-7B/blob/main/results_2024-02-09T13-03-53.418083.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the "latest" split for each eval): ```python { "all": { "acc": 0.6496423371761832, "acc_stderr": 0.03215279301715769, "acc_norm": 0.6501043548092335, "acc_norm_stderr": 0.03281746943202546, "mc1": 0.390452876376989, "mc1_stderr": 0.017078230743431448, "mc2": 0.5556599683913465, "mc2_stderr": 0.015308837108837361 }, "harness|arc:challenge|25": { "acc": 0.6245733788395904, "acc_stderr": 0.014150631435111728, "acc_norm": 0.6646757679180887, "acc_norm_stderr": 0.013796182947785562 }, "harness|hellaswag|10": { "acc": 0.6554471220872337, "acc_stderr": 0.0047425103547779025, "acc_norm": 0.8480382393945429, "acc_norm_stderr": 0.003582501596564544 }, "harness|hendrycksTest-abstract_algebra|5": { "acc": 0.37, "acc_stderr": 0.04852365870939099, "acc_norm": 0.37, "acc_norm_stderr": 0.04852365870939099 }, "harness|hendrycksTest-anatomy|5": { "acc": 0.6148148148148148, "acc_stderr": 0.04203921040156279, "acc_norm": 0.6148148148148148, "acc_norm_stderr": 0.04203921040156279 }, "harness|hendrycksTest-astronomy|5": { "acc": 0.7105263157894737, "acc_stderr": 0.03690677986137283, "acc_norm": 0.7105263157894737, "acc_norm_stderr": 0.03690677986137283 }, "harness|hendrycksTest-business_ethics|5": { "acc": 0.62, "acc_stderr": 0.048783173121456316, "acc_norm": 0.62, "acc_norm_stderr": 0.048783173121456316 }, "harness|hendrycksTest-clinical_knowledge|5": { "acc": 0.7358490566037735, "acc_stderr": 0.027134291628741702, "acc_norm": 0.7358490566037735, "acc_norm_stderr": 0.027134291628741702 }, "harness|hendrycksTest-college_biology|5": { "acc": 0.75, "acc_stderr": 0.03621034121889507, "acc_norm": 0.75, "acc_norm_stderr": 0.03621034121889507 }, "harness|hendrycksTest-college_chemistry|5": { "acc": 0.46, "acc_stderr": 0.05009082659620332, "acc_norm": 0.46, "acc_norm_stderr": 0.05009082659620332 }, "harness|hendrycksTest-college_computer_science|5": { "acc": 0.56, "acc_stderr": 0.049888765156985884, "acc_norm": 0.56, "acc_norm_stderr": 0.049888765156985884 }, 
"harness|hendrycksTest-college_mathematics|5": { "acc": 0.33, "acc_stderr": 0.04725815626252604, "acc_norm": 0.33, "acc_norm_stderr": 0.04725815626252604 }, "harness|hendrycksTest-college_medicine|5": { "acc": 0.6763005780346821, "acc_stderr": 0.035676037996391706, "acc_norm": 0.6763005780346821, "acc_norm_stderr": 0.035676037996391706 }, "harness|hendrycksTest-college_physics|5": { "acc": 0.4019607843137255, "acc_stderr": 0.04878608714466996, "acc_norm": 0.4019607843137255, "acc_norm_stderr": 0.04878608714466996 }, "harness|hendrycksTest-computer_security|5": { "acc": 0.79, "acc_stderr": 0.04093601807403326, "acc_norm": 0.79, "acc_norm_stderr": 0.04093601807403326 }, "harness|hendrycksTest-conceptual_physics|5": { "acc": 0.574468085106383, "acc_stderr": 0.03232146916224468, "acc_norm": 0.574468085106383, "acc_norm_stderr": 0.03232146916224468 }, "harness|hendrycksTest-econometrics|5": { "acc": 0.5, "acc_stderr": 0.047036043419179864, "acc_norm": 0.5, "acc_norm_stderr": 0.047036043419179864 }, "harness|hendrycksTest-electrical_engineering|5": { "acc": 0.5379310344827586, "acc_stderr": 0.04154659671707548, "acc_norm": 0.5379310344827586, "acc_norm_stderr": 0.04154659671707548 }, "harness|hendrycksTest-elementary_mathematics|5": { "acc": 0.4021164021164021, "acc_stderr": 0.025253032554997692, "acc_norm": 0.4021164021164021, "acc_norm_stderr": 0.025253032554997692 }, "harness|hendrycksTest-formal_logic|5": { "acc": 0.48412698412698413, "acc_stderr": 0.04469881854072606, "acc_norm": 0.48412698412698413, "acc_norm_stderr": 0.04469881854072606 }, "harness|hendrycksTest-global_facts|5": { "acc": 0.32, "acc_stderr": 0.046882617226215034, "acc_norm": 0.32, "acc_norm_stderr": 0.046882617226215034 }, "harness|hendrycksTest-high_school_biology|5": { "acc": 0.7806451612903226, "acc_stderr": 0.023540799358723292, "acc_norm": 0.7806451612903226, "acc_norm_stderr": 0.023540799358723292 }, "harness|hendrycksTest-high_school_chemistry|5": { "acc": 0.5123152709359606, "acc_stderr": 
0.035169204442208966, "acc_norm": 0.5123152709359606, "acc_norm_stderr": 0.035169204442208966 }, "harness|hendrycksTest-high_school_computer_science|5": { "acc": 0.67, "acc_stderr": 0.047258156262526094, "acc_norm": 0.67, "acc_norm_stderr": 0.047258156262526094 }, "harness|hendrycksTest-high_school_european_history|5": { "acc": 0.7696969696969697, "acc_stderr": 0.0328766675860349, "acc_norm": 0.7696969696969697, "acc_norm_stderr": 0.0328766675860349 }, "harness|hendrycksTest-high_school_geography|5": { "acc": 0.797979797979798, "acc_stderr": 0.028606204289229876, "acc_norm": 0.797979797979798, "acc_norm_stderr": 0.028606204289229876 }, "harness|hendrycksTest-high_school_government_and_politics|5": { "acc": 0.8860103626943006, "acc_stderr": 0.022935144053919436, "acc_norm": 0.8860103626943006, "acc_norm_stderr": 0.022935144053919436 }, "harness|hendrycksTest-high_school_macroeconomics|5": { "acc": 0.6717948717948717, "acc_stderr": 0.023807633198657262, "acc_norm": 0.6717948717948717, "acc_norm_stderr": 0.023807633198657262 }, "harness|hendrycksTest-high_school_mathematics|5": { "acc": 0.34074074074074073, "acc_stderr": 0.02889774874113114, "acc_norm": 0.34074074074074073, "acc_norm_stderr": 0.02889774874113114 }, "harness|hendrycksTest-high_school_microeconomics|5": { "acc": 0.6764705882352942, "acc_stderr": 0.030388353551886793, "acc_norm": 0.6764705882352942, "acc_norm_stderr": 0.030388353551886793 }, "harness|hendrycksTest-high_school_physics|5": { "acc": 0.3509933774834437, "acc_stderr": 0.03896981964257375, "acc_norm": 0.3509933774834437, "acc_norm_stderr": 0.03896981964257375 }, "harness|hendrycksTest-high_school_psychology|5": { "acc": 0.8458715596330275, "acc_stderr": 0.015480826865374303, "acc_norm": 0.8458715596330275, "acc_norm_stderr": 0.015480826865374303 }, "harness|hendrycksTest-high_school_statistics|5": { "acc": 0.5185185185185185, "acc_stderr": 0.03407632093854051, "acc_norm": 0.5185185185185185, "acc_norm_stderr": 0.03407632093854051 }, 
"harness|hendrycksTest-high_school_us_history|5": { "acc": 0.8186274509803921, "acc_stderr": 0.027044621719474082, "acc_norm": 0.8186274509803921, "acc_norm_stderr": 0.027044621719474082 }, "harness|hendrycksTest-high_school_world_history|5": { "acc": 0.7805907172995781, "acc_stderr": 0.026939106581553945, "acc_norm": 0.7805907172995781, "acc_norm_stderr": 0.026939106581553945 }, "harness|hendrycksTest-human_aging|5": { "acc": 0.7085201793721974, "acc_stderr": 0.030500283176545843, "acc_norm": 0.7085201793721974, "acc_norm_stderr": 0.030500283176545843 }, "harness|hendrycksTest-human_sexuality|5": { "acc": 0.7862595419847328, "acc_stderr": 0.0359546161177469, "acc_norm": 0.7862595419847328, "acc_norm_stderr": 0.0359546161177469 }, "harness|hendrycksTest-international_law|5": { "acc": 0.768595041322314, "acc_stderr": 0.03849856098794088, "acc_norm": 0.768595041322314, "acc_norm_stderr": 0.03849856098794088 }, "harness|hendrycksTest-jurisprudence|5": { "acc": 0.7870370370370371, "acc_stderr": 0.039578354719809805, "acc_norm": 0.7870370370370371, "acc_norm_stderr": 0.039578354719809805 }, "harness|hendrycksTest-logical_fallacies|5": { "acc": 0.7730061349693251, "acc_stderr": 0.03291099578615769, "acc_norm": 0.7730061349693251, "acc_norm_stderr": 0.03291099578615769 }, "harness|hendrycksTest-machine_learning|5": { "acc": 0.4642857142857143, "acc_stderr": 0.04733667890053756, "acc_norm": 0.4642857142857143, "acc_norm_stderr": 0.04733667890053756 }, "harness|hendrycksTest-management|5": { "acc": 0.7766990291262136, "acc_stderr": 0.04123553189891431, "acc_norm": 0.7766990291262136, "acc_norm_stderr": 0.04123553189891431 }, "harness|hendrycksTest-marketing|5": { "acc": 0.8846153846153846, "acc_stderr": 0.020930193185179326, "acc_norm": 0.8846153846153846, "acc_norm_stderr": 0.020930193185179326 }, "harness|hendrycksTest-medical_genetics|5": { "acc": 0.72, "acc_stderr": 0.045126085985421276, "acc_norm": 0.72, "acc_norm_stderr": 0.045126085985421276 }, 
"harness|hendrycksTest-miscellaneous|5": { "acc": 0.8288633461047255, "acc_stderr": 0.013468201614066297, "acc_norm": 0.8288633461047255, "acc_norm_stderr": 0.013468201614066297 }, "harness|hendrycksTest-moral_disputes|5": { "acc": 0.7341040462427746, "acc_stderr": 0.023786203255508287, "acc_norm": 0.7341040462427746, "acc_norm_stderr": 0.023786203255508287 }, "harness|hendrycksTest-moral_scenarios|5": { "acc": 0.40782122905027934, "acc_stderr": 0.016435865260914746, "acc_norm": 0.40782122905027934, "acc_norm_stderr": 0.016435865260914746 }, "harness|hendrycksTest-nutrition|5": { "acc": 0.7254901960784313, "acc_stderr": 0.025553169991826524, "acc_norm": 0.7254901960784313, "acc_norm_stderr": 0.025553169991826524 }, "harness|hendrycksTest-philosophy|5": { "acc": 0.7202572347266881, "acc_stderr": 0.025494259350694912, "acc_norm": 0.7202572347266881, "acc_norm_stderr": 0.025494259350694912 }, "harness|hendrycksTest-prehistory|5": { "acc": 0.7407407407407407, "acc_stderr": 0.024383665531035457, "acc_norm": 0.7407407407407407, "acc_norm_stderr": 0.024383665531035457 }, "harness|hendrycksTest-professional_accounting|5": { "acc": 0.4858156028368794, "acc_stderr": 0.02981549448368206, "acc_norm": 0.4858156028368794, "acc_norm_stderr": 0.02981549448368206 }, "harness|hendrycksTest-professional_law|5": { "acc": 0.4667535853976532, "acc_stderr": 0.012741974333897226, "acc_norm": 0.4667535853976532, "acc_norm_stderr": 0.012741974333897226 }, "harness|hendrycksTest-professional_medicine|5": { "acc": 0.6691176470588235, "acc_stderr": 0.02858270975389845, "acc_norm": 0.6691176470588235, "acc_norm_stderr": 0.02858270975389845 }, "harness|hendrycksTest-professional_psychology|5": { "acc": 0.6666666666666666, "acc_stderr": 0.0190709855896875, "acc_norm": 0.6666666666666666, "acc_norm_stderr": 0.0190709855896875 }, "harness|hendrycksTest-public_relations|5": { "acc": 0.6545454545454545, "acc_stderr": 0.04554619617541054, "acc_norm": 0.6545454545454545, "acc_norm_stderr": 
0.04554619617541054 }, "harness|hendrycksTest-security_studies|5": { "acc": 0.7224489795918367, "acc_stderr": 0.028666857790274648, "acc_norm": 0.7224489795918367, "acc_norm_stderr": 0.028666857790274648 }, "harness|hendrycksTest-sociology|5": { "acc": 0.8606965174129353, "acc_stderr": 0.02448448716291397, "acc_norm": 0.8606965174129353, "acc_norm_stderr": 0.02448448716291397 }, "harness|hendrycksTest-us_foreign_policy|5": { "acc": 0.86, "acc_stderr": 0.0348735088019777, "acc_norm": 0.86, "acc_norm_stderr": 0.0348735088019777 }, "harness|hendrycksTest-virology|5": { "acc": 0.5481927710843374, "acc_stderr": 0.03874371556587953, "acc_norm": 0.5481927710843374, "acc_norm_stderr": 0.03874371556587953 }, "harness|hendrycksTest-world_religions|5": { "acc": 0.8245614035087719, "acc_stderr": 0.029170885500727665, "acc_norm": 0.8245614035087719, "acc_norm_stderr": 0.029170885500727665 }, "harness|truthfulqa:mc|0": { "mc1": 0.390452876376989, "mc1_stderr": 0.017078230743431448, "mc2": 0.5556599683913465, "mc2_stderr": 0.015308837108837361 }, "harness|winogrande|5": { "acc": 0.8216258879242304, "acc_stderr": 0.010759352014855946 }, "harness|gsm8k|5": { "acc": 0.6830932524639879, "acc_stderr": 0.012815868296721364 } } ``` ## Dataset Details ### Dataset Description <!-- Provide a longer summary of what this dataset is. --> - **Curated by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] ### Dataset Sources [optional] <!-- Provide the basic links for the dataset. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the dataset is intended to be used. --> ### Direct Use <!-- This section describes suitable use cases for the dataset. 
--> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. --> [More Information Needed] ## Dataset Structure <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> [More Information Needed] ## Dataset Creation ### Curation Rationale <!-- Motivation for the creation of this dataset. --> [More Information Needed] ### Source Data <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). --> #### Data Collection and Processing <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. --> [More Information Needed] #### Who are the source data producers? <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. --> [More Information Needed] ### Annotations [optional] <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. --> #### Annotation process <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. --> [More Information Needed] #### Who are the annotators? <!-- This section describes the people or systems who created the annotations. 
--> [More Information Needed] #### Personal and Sensitive Information <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. ## Citation [optional] <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Dataset Card Authors [optional] [More Information Needed] ## Dataset Card Contact [More Information Needed]
open-llm-leaderboard/details_Novocoders__Lotus-7B
[ "region:us" ]
2024-02-09T13:06:11+00:00
{"pretty_name": "Evaluation run of Novocoders/Lotus-7B", "dataset_summary": "Dataset automatically created during the evaluation run of model [Novocoders/Lotus-7B](https://huggingface.co/Novocoders/Lotus-7B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Novocoders__Lotus-7B\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2024-02-09T13:03:53.418083](https://huggingface.co/datasets/open-llm-leaderboard/details_Novocoders__Lotus-7B/blob/main/results_2024-02-09T13-03-53.418083.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6496423371761832,\n \"acc_stderr\": 0.03215279301715769,\n \"acc_norm\": 0.6501043548092335,\n \"acc_norm_stderr\": 0.03281746943202546,\n \"mc1\": 0.390452876376989,\n \"mc1_stderr\": 0.017078230743431448,\n \"mc2\": 0.5556599683913465,\n \"mc2_stderr\": 0.015308837108837361\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.6245733788395904,\n \"acc_stderr\": 0.014150631435111728,\n \"acc_norm\": 0.6646757679180887,\n \"acc_norm_stderr\": 0.013796182947785562\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6554471220872337,\n \"acc_stderr\": 0.0047425103547779025,\n \"acc_norm\": 0.8480382393945429,\n \"acc_norm_stderr\": 0.003582501596564544\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.37,\n \"acc_stderr\": 0.04852365870939099,\n \"acc_norm\": 0.37,\n \"acc_norm_stderr\": 0.04852365870939099\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6148148148148148,\n \"acc_stderr\": 0.04203921040156279,\n \"acc_norm\": 0.6148148148148148,\n \"acc_norm_stderr\": 0.04203921040156279\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.7105263157894737,\n \"acc_stderr\": 0.03690677986137283,\n \"acc_norm\": 0.7105263157894737,\n \"acc_norm_stderr\": 0.03690677986137283\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.62,\n \"acc_stderr\": 0.048783173121456316,\n \"acc_norm\": 0.62,\n \"acc_norm_stderr\": 0.048783173121456316\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.7358490566037735,\n \"acc_stderr\": 0.027134291628741702,\n \"acc_norm\": 0.7358490566037735,\n \"acc_norm_stderr\": 0.027134291628741702\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.75,\n \"acc_stderr\": 0.03621034121889507,\n \"acc_norm\": 0.75,\n \"acc_norm_stderr\": 0.03621034121889507\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.46,\n \"acc_stderr\": 
0.05009082659620332,\n \"acc_norm\": 0.46,\n \"acc_norm_stderr\": 0.05009082659620332\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.56,\n \"acc_stderr\": 0.049888765156985884,\n \"acc_norm\": 0.56,\n \"acc_norm_stderr\": 0.049888765156985884\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.33,\n \"acc_stderr\": 0.04725815626252604,\n \"acc_norm\": 0.33,\n \"acc_norm_stderr\": 0.04725815626252604\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6763005780346821,\n \"acc_stderr\": 0.035676037996391706,\n \"acc_norm\": 0.6763005780346821,\n \"acc_norm_stderr\": 0.035676037996391706\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.4019607843137255,\n \"acc_stderr\": 0.04878608714466996,\n \"acc_norm\": 0.4019607843137255,\n \"acc_norm_stderr\": 0.04878608714466996\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.79,\n \"acc_stderr\": 0.04093601807403326,\n \"acc_norm\": 0.79,\n \"acc_norm_stderr\": 0.04093601807403326\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.574468085106383,\n \"acc_stderr\": 0.03232146916224468,\n \"acc_norm\": 0.574468085106383,\n \"acc_norm_stderr\": 0.03232146916224468\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.5,\n \"acc_stderr\": 0.047036043419179864,\n \"acc_norm\": 0.5,\n \"acc_norm_stderr\": 0.047036043419179864\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.5379310344827586,\n \"acc_stderr\": 0.04154659671707548,\n \"acc_norm\": 0.5379310344827586,\n \"acc_norm_stderr\": 0.04154659671707548\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.4021164021164021,\n \"acc_stderr\": 0.025253032554997692,\n \"acc_norm\": 0.4021164021164021,\n \"acc_norm_stderr\": 0.025253032554997692\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.48412698412698413,\n \"acc_stderr\": 0.04469881854072606,\n \"acc_norm\": 0.48412698412698413,\n 
\"acc_norm_stderr\": 0.04469881854072606\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.32,\n \"acc_stderr\": 0.046882617226215034,\n \"acc_norm\": 0.32,\n \"acc_norm_stderr\": 0.046882617226215034\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7806451612903226,\n \"acc_stderr\": 0.023540799358723292,\n \"acc_norm\": 0.7806451612903226,\n \"acc_norm_stderr\": 0.023540799358723292\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.5123152709359606,\n \"acc_stderr\": 0.035169204442208966,\n \"acc_norm\": 0.5123152709359606,\n \"acc_norm_stderr\": 0.035169204442208966\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.67,\n \"acc_stderr\": 0.047258156262526094,\n \"acc_norm\": 0.67,\n \"acc_norm_stderr\": 0.047258156262526094\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.7696969696969697,\n \"acc_stderr\": 0.0328766675860349,\n \"acc_norm\": 0.7696969696969697,\n \"acc_norm_stderr\": 0.0328766675860349\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.797979797979798,\n \"acc_stderr\": 0.028606204289229876,\n \"acc_norm\": 0.797979797979798,\n \"acc_norm_stderr\": 0.028606204289229876\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.8860103626943006,\n \"acc_stderr\": 0.022935144053919436,\n \"acc_norm\": 0.8860103626943006,\n \"acc_norm_stderr\": 0.022935144053919436\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.6717948717948717,\n \"acc_stderr\": 0.023807633198657262,\n \"acc_norm\": 0.6717948717948717,\n \"acc_norm_stderr\": 0.023807633198657262\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.34074074074074073,\n \"acc_stderr\": 0.02889774874113114,\n \"acc_norm\": 0.34074074074074073,\n \"acc_norm_stderr\": 0.02889774874113114\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.6764705882352942,\n 
\"acc_stderr\": 0.030388353551886793,\n \"acc_norm\": 0.6764705882352942,\n \"acc_norm_stderr\": 0.030388353551886793\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.3509933774834437,\n \"acc_stderr\": 0.03896981964257375,\n \"acc_norm\": 0.3509933774834437,\n \"acc_norm_stderr\": 0.03896981964257375\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.8458715596330275,\n \"acc_stderr\": 0.015480826865374303,\n \"acc_norm\": 0.8458715596330275,\n \"acc_norm_stderr\": 0.015480826865374303\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.5185185185185185,\n \"acc_stderr\": 0.03407632093854051,\n \"acc_norm\": 0.5185185185185185,\n \"acc_norm_stderr\": 0.03407632093854051\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.8186274509803921,\n \"acc_stderr\": 0.027044621719474082,\n \"acc_norm\": 0.8186274509803921,\n \"acc_norm_stderr\": 0.027044621719474082\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.7805907172995781,\n \"acc_stderr\": 0.026939106581553945,\n \"acc_norm\": 0.7805907172995781,\n \"acc_norm_stderr\": 0.026939106581553945\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.7085201793721974,\n \"acc_stderr\": 0.030500283176545843,\n \"acc_norm\": 0.7085201793721974,\n \"acc_norm_stderr\": 0.030500283176545843\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.7862595419847328,\n \"acc_stderr\": 0.0359546161177469,\n \"acc_norm\": 0.7862595419847328,\n \"acc_norm_stderr\": 0.0359546161177469\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.768595041322314,\n \"acc_stderr\": 0.03849856098794088,\n \"acc_norm\": 0.768595041322314,\n \"acc_norm_stderr\": 0.03849856098794088\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7870370370370371,\n \"acc_stderr\": 0.039578354719809805,\n \"acc_norm\": 0.7870370370370371,\n \"acc_norm_stderr\": 0.039578354719809805\n },\n 
\"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.7730061349693251,\n \"acc_stderr\": 0.03291099578615769,\n \"acc_norm\": 0.7730061349693251,\n \"acc_norm_stderr\": 0.03291099578615769\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.4642857142857143,\n \"acc_stderr\": 0.04733667890053756,\n \"acc_norm\": 0.4642857142857143,\n \"acc_norm_stderr\": 0.04733667890053756\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.7766990291262136,\n \"acc_stderr\": 0.04123553189891431,\n \"acc_norm\": 0.7766990291262136,\n \"acc_norm_stderr\": 0.04123553189891431\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8846153846153846,\n \"acc_stderr\": 0.020930193185179326,\n \"acc_norm\": 0.8846153846153846,\n \"acc_norm_stderr\": 0.020930193185179326\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.72,\n \"acc_stderr\": 0.045126085985421276,\n \"acc_norm\": 0.72,\n \"acc_norm_stderr\": 0.045126085985421276\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8288633461047255,\n \"acc_stderr\": 0.013468201614066297,\n \"acc_norm\": 0.8288633461047255,\n \"acc_norm_stderr\": 0.013468201614066297\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.7341040462427746,\n \"acc_stderr\": 0.023786203255508287,\n \"acc_norm\": 0.7341040462427746,\n \"acc_norm_stderr\": 0.023786203255508287\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.40782122905027934,\n \"acc_stderr\": 0.016435865260914746,\n \"acc_norm\": 0.40782122905027934,\n \"acc_norm_stderr\": 0.016435865260914746\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.7254901960784313,\n \"acc_stderr\": 0.025553169991826524,\n \"acc_norm\": 0.7254901960784313,\n \"acc_norm_stderr\": 0.025553169991826524\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7202572347266881,\n \"acc_stderr\": 0.025494259350694912,\n \"acc_norm\": 0.7202572347266881,\n \"acc_norm_stderr\": 0.025494259350694912\n },\n 
\"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.7407407407407407,\n \"acc_stderr\": 0.024383665531035457,\n \"acc_norm\": 0.7407407407407407,\n \"acc_norm_stderr\": 0.024383665531035457\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.4858156028368794,\n \"acc_stderr\": 0.02981549448368206,\n \"acc_norm\": 0.4858156028368794,\n \"acc_norm_stderr\": 0.02981549448368206\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4667535853976532,\n \"acc_stderr\": 0.012741974333897226,\n \"acc_norm\": 0.4667535853976532,\n \"acc_norm_stderr\": 0.012741974333897226\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.6691176470588235,\n \"acc_stderr\": 0.02858270975389845,\n \"acc_norm\": 0.6691176470588235,\n \"acc_norm_stderr\": 0.02858270975389845\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.6666666666666666,\n \"acc_stderr\": 0.0190709855896875,\n \"acc_norm\": 0.6666666666666666,\n \"acc_norm_stderr\": 0.0190709855896875\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6545454545454545,\n \"acc_stderr\": 0.04554619617541054,\n \"acc_norm\": 0.6545454545454545,\n \"acc_norm_stderr\": 0.04554619617541054\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.7224489795918367,\n \"acc_stderr\": 0.028666857790274648,\n \"acc_norm\": 0.7224489795918367,\n \"acc_norm_stderr\": 0.028666857790274648\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8606965174129353,\n \"acc_stderr\": 0.02448448716291397,\n \"acc_norm\": 0.8606965174129353,\n \"acc_norm_stderr\": 0.02448448716291397\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.86,\n \"acc_stderr\": 0.0348735088019777,\n \"acc_norm\": 0.86,\n \"acc_norm_stderr\": 0.0348735088019777\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5481927710843374,\n \"acc_stderr\": 0.03874371556587953,\n \"acc_norm\": 0.5481927710843374,\n \"acc_norm_stderr\": 
0.03874371556587953\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.8245614035087719,\n \"acc_stderr\": 0.029170885500727665,\n \"acc_norm\": 0.8245614035087719,\n \"acc_norm_stderr\": 0.029170885500727665\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.390452876376989,\n \"mc1_stderr\": 0.017078230743431448,\n \"mc2\": 0.5556599683913465,\n \"mc2_stderr\": 0.015308837108837361\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.8216258879242304,\n \"acc_stderr\": 0.010759352014855946\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.6830932524639879,\n \"acc_stderr\": 0.012815868296721364\n }\n}\n```", "repo_url": "https://huggingface.co/Novocoders/Lotus-7B", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2024_02_09T13_03_53.418083", "path": ["**/details_harness|arc:challenge|25_2024-02-09T13-03-53.418083.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2024-02-09T13-03-53.418083.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2024_02_09T13_03_53.418083", "path": ["**/details_harness|gsm8k|5_2024-02-09T13-03-53.418083.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2024-02-09T13-03-53.418083.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2024_02_09T13_03_53.418083", "path": ["**/details_harness|hellaswag|10_2024-02-09T13-03-53.418083.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2024-02-09T13-03-53.418083.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2024_02_09T13_03_53.418083", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-09T13-03-53.418083.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-02-09T13-03-53.418083.parquet", 
"**/details_harness|hendrycksTest-astronomy|5_2024-02-09T13-03-53.418083.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-02-09T13-03-53.418083.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-09T13-03-53.418083.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-02-09T13-03-53.418083.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-02-09T13-03-53.418083.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-02-09T13-03-53.418083.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-02-09T13-03-53.418083.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-02-09T13-03-53.418083.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-02-09T13-03-53.418083.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-02-09T13-03-53.418083.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-09T13-03-53.418083.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-02-09T13-03-53.418083.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-09T13-03-53.418083.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-09T13-03-53.418083.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-02-09T13-03-53.418083.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-02-09T13-03-53.418083.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-02-09T13-03-53.418083.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-09T13-03-53.418083.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-09T13-03-53.418083.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-09T13-03-53.418083.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-02-09T13-03-53.418083.parquet", 
"**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-09T13-03-53.418083.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-09T13-03-53.418083.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-09T13-03-53.418083.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-09T13-03-53.418083.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-02-09T13-03-53.418083.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-09T13-03-53.418083.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-09T13-03-53.418083.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-09T13-03-53.418083.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-09T13-03-53.418083.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-02-09T13-03-53.418083.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-02-09T13-03-53.418083.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-02-09T13-03-53.418083.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-02-09T13-03-53.418083.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-09T13-03-53.418083.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-02-09T13-03-53.418083.parquet", "**/details_harness|hendrycksTest-management|5_2024-02-09T13-03-53.418083.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-02-09T13-03-53.418083.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-02-09T13-03-53.418083.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-02-09T13-03-53.418083.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-02-09T13-03-53.418083.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-09T13-03-53.418083.parquet", 
"**/details_harness|hendrycksTest-nutrition|5_2024-02-09T13-03-53.418083.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-02-09T13-03-53.418083.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-02-09T13-03-53.418083.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-02-09T13-03-53.418083.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-02-09T13-03-53.418083.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-02-09T13-03-53.418083.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-02-09T13-03-53.418083.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-02-09T13-03-53.418083.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-02-09T13-03-53.418083.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-02-09T13-03-53.418083.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-09T13-03-53.418083.parquet", "**/details_harness|hendrycksTest-virology|5_2024-02-09T13-03-53.418083.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-02-09T13-03-53.418083.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-09T13-03-53.418083.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-02-09T13-03-53.418083.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-02-09T13-03-53.418083.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-02-09T13-03-53.418083.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-09T13-03-53.418083.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-02-09T13-03-53.418083.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-02-09T13-03-53.418083.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-02-09T13-03-53.418083.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-02-09T13-03-53.418083.parquet", 
"**/details_harness|hendrycksTest-college_medicine|5_2024-02-09T13-03-53.418083.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-02-09T13-03-53.418083.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-02-09T13-03-53.418083.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-09T13-03-53.418083.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-02-09T13-03-53.418083.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-09T13-03-53.418083.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-09T13-03-53.418083.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-02-09T13-03-53.418083.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-02-09T13-03-53.418083.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-02-09T13-03-53.418083.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-09T13-03-53.418083.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-09T13-03-53.418083.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-09T13-03-53.418083.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-02-09T13-03-53.418083.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-09T13-03-53.418083.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-09T13-03-53.418083.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-09T13-03-53.418083.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-09T13-03-53.418083.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-02-09T13-03-53.418083.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-09T13-03-53.418083.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-09T13-03-53.418083.parquet", 
"**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-09T13-03-53.418083.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-09T13-03-53.418083.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-02-09T13-03-53.418083.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-02-09T13-03-53.418083.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-02-09T13-03-53.418083.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-02-09T13-03-53.418083.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-09T13-03-53.418083.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-02-09T13-03-53.418083.parquet", "**/details_harness|hendrycksTest-management|5_2024-02-09T13-03-53.418083.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-02-09T13-03-53.418083.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-02-09T13-03-53.418083.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-02-09T13-03-53.418083.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-02-09T13-03-53.418083.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-09T13-03-53.418083.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-02-09T13-03-53.418083.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-02-09T13-03-53.418083.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-02-09T13-03-53.418083.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-02-09T13-03-53.418083.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-02-09T13-03-53.418083.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-02-09T13-03-53.418083.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-02-09T13-03-53.418083.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-02-09T13-03-53.418083.parquet", 
"**/details_harness|hendrycksTest-security_studies|5_2024-02-09T13-03-53.418083.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-02-09T13-03-53.418083.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-09T13-03-53.418083.parquet", "**/details_harness|hendrycksTest-virology|5_2024-02-09T13-03-53.418083.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-02-09T13-03-53.418083.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2024_02_09T13_03_53.418083", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-09T13-03-53.418083.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-09T13-03-53.418083.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2024_02_09T13_03_53.418083", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-02-09T13-03-53.418083.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-02-09T13-03-53.418083.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2024_02_09T13_03_53.418083", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-02-09T13-03-53.418083.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-02-09T13-03-53.418083.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2024_02_09T13_03_53.418083", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-02-09T13-03-53.418083.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-02-09T13-03-53.418083.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2024_02_09T13_03_53.418083", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-09T13-03-53.418083.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-09T13-03-53.418083.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2024_02_09T13_03_53.418083", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-02-09T13-03-53.418083.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-02-09T13-03-53.418083.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2024_02_09T13_03_53.418083", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-02-09T13-03-53.418083.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-02-09T13-03-53.418083.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2024_02_09T13_03_53.418083", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-02-09T13-03-53.418083.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-02-09T13-03-53.418083.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2024_02_09T13_03_53.418083", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-02-09T13-03-53.418083.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-02-09T13-03-53.418083.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2024_02_09T13_03_53.418083", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-02-09T13-03-53.418083.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-02-09T13-03-53.418083.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2024_02_09T13_03_53.418083", "path": 
["**/details_harness|hendrycksTest-college_physics|5_2024-02-09T13-03-53.418083.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-02-09T13-03-53.418083.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2024_02_09T13_03_53.418083", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-02-09T13-03-53.418083.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-02-09T13-03-53.418083.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2024_02_09T13_03_53.418083", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-09T13-03-53.418083.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-09T13-03-53.418083.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2024_02_09T13_03_53.418083", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-02-09T13-03-53.418083.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-02-09T13-03-53.418083.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2024_02_09T13_03_53.418083", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-09T13-03-53.418083.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-09T13-03-53.418083.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2024_02_09T13_03_53.418083", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-09T13-03-53.418083.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-09T13-03-53.418083.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", 
"data_files": [{"split": "2024_02_09T13_03_53.418083", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-02-09T13-03-53.418083.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-02-09T13-03-53.418083.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2024_02_09T13_03_53.418083", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-02-09T13-03-53.418083.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-02-09T13-03-53.418083.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2024_02_09T13_03_53.418083", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-02-09T13-03-53.418083.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-02-09T13-03-53.418083.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2024_02_09T13_03_53.418083", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-09T13-03-53.418083.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-09T13-03-53.418083.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2024_02_09T13_03_53.418083", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-09T13-03-53.418083.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-09T13-03-53.418083.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2024_02_09T13_03_53.418083", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-09T13-03-53.418083.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-09T13-03-53.418083.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2024_02_09T13_03_53.418083", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-02-09T13-03-53.418083.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-02-09T13-03-53.418083.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2024_02_09T13_03_53.418083", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-09T13-03-53.418083.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-09T13-03-53.418083.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2024_02_09T13_03_53.418083", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-09T13-03-53.418083.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-09T13-03-53.418083.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2024_02_09T13_03_53.418083", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-09T13-03-53.418083.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-09T13-03-53.418083.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2024_02_09T13_03_53.418083", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-09T13-03-53.418083.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-09T13-03-53.418083.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2024_02_09T13_03_53.418083", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-02-09T13-03-53.418083.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-02-09T13-03-53.418083.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2024_02_09T13_03_53.418083", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-09T13-03-53.418083.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-09T13-03-53.418083.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2024_02_09T13_03_53.418083", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-09T13-03-53.418083.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-09T13-03-53.418083.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2024_02_09T13_03_53.418083", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-09T13-03-53.418083.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-09T13-03-53.418083.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2024_02_09T13_03_53.418083", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-09T13-03-53.418083.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-09T13-03-53.418083.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2024_02_09T13_03_53.418083", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-02-09T13-03-53.418083.parquet"]}, {"split": "latest", 
"path": ["**/details_harness|hendrycksTest-human_aging|5_2024-02-09T13-03-53.418083.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2024_02_09T13_03_53.418083", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-02-09T13-03-53.418083.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-02-09T13-03-53.418083.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2024_02_09T13_03_53.418083", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-02-09T13-03-53.418083.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-02-09T13-03-53.418083.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2024_02_09T13_03_53.418083", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-02-09T13-03-53.418083.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-02-09T13-03-53.418083.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2024_02_09T13_03_53.418083", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-09T13-03-53.418083.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-09T13-03-53.418083.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2024_02_09T13_03_53.418083", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-02-09T13-03-53.418083.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-02-09T13-03-53.418083.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2024_02_09T13_03_53.418083", "path": ["**/details_harness|hendrycksTest-management|5_2024-02-09T13-03-53.418083.parquet"]}, 
{"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2024-02-09T13-03-53.418083.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2024_02_09T13_03_53.418083", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-02-09T13-03-53.418083.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-02-09T13-03-53.418083.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2024_02_09T13_03_53.418083", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-02-09T13-03-53.418083.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-02-09T13-03-53.418083.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2024_02_09T13_03_53.418083", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-02-09T13-03-53.418083.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-02-09T13-03-53.418083.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2024_02_09T13_03_53.418083", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-02-09T13-03-53.418083.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-02-09T13-03-53.418083.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2024_02_09T13_03_53.418083", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-09T13-03-53.418083.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-09T13-03-53.418083.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2024_02_09T13_03_53.418083", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-02-09T13-03-53.418083.parquet"]}, {"split": "latest", 
"path": ["**/details_harness|hendrycksTest-nutrition|5_2024-02-09T13-03-53.418083.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2024_02_09T13_03_53.418083", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-02-09T13-03-53.418083.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-02-09T13-03-53.418083.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2024_02_09T13_03_53.418083", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-02-09T13-03-53.418083.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-02-09T13-03-53.418083.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2024_02_09T13_03_53.418083", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-02-09T13-03-53.418083.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-02-09T13-03-53.418083.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2024_02_09T13_03_53.418083", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-02-09T13-03-53.418083.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-02-09T13-03-53.418083.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2024_02_09T13_03_53.418083", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-02-09T13-03-53.418083.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-02-09T13-03-53.418083.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2024_02_09T13_03_53.418083", "path": 
["**/details_harness|hendrycksTest-professional_psychology|5_2024-02-09T13-03-53.418083.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-02-09T13-03-53.418083.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2024_02_09T13_03_53.418083", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-02-09T13-03-53.418083.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-02-09T13-03-53.418083.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2024_02_09T13_03_53.418083", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-02-09T13-03-53.418083.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-02-09T13-03-53.418083.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2024_02_09T13_03_53.418083", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-02-09T13-03-53.418083.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-02-09T13-03-53.418083.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2024_02_09T13_03_53.418083", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-09T13-03-53.418083.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-09T13-03-53.418083.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2024_02_09T13_03_53.418083", "path": ["**/details_harness|hendrycksTest-virology|5_2024-02-09T13-03-53.418083.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2024-02-09T13-03-53.418083.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2024_02_09T13_03_53.418083", 
"path": ["**/details_harness|hendrycksTest-world_religions|5_2024-02-09T13-03-53.418083.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-02-09T13-03-53.418083.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2024_02_09T13_03_53.418083", "path": ["**/details_harness|truthfulqa:mc|0_2024-02-09T13-03-53.418083.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2024-02-09T13-03-53.418083.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2024_02_09T13_03_53.418083", "path": ["**/details_harness|winogrande|5_2024-02-09T13-03-53.418083.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2024-02-09T13-03-53.418083.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2024_02_09T13_03_53.418083", "path": ["results_2024-02-09T13-03-53.418083.parquet"]}, {"split": "latest", "path": ["results_2024-02-09T13-03-53.418083.parquet"]}]}]}
2024-02-09T13:06:33+00:00
[]
[]
TAGS #region-us
# Dataset Card for Evaluation run of Novocoders/Lotus-7B Dataset automatically created during the evaluation run of model Novocoders/Lotus-7B on the Open LLM Leaderboard. The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard). To load the details from a run, you can for instance do the following: ## Latest results These are the latest results from run 2024-02-09T13:03:53.418083 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval): ## Dataset Details ### Dataset Description - Curated by: - Funded by [optional]: - Shared by [optional]: - Language(s) (NLP): - License: ### Dataset Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Out-of-Scope Use ## Dataset Structure ## Dataset Creation ### Curation Rationale ### Source Data #### Data Collection and Processing #### Who are the source data producers? ### Annotations [optional] #### Annotation process #### Who are the annotators? #### Personal and Sensitive Information ## Bias, Risks, and Limitations ### Recommendations Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Dataset Card Authors [optional] ## Dataset Card Contact
[ "# Dataset Card for Evaluation run of Novocoders/Lotus-7B\n\n\n\nDataset automatically created during the evaluation run of model Novocoders/Lotus-7B on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The \"train\" split always points to the latest results.\n\nAn additional configuration \"results\" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2024-02-09T13:03:53.418083 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. 
More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Evaluation run of Novocoders/Lotus-7B\n\n\n\nDataset automatically created during the evaluation run of model Novocoders/Lotus-7B on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The \"train\" split always points to the latest results.\n\nAn additional configuration \"results\" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2024-02-09T13:03:53.418083 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. 
More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
6a0c013c366972b643e39829718b7fc8de022fea
# Dataset Card for Evaluation run of BarraHome/rezephyr-dpo <!-- Provide a quick summary of the dataset. --> Dataset automatically created during the evaluation run of model [BarraHome/rezephyr-dpo](https://huggingface.co/BarraHome/rezephyr-dpo) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)). To load the details from a run, you can for instance do the following: ```python from datasets import load_dataset data = load_dataset("open-llm-leaderboard/details_BarraHome__rezephyr-dpo", "harness_winogrande_5", split="train") ``` ## Latest results These are the [latest results from run 2024-02-09T13:18:07.445187](https://huggingface.co/datasets/open-llm-leaderboard/details_BarraHome__rezephyr-dpo/blob/main/results_2024-02-09T13-18-07.445187.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the "latest" split for each eval): ```python { "all": { "acc": 0.6025963080419425, "acc_stderr": 0.0331436677667824, "acc_norm": 0.6085579671532932, "acc_norm_stderr": 0.03382908262424393, "mc1": 0.29008567931456547, "mc1_stderr": 0.01588623687420952, "mc2": 0.44316239933938906, "mc2_stderr": 0.014631197353059351 }, "harness|arc:challenge|25": { "acc": 0.5358361774744027, "acc_stderr": 0.01457381366473572, "acc_norm": 0.575938566552901, "acc_norm_stderr": 0.014441889627464398 }, "harness|hellaswag|10": { "acc": 0.616211909978092, "acc_stderr": 0.004853134271547766, "acc_norm": 0.8174666401115316, "acc_norm_stderr": 0.0038549403270910264 }, "harness|hendrycksTest-abstract_algebra|5": { "acc": 0.31, "acc_stderr": 0.04648231987117316, "acc_norm": 0.31, "acc_norm_stderr": 0.04648231987117316 }, "harness|hendrycksTest-anatomy|5": { "acc": 0.5777777777777777, "acc_stderr": 0.04266763404099582, "acc_norm": 0.5777777777777777, "acc_norm_stderr": 0.04266763404099582 }, "harness|hendrycksTest-astronomy|5": { "acc": 0.6052631578947368, "acc_stderr": 0.039777499346220734, "acc_norm": 0.6052631578947368, "acc_norm_stderr": 0.039777499346220734 }, "harness|hendrycksTest-business_ethics|5": { "acc": 0.57, "acc_stderr": 0.049756985195624284, "acc_norm": 0.57, "acc_norm_stderr": 0.049756985195624284 }, "harness|hendrycksTest-clinical_knowledge|5": { "acc": 0.6830188679245283, "acc_stderr": 0.028637235639800893, "acc_norm": 0.6830188679245283, "acc_norm_stderr": 0.028637235639800893 }, "harness|hendrycksTest-college_biology|5": { "acc": 0.7222222222222222, "acc_stderr": 0.037455547914624555, "acc_norm": 0.7222222222222222, "acc_norm_stderr": 0.037455547914624555 }, "harness|hendrycksTest-college_chemistry|5": { "acc": 0.45, "acc_stderr": 0.05, "acc_norm": 0.45, "acc_norm_stderr": 0.05 }, "harness|hendrycksTest-college_computer_science|5": { "acc": 0.53, "acc_stderr": 0.05016135580465919, "acc_norm": 0.53, "acc_norm_stderr": 0.05016135580465919 }, 
"harness|hendrycksTest-college_mathematics|5": { "acc": 0.34, "acc_stderr": 0.04760952285695235, "acc_norm": 0.34, "acc_norm_stderr": 0.04760952285695235 }, "harness|hendrycksTest-college_medicine|5": { "acc": 0.6242774566473989, "acc_stderr": 0.036928207672648664, "acc_norm": 0.6242774566473989, "acc_norm_stderr": 0.036928207672648664 }, "harness|hendrycksTest-college_physics|5": { "acc": 0.38235294117647056, "acc_stderr": 0.04835503696107223, "acc_norm": 0.38235294117647056, "acc_norm_stderr": 0.04835503696107223 }, "harness|hendrycksTest-computer_security|5": { "acc": 0.73, "acc_stderr": 0.04461960433384739, "acc_norm": 0.73, "acc_norm_stderr": 0.04461960433384739 }, "harness|hendrycksTest-conceptual_physics|5": { "acc": 0.5404255319148936, "acc_stderr": 0.03257901482099835, "acc_norm": 0.5404255319148936, "acc_norm_stderr": 0.03257901482099835 }, "harness|hendrycksTest-econometrics|5": { "acc": 0.42105263157894735, "acc_stderr": 0.046446020912223177, "acc_norm": 0.42105263157894735, "acc_norm_stderr": 0.046446020912223177 }, "harness|hendrycksTest-electrical_engineering|5": { "acc": 0.5172413793103449, "acc_stderr": 0.04164188720169375, "acc_norm": 0.5172413793103449, "acc_norm_stderr": 0.04164188720169375 }, "harness|hendrycksTest-elementary_mathematics|5": { "acc": 0.40476190476190477, "acc_stderr": 0.025279850397404897, "acc_norm": 0.40476190476190477, "acc_norm_stderr": 0.025279850397404897 }, "harness|hendrycksTest-formal_logic|5": { "acc": 0.40476190476190477, "acc_stderr": 0.04390259265377562, "acc_norm": 0.40476190476190477, "acc_norm_stderr": 0.04390259265377562 }, "harness|hendrycksTest-global_facts|5": { "acc": 0.34, "acc_stderr": 0.04760952285695235, "acc_norm": 0.34, "acc_norm_stderr": 0.04760952285695235 }, "harness|hendrycksTest-high_school_biology|5": { "acc": 0.7580645161290323, "acc_stderr": 0.02436259969303108, "acc_norm": 0.7580645161290323, "acc_norm_stderr": 0.02436259969303108 }, "harness|hendrycksTest-high_school_chemistry|5": { "acc": 
0.4975369458128079, "acc_stderr": 0.03517945038691063, "acc_norm": 0.4975369458128079, "acc_norm_stderr": 0.03517945038691063 }, "harness|hendrycksTest-high_school_computer_science|5": { "acc": 0.64, "acc_stderr": 0.048241815132442176, "acc_norm": 0.64, "acc_norm_stderr": 0.048241815132442176 }, "harness|hendrycksTest-high_school_european_history|5": { "acc": 0.7454545454545455, "acc_stderr": 0.03401506715249039, "acc_norm": 0.7454545454545455, "acc_norm_stderr": 0.03401506715249039 }, "harness|hendrycksTest-high_school_geography|5": { "acc": 0.7727272727272727, "acc_stderr": 0.02985751567338642, "acc_norm": 0.7727272727272727, "acc_norm_stderr": 0.02985751567338642 }, "harness|hendrycksTest-high_school_government_and_politics|5": { "acc": 0.7979274611398963, "acc_stderr": 0.02897908979429673, "acc_norm": 0.7979274611398963, "acc_norm_stderr": 0.02897908979429673 }, "harness|hendrycksTest-high_school_macroeconomics|5": { "acc": 0.5743589743589743, "acc_stderr": 0.025069094387296525, "acc_norm": 0.5743589743589743, "acc_norm_stderr": 0.025069094387296525 }, "harness|hendrycksTest-high_school_mathematics|5": { "acc": 0.34074074074074073, "acc_stderr": 0.028897748741131147, "acc_norm": 0.34074074074074073, "acc_norm_stderr": 0.028897748741131147 }, "harness|hendrycksTest-high_school_microeconomics|5": { "acc": 0.6176470588235294, "acc_stderr": 0.031566630992154156, "acc_norm": 0.6176470588235294, "acc_norm_stderr": 0.031566630992154156 }, "harness|hendrycksTest-high_school_physics|5": { "acc": 0.33774834437086093, "acc_stderr": 0.0386155754625517, "acc_norm": 0.33774834437086093, "acc_norm_stderr": 0.0386155754625517 }, "harness|hendrycksTest-high_school_psychology|5": { "acc": 0.7889908256880734, "acc_stderr": 0.01749392240411265, "acc_norm": 0.7889908256880734, "acc_norm_stderr": 0.01749392240411265 }, "harness|hendrycksTest-high_school_statistics|5": { "acc": 0.47685185185185186, "acc_stderr": 0.03406315360711507, "acc_norm": 0.47685185185185186, "acc_norm_stderr": 
0.03406315360711507 }, "harness|hendrycksTest-high_school_us_history|5": { "acc": 0.7696078431372549, "acc_stderr": 0.029554292605695066, "acc_norm": 0.7696078431372549, "acc_norm_stderr": 0.029554292605695066 }, "harness|hendrycksTest-high_school_world_history|5": { "acc": 0.7383966244725738, "acc_stderr": 0.028609516716994934, "acc_norm": 0.7383966244725738, "acc_norm_stderr": 0.028609516716994934 }, "harness|hendrycksTest-human_aging|5": { "acc": 0.6367713004484304, "acc_stderr": 0.032277904428505, "acc_norm": 0.6367713004484304, "acc_norm_stderr": 0.032277904428505 }, "harness|hendrycksTest-human_sexuality|5": { "acc": 0.7022900763358778, "acc_stderr": 0.040103589424622034, "acc_norm": 0.7022900763358778, "acc_norm_stderr": 0.040103589424622034 }, "harness|hendrycksTest-international_law|5": { "acc": 0.7933884297520661, "acc_stderr": 0.03695980128098823, "acc_norm": 0.7933884297520661, "acc_norm_stderr": 0.03695980128098823 }, "harness|hendrycksTest-jurisprudence|5": { "acc": 0.7129629629629629, "acc_stderr": 0.043733130409147614, "acc_norm": 0.7129629629629629, "acc_norm_stderr": 0.043733130409147614 }, "harness|hendrycksTest-logical_fallacies|5": { "acc": 0.7055214723926381, "acc_stderr": 0.03581165790474082, "acc_norm": 0.7055214723926381, "acc_norm_stderr": 0.03581165790474082 }, "harness|hendrycksTest-machine_learning|5": { "acc": 0.42857142857142855, "acc_stderr": 0.04697113923010212, "acc_norm": 0.42857142857142855, "acc_norm_stderr": 0.04697113923010212 }, "harness|hendrycksTest-management|5": { "acc": 0.7572815533980582, "acc_stderr": 0.04245022486384495, "acc_norm": 0.7572815533980582, "acc_norm_stderr": 0.04245022486384495 }, "harness|hendrycksTest-marketing|5": { "acc": 0.8846153846153846, "acc_stderr": 0.020930193185179333, "acc_norm": 0.8846153846153846, "acc_norm_stderr": 0.020930193185179333 }, "harness|hendrycksTest-medical_genetics|5": { "acc": 0.69, "acc_stderr": 0.04648231987117316, "acc_norm": 0.69, "acc_norm_stderr": 0.04648231987117316 }, 
"harness|hendrycksTest-miscellaneous|5": { "acc": 0.7943805874840357, "acc_stderr": 0.01445250045678583, "acc_norm": 0.7943805874840357, "acc_norm_stderr": 0.01445250045678583 }, "harness|hendrycksTest-moral_disputes|5": { "acc": 0.684971098265896, "acc_stderr": 0.025009313790069713, "acc_norm": 0.684971098265896, "acc_norm_stderr": 0.025009313790069713 }, "harness|hendrycksTest-moral_scenarios|5": { "acc": 0.3217877094972067, "acc_stderr": 0.015624236160792582, "acc_norm": 0.3217877094972067, "acc_norm_stderr": 0.015624236160792582 }, "harness|hendrycksTest-nutrition|5": { "acc": 0.6862745098039216, "acc_stderr": 0.02656892101545715, "acc_norm": 0.6862745098039216, "acc_norm_stderr": 0.02656892101545715 }, "harness|hendrycksTest-philosophy|5": { "acc": 0.6945337620578779, "acc_stderr": 0.02616058445014045, "acc_norm": 0.6945337620578779, "acc_norm_stderr": 0.02616058445014045 }, "harness|hendrycksTest-prehistory|5": { "acc": 0.6604938271604939, "acc_stderr": 0.026348564412011624, "acc_norm": 0.6604938271604939, "acc_norm_stderr": 0.026348564412011624 }, "harness|hendrycksTest-professional_accounting|5": { "acc": 0.450354609929078, "acc_stderr": 0.029680105565029036, "acc_norm": 0.450354609929078, "acc_norm_stderr": 0.029680105565029036 }, "harness|hendrycksTest-professional_law|5": { "acc": 0.4217731421121252, "acc_stderr": 0.012612974369390975, "acc_norm": 0.4217731421121252, "acc_norm_stderr": 0.012612974369390975 }, "harness|hendrycksTest-professional_medicine|5": { "acc": 0.6066176470588235, "acc_stderr": 0.029674288281311155, "acc_norm": 0.6066176470588235, "acc_norm_stderr": 0.029674288281311155 }, "harness|hendrycksTest-professional_psychology|5": { "acc": 0.6094771241830066, "acc_stderr": 0.019737008998094597, "acc_norm": 0.6094771241830066, "acc_norm_stderr": 0.019737008998094597 }, "harness|hendrycksTest-public_relations|5": { "acc": 0.6272727272727273, "acc_stderr": 0.04631381319425465, "acc_norm": 0.6272727272727273, "acc_norm_stderr": 
0.04631381319425465 }, "harness|hendrycksTest-security_studies|5": { "acc": 0.6693877551020408, "acc_stderr": 0.030116426296540606, "acc_norm": 0.6693877551020408, "acc_norm_stderr": 0.030116426296540606 }, "harness|hendrycksTest-sociology|5": { "acc": 0.8109452736318408, "acc_stderr": 0.02768691358801303, "acc_norm": 0.8109452736318408, "acc_norm_stderr": 0.02768691358801303 }, "harness|hendrycksTest-us_foreign_policy|5": { "acc": 0.81, "acc_stderr": 0.039427724440366234, "acc_norm": 0.81, "acc_norm_stderr": 0.039427724440366234 }, "harness|hendrycksTest-virology|5": { "acc": 0.5180722891566265, "acc_stderr": 0.03889951252827216, "acc_norm": 0.5180722891566265, "acc_norm_stderr": 0.03889951252827216 }, "harness|hendrycksTest-world_religions|5": { "acc": 0.8245614035087719, "acc_stderr": 0.02917088550072767, "acc_norm": 0.8245614035087719, "acc_norm_stderr": 0.02917088550072767 }, "harness|truthfulqa:mc|0": { "mc1": 0.29008567931456547, "mc1_stderr": 0.01588623687420952, "mc2": 0.44316239933938906, "mc2_stderr": 0.014631197353059351 }, "harness|winogrande|5": { "acc": 0.7703235990528808, "acc_stderr": 0.011821645601838236 }, "harness|gsm8k|5": { "acc": 0.3244882486732373, "acc_stderr": 0.012896095359768107 } } ``` ## Dataset Details ### Dataset Description <!-- Provide a longer summary of what this dataset is. --> - **Curated by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] ### Dataset Sources [optional] <!-- Provide the basic links for the dataset. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the dataset is intended to be used. --> ### Direct Use <!-- This section describes suitable use cases for the dataset. 
--> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. --> [More Information Needed] ## Dataset Structure <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> [More Information Needed] ## Dataset Creation ### Curation Rationale <!-- Motivation for the creation of this dataset. --> [More Information Needed] ### Source Data <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). --> #### Data Collection and Processing <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. --> [More Information Needed] #### Who are the source data producers? <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. --> [More Information Needed] ### Annotations [optional] <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. --> #### Annotation process <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. --> [More Information Needed] #### Who are the annotators? <!-- This section describes the people or systems who created the annotations. 
--> [More Information Needed] #### Personal and Sensitive Information <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. ## Citation [optional] <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Dataset Card Authors [optional] [More Information Needed] ## Dataset Card Contact [More Information Needed]
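The per-task results shown earlier in this card are plain JSON, so they can be inspected without downloading the Parquet splits. Below is a minimal sketch of pulling the plain accuracy out of each task entry; the two-task excerpt is copied from the "Latest results" block above, and the variable names are illustrative only.

```python
import json

# Excerpt of the per-task results JSON from this card's "Latest results"
# section (task names and values copied verbatim from the card).
results_json = """
{
  "harness|winogrande|5": {"acc": 0.7703235990528808, "acc_stderr": 0.011821645601838236},
  "harness|gsm8k|5": {"acc": 0.3244882486732373, "acc_stderr": 0.012896095359768107}
}
"""

results = json.loads(results_json)

# Keep only the plain accuracy for each task, dropping the stderr fields.
accs = {task: metrics["acc"] for task, metrics in results.items()}

for task, acc in sorted(accs.items()):
    print(f"{task}: {acc:.4f}")
```

The same loop works unchanged on the full results file, since every task entry follows the same `{"acc": ..., "acc_stderr": ...}` shape.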
open-llm-leaderboard/details_BarraHome__rezephyr-dpo
[ "region:us" ]
2024-02-09T13:20:25+00:00
{"pretty_name": "Evaluation run of BarraHome/rezephyr-dpo", "dataset_summary": "Dataset automatically created during the evaluation run of model [BarraHome/rezephyr-dpo](https://huggingface.co/BarraHome/rezephyr-dpo) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_BarraHome__rezephyr-dpo\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2024-02-09T13:18:07.445187](https://huggingface.co/datasets/open-llm-leaderboard/details_BarraHome__rezephyr-dpo/blob/main/results_2024-02-09T13-18-07.445187.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks.
You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6025963080419425,\n \"acc_stderr\": 0.0331436677667824,\n \"acc_norm\": 0.6085579671532932,\n \"acc_norm_stderr\": 0.03382908262424393,\n \"mc1\": 0.29008567931456547,\n \"mc1_stderr\": 0.01588623687420952,\n \"mc2\": 0.44316239933938906,\n \"mc2_stderr\": 0.014631197353059351\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.5358361774744027,\n \"acc_stderr\": 0.01457381366473572,\n \"acc_norm\": 0.575938566552901,\n \"acc_norm_stderr\": 0.014441889627464398\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.616211909978092,\n \"acc_stderr\": 0.004853134271547766,\n \"acc_norm\": 0.8174666401115316,\n \"acc_norm_stderr\": 0.0038549403270910264\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.31,\n \"acc_stderr\": 0.04648231987117316,\n \"acc_norm\": 0.31,\n \"acc_norm_stderr\": 0.04648231987117316\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.5777777777777777,\n \"acc_stderr\": 0.04266763404099582,\n \"acc_norm\": 0.5777777777777777,\n \"acc_norm_stderr\": 0.04266763404099582\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.6052631578947368,\n \"acc_stderr\": 0.039777499346220734,\n \"acc_norm\": 0.6052631578947368,\n \"acc_norm_stderr\": 0.039777499346220734\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.57,\n \"acc_stderr\": 0.049756985195624284,\n \"acc_norm\": 0.57,\n \"acc_norm_stderr\": 0.049756985195624284\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.6830188679245283,\n \"acc_stderr\": 0.028637235639800893,\n \"acc_norm\": 0.6830188679245283,\n \"acc_norm_stderr\": 0.028637235639800893\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7222222222222222,\n \"acc_stderr\": 0.037455547914624555,\n \"acc_norm\": 0.7222222222222222,\n \"acc_norm_stderr\": 0.037455547914624555\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.45,\n 
\"acc_stderr\": 0.05,\n \"acc_norm\": 0.45,\n \"acc_norm_stderr\": 0.05\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.53,\n \"acc_stderr\": 0.05016135580465919,\n \"acc_norm\": 0.53,\n \"acc_norm_stderr\": 0.05016135580465919\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.34,\n \"acc_stderr\": 0.04760952285695235,\n \"acc_norm\": 0.34,\n \"acc_norm_stderr\": 0.04760952285695235\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6242774566473989,\n \"acc_stderr\": 0.036928207672648664,\n \"acc_norm\": 0.6242774566473989,\n \"acc_norm_stderr\": 0.036928207672648664\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.38235294117647056,\n \"acc_stderr\": 0.04835503696107223,\n \"acc_norm\": 0.38235294117647056,\n \"acc_norm_stderr\": 0.04835503696107223\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.73,\n \"acc_stderr\": 0.04461960433384739,\n \"acc_norm\": 0.73,\n \"acc_norm_stderr\": 0.04461960433384739\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.5404255319148936,\n \"acc_stderr\": 0.03257901482099835,\n \"acc_norm\": 0.5404255319148936,\n \"acc_norm_stderr\": 0.03257901482099835\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.42105263157894735,\n \"acc_stderr\": 0.046446020912223177,\n \"acc_norm\": 0.42105263157894735,\n \"acc_norm_stderr\": 0.046446020912223177\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.5172413793103449,\n \"acc_stderr\": 0.04164188720169375,\n \"acc_norm\": 0.5172413793103449,\n \"acc_norm_stderr\": 0.04164188720169375\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.40476190476190477,\n \"acc_stderr\": 0.025279850397404897,\n \"acc_norm\": 0.40476190476190477,\n \"acc_norm_stderr\": 0.025279850397404897\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.40476190476190477,\n \"acc_stderr\": 0.04390259265377562,\n \"acc_norm\": 
0.40476190476190477,\n \"acc_norm_stderr\": 0.04390259265377562\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.34,\n \"acc_stderr\": 0.04760952285695235,\n \"acc_norm\": 0.34,\n \"acc_norm_stderr\": 0.04760952285695235\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7580645161290323,\n \"acc_stderr\": 0.02436259969303108,\n \"acc_norm\": 0.7580645161290323,\n \"acc_norm_stderr\": 0.02436259969303108\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.4975369458128079,\n \"acc_stderr\": 0.03517945038691063,\n \"acc_norm\": 0.4975369458128079,\n \"acc_norm_stderr\": 0.03517945038691063\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.64,\n \"acc_stderr\": 0.048241815132442176,\n \"acc_norm\": 0.64,\n \"acc_norm_stderr\": 0.048241815132442176\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.7454545454545455,\n \"acc_stderr\": 0.03401506715249039,\n \"acc_norm\": 0.7454545454545455,\n \"acc_norm_stderr\": 0.03401506715249039\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.7727272727272727,\n \"acc_stderr\": 0.02985751567338642,\n \"acc_norm\": 0.7727272727272727,\n \"acc_norm_stderr\": 0.02985751567338642\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.7979274611398963,\n \"acc_stderr\": 0.02897908979429673,\n \"acc_norm\": 0.7979274611398963,\n \"acc_norm_stderr\": 0.02897908979429673\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.5743589743589743,\n \"acc_stderr\": 0.025069094387296525,\n \"acc_norm\": 0.5743589743589743,\n \"acc_norm_stderr\": 0.025069094387296525\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.34074074074074073,\n \"acc_stderr\": 0.028897748741131147,\n \"acc_norm\": 0.34074074074074073,\n \"acc_norm_stderr\": 0.028897748741131147\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 
0.6176470588235294,\n \"acc_stderr\": 0.031566630992154156,\n \"acc_norm\": 0.6176470588235294,\n \"acc_norm_stderr\": 0.031566630992154156\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.33774834437086093,\n \"acc_stderr\": 0.0386155754625517,\n \"acc_norm\": 0.33774834437086093,\n \"acc_norm_stderr\": 0.0386155754625517\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.7889908256880734,\n \"acc_stderr\": 0.01749392240411265,\n \"acc_norm\": 0.7889908256880734,\n \"acc_norm_stderr\": 0.01749392240411265\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.47685185185185186,\n \"acc_stderr\": 0.03406315360711507,\n \"acc_norm\": 0.47685185185185186,\n \"acc_norm_stderr\": 0.03406315360711507\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.7696078431372549,\n \"acc_stderr\": 0.029554292605695066,\n \"acc_norm\": 0.7696078431372549,\n \"acc_norm_stderr\": 0.029554292605695066\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.7383966244725738,\n \"acc_stderr\": 0.028609516716994934,\n \"acc_norm\": 0.7383966244725738,\n \"acc_norm_stderr\": 0.028609516716994934\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6367713004484304,\n \"acc_stderr\": 0.032277904428505,\n \"acc_norm\": 0.6367713004484304,\n \"acc_norm_stderr\": 0.032277904428505\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.7022900763358778,\n \"acc_stderr\": 0.040103589424622034,\n \"acc_norm\": 0.7022900763358778,\n \"acc_norm_stderr\": 0.040103589424622034\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.7933884297520661,\n \"acc_stderr\": 0.03695980128098823,\n \"acc_norm\": 0.7933884297520661,\n \"acc_norm_stderr\": 0.03695980128098823\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7129629629629629,\n \"acc_stderr\": 0.043733130409147614,\n \"acc_norm\": 0.7129629629629629,\n \"acc_norm_stderr\": 
0.043733130409147614\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.7055214723926381,\n \"acc_stderr\": 0.03581165790474082,\n \"acc_norm\": 0.7055214723926381,\n \"acc_norm_stderr\": 0.03581165790474082\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.42857142857142855,\n \"acc_stderr\": 0.04697113923010212,\n \"acc_norm\": 0.42857142857142855,\n \"acc_norm_stderr\": 0.04697113923010212\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.7572815533980582,\n \"acc_stderr\": 0.04245022486384495,\n \"acc_norm\": 0.7572815533980582,\n \"acc_norm_stderr\": 0.04245022486384495\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8846153846153846,\n \"acc_stderr\": 0.020930193185179333,\n \"acc_norm\": 0.8846153846153846,\n \"acc_norm_stderr\": 0.020930193185179333\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.69,\n \"acc_stderr\": 0.04648231987117316,\n \"acc_norm\": 0.69,\n \"acc_norm_stderr\": 0.04648231987117316\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.7943805874840357,\n \"acc_stderr\": 0.01445250045678583,\n \"acc_norm\": 0.7943805874840357,\n \"acc_norm_stderr\": 0.01445250045678583\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.684971098265896,\n \"acc_stderr\": 0.025009313790069713,\n \"acc_norm\": 0.684971098265896,\n \"acc_norm_stderr\": 0.025009313790069713\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.3217877094972067,\n \"acc_stderr\": 0.015624236160792582,\n \"acc_norm\": 0.3217877094972067,\n \"acc_norm_stderr\": 0.015624236160792582\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.6862745098039216,\n \"acc_stderr\": 0.02656892101545715,\n \"acc_norm\": 0.6862745098039216,\n \"acc_norm_stderr\": 0.02656892101545715\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.6945337620578779,\n \"acc_stderr\": 0.02616058445014045,\n \"acc_norm\": 0.6945337620578779,\n \"acc_norm_stderr\": 
0.02616058445014045\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.6604938271604939,\n \"acc_stderr\": 0.026348564412011624,\n \"acc_norm\": 0.6604938271604939,\n \"acc_norm_stderr\": 0.026348564412011624\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.450354609929078,\n \"acc_stderr\": 0.029680105565029036,\n \"acc_norm\": 0.450354609929078,\n \"acc_norm_stderr\": 0.029680105565029036\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4217731421121252,\n \"acc_stderr\": 0.012612974369390975,\n \"acc_norm\": 0.4217731421121252,\n \"acc_norm_stderr\": 0.012612974369390975\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.6066176470588235,\n \"acc_stderr\": 0.029674288281311155,\n \"acc_norm\": 0.6066176470588235,\n \"acc_norm_stderr\": 0.029674288281311155\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.6094771241830066,\n \"acc_stderr\": 0.019737008998094597,\n \"acc_norm\": 0.6094771241830066,\n \"acc_norm_stderr\": 0.019737008998094597\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6272727272727273,\n \"acc_stderr\": 0.04631381319425465,\n \"acc_norm\": 0.6272727272727273,\n \"acc_norm_stderr\": 0.04631381319425465\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.6693877551020408,\n \"acc_stderr\": 0.030116426296540606,\n \"acc_norm\": 0.6693877551020408,\n \"acc_norm_stderr\": 0.030116426296540606\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8109452736318408,\n \"acc_stderr\": 0.02768691358801303,\n \"acc_norm\": 0.8109452736318408,\n \"acc_norm_stderr\": 0.02768691358801303\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.81,\n \"acc_stderr\": 0.039427724440366234,\n \"acc_norm\": 0.81,\n \"acc_norm_stderr\": 0.039427724440366234\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5180722891566265,\n \"acc_stderr\": 0.03889951252827216,\n \"acc_norm\": 
0.5180722891566265,\n \"acc_norm_stderr\": 0.03889951252827216\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.8245614035087719,\n \"acc_stderr\": 0.02917088550072767,\n \"acc_norm\": 0.8245614035087719,\n \"acc_norm_stderr\": 0.02917088550072767\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.29008567931456547,\n \"mc1_stderr\": 0.01588623687420952,\n \"mc2\": 0.44316239933938906,\n \"mc2_stderr\": 0.014631197353059351\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7703235990528808,\n \"acc_stderr\": 0.011821645601838236\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.3244882486732373,\n \"acc_stderr\": 0.012896095359768107\n }\n}\n```", "repo_url": "https://huggingface.co/BarraHome/rezephyr-dpo", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2024_02_09T13_18_07.445187", "path": ["**/details_harness|arc:challenge|25_2024-02-09T13-18-07.445187.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2024-02-09T13-18-07.445187.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2024_02_09T13_18_07.445187", "path": ["**/details_harness|gsm8k|5_2024-02-09T13-18-07.445187.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2024-02-09T13-18-07.445187.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2024_02_09T13_18_07.445187", "path": ["**/details_harness|hellaswag|10_2024-02-09T13-18-07.445187.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2024-02-09T13-18-07.445187.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2024_02_09T13_18_07.445187", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-09T13-18-07.445187.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-02-09T13-18-07.445187.parquet", 
"**/details_harness|hendrycksTest-astronomy|5_2024-02-09T13-18-07.445187.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-02-09T13-18-07.445187.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-09T13-18-07.445187.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-02-09T13-18-07.445187.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-02-09T13-18-07.445187.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-02-09T13-18-07.445187.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-02-09T13-18-07.445187.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-02-09T13-18-07.445187.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-02-09T13-18-07.445187.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-02-09T13-18-07.445187.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-09T13-18-07.445187.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-02-09T13-18-07.445187.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-09T13-18-07.445187.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-09T13-18-07.445187.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-02-09T13-18-07.445187.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-02-09T13-18-07.445187.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-02-09T13-18-07.445187.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-09T13-18-07.445187.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-09T13-18-07.445187.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-09T13-18-07.445187.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-02-09T13-18-07.445187.parquet", 
"**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-09T13-18-07.445187.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-09T13-18-07.445187.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-09T13-18-07.445187.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-09T13-18-07.445187.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-02-09T13-18-07.445187.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-09T13-18-07.445187.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-09T13-18-07.445187.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-09T13-18-07.445187.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-09T13-18-07.445187.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-02-09T13-18-07.445187.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-02-09T13-18-07.445187.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-02-09T13-18-07.445187.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-02-09T13-18-07.445187.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-09T13-18-07.445187.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-02-09T13-18-07.445187.parquet", "**/details_harness|hendrycksTest-management|5_2024-02-09T13-18-07.445187.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-02-09T13-18-07.445187.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-02-09T13-18-07.445187.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-02-09T13-18-07.445187.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-02-09T13-18-07.445187.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-09T13-18-07.445187.parquet", 
"**/details_harness|hendrycksTest-nutrition|5_2024-02-09T13-18-07.445187.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-02-09T13-18-07.445187.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-02-09T13-18-07.445187.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-02-09T13-18-07.445187.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-02-09T13-18-07.445187.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-02-09T13-18-07.445187.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-02-09T13-18-07.445187.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-02-09T13-18-07.445187.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-02-09T13-18-07.445187.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-02-09T13-18-07.445187.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-09T13-18-07.445187.parquet", "**/details_harness|hendrycksTest-virology|5_2024-02-09T13-18-07.445187.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-02-09T13-18-07.445187.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-09T13-18-07.445187.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-02-09T13-18-07.445187.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-02-09T13-18-07.445187.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-02-09T13-18-07.445187.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-09T13-18-07.445187.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-02-09T13-18-07.445187.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-02-09T13-18-07.445187.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-02-09T13-18-07.445187.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-02-09T13-18-07.445187.parquet", 
"**/details_harness|hendrycksTest-college_medicine|5_2024-02-09T13-18-07.445187.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-02-09T13-18-07.445187.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-02-09T13-18-07.445187.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-09T13-18-07.445187.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-02-09T13-18-07.445187.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-09T13-18-07.445187.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-09T13-18-07.445187.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-02-09T13-18-07.445187.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-02-09T13-18-07.445187.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-02-09T13-18-07.445187.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-09T13-18-07.445187.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-09T13-18-07.445187.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-09T13-18-07.445187.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-02-09T13-18-07.445187.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-09T13-18-07.445187.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-09T13-18-07.445187.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-09T13-18-07.445187.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-09T13-18-07.445187.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-02-09T13-18-07.445187.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-09T13-18-07.445187.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-09T13-18-07.445187.parquet", 
"**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-09T13-18-07.445187.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-09T13-18-07.445187.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-02-09T13-18-07.445187.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-02-09T13-18-07.445187.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-02-09T13-18-07.445187.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-02-09T13-18-07.445187.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-09T13-18-07.445187.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-02-09T13-18-07.445187.parquet", "**/details_harness|hendrycksTest-management|5_2024-02-09T13-18-07.445187.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-02-09T13-18-07.445187.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-02-09T13-18-07.445187.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-02-09T13-18-07.445187.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-02-09T13-18-07.445187.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-09T13-18-07.445187.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-02-09T13-18-07.445187.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-02-09T13-18-07.445187.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-02-09T13-18-07.445187.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-02-09T13-18-07.445187.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-02-09T13-18-07.445187.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-02-09T13-18-07.445187.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-02-09T13-18-07.445187.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-02-09T13-18-07.445187.parquet", 
"**/details_harness|hendrycksTest-security_studies|5_2024-02-09T13-18-07.445187.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-02-09T13-18-07.445187.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-09T13-18-07.445187.parquet", "**/details_harness|hendrycksTest-virology|5_2024-02-09T13-18-07.445187.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-02-09T13-18-07.445187.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2024_02_09T13_18_07.445187", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-09T13-18-07.445187.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-09T13-18-07.445187.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2024_02_09T13_18_07.445187", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-02-09T13-18-07.445187.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-02-09T13-18-07.445187.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2024_02_09T13_18_07.445187", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-02-09T13-18-07.445187.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-02-09T13-18-07.445187.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2024_02_09T13_18_07.445187", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-02-09T13-18-07.445187.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-02-09T13-18-07.445187.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2024_02_09T13_18_07.445187", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-09T13-18-07.445187.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-09T13-18-07.445187.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2024_02_09T13_18_07.445187", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-02-09T13-18-07.445187.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-02-09T13-18-07.445187.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2024_02_09T13_18_07.445187", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-02-09T13-18-07.445187.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-02-09T13-18-07.445187.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2024_02_09T13_18_07.445187", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-02-09T13-18-07.445187.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-02-09T13-18-07.445187.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2024_02_09T13_18_07.445187", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-02-09T13-18-07.445187.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-02-09T13-18-07.445187.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2024_02_09T13_18_07.445187", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-02-09T13-18-07.445187.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-02-09T13-18-07.445187.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2024_02_09T13_18_07.445187", "path": 
["**/details_harness|hendrycksTest-college_physics|5_2024-02-09T13-18-07.445187.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-02-09T13-18-07.445187.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2024_02_09T13_18_07.445187", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-02-09T13-18-07.445187.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-02-09T13-18-07.445187.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2024_02_09T13_18_07.445187", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-09T13-18-07.445187.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-09T13-18-07.445187.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2024_02_09T13_18_07.445187", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-02-09T13-18-07.445187.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-02-09T13-18-07.445187.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2024_02_09T13_18_07.445187", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-09T13-18-07.445187.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-09T13-18-07.445187.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2024_02_09T13_18_07.445187", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-09T13-18-07.445187.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-09T13-18-07.445187.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", 
"data_files": [{"split": "2024_02_09T13_18_07.445187", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-02-09T13-18-07.445187.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-02-09T13-18-07.445187.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2024_02_09T13_18_07.445187", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-02-09T13-18-07.445187.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-02-09T13-18-07.445187.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2024_02_09T13_18_07.445187", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-02-09T13-18-07.445187.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-02-09T13-18-07.445187.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2024_02_09T13_18_07.445187", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-09T13-18-07.445187.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-09T13-18-07.445187.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2024_02_09T13_18_07.445187", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-09T13-18-07.445187.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-09T13-18-07.445187.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2024_02_09T13_18_07.445187", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-09T13-18-07.445187.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-09T13-18-07.445187.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2024_02_09T13_18_07.445187", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-02-09T13-18-07.445187.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-02-09T13-18-07.445187.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2024_02_09T13_18_07.445187", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-09T13-18-07.445187.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-09T13-18-07.445187.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2024_02_09T13_18_07.445187", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-09T13-18-07.445187.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-09T13-18-07.445187.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2024_02_09T13_18_07.445187", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-09T13-18-07.445187.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-09T13-18-07.445187.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2024_02_09T13_18_07.445187", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-09T13-18-07.445187.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-09T13-18-07.445187.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2024_02_09T13_18_07.445187", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-02-09T13-18-07.445187.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-02-09T13-18-07.445187.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2024_02_09T13_18_07.445187", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-09T13-18-07.445187.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-09T13-18-07.445187.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2024_02_09T13_18_07.445187", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-09T13-18-07.445187.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-09T13-18-07.445187.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2024_02_09T13_18_07.445187", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-09T13-18-07.445187.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-09T13-18-07.445187.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2024_02_09T13_18_07.445187", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-09T13-18-07.445187.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-09T13-18-07.445187.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2024_02_09T13_18_07.445187", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-02-09T13-18-07.445187.parquet"]}, {"split": "latest", 
"path": ["**/details_harness|hendrycksTest-human_aging|5_2024-02-09T13-18-07.445187.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2024_02_09T13_18_07.445187", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-02-09T13-18-07.445187.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-02-09T13-18-07.445187.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2024_02_09T13_18_07.445187", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-02-09T13-18-07.445187.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-02-09T13-18-07.445187.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2024_02_09T13_18_07.445187", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-02-09T13-18-07.445187.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-02-09T13-18-07.445187.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2024_02_09T13_18_07.445187", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-09T13-18-07.445187.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-09T13-18-07.445187.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2024_02_09T13_18_07.445187", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-02-09T13-18-07.445187.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-02-09T13-18-07.445187.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2024_02_09T13_18_07.445187", "path": ["**/details_harness|hendrycksTest-management|5_2024-02-09T13-18-07.445187.parquet"]}, 
{"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2024-02-09T13-18-07.445187.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2024_02_09T13_18_07.445187", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-02-09T13-18-07.445187.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-02-09T13-18-07.445187.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2024_02_09T13_18_07.445187", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-02-09T13-18-07.445187.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-02-09T13-18-07.445187.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2024_02_09T13_18_07.445187", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-02-09T13-18-07.445187.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-02-09T13-18-07.445187.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2024_02_09T13_18_07.445187", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-02-09T13-18-07.445187.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-02-09T13-18-07.445187.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2024_02_09T13_18_07.445187", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-09T13-18-07.445187.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-09T13-18-07.445187.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2024_02_09T13_18_07.445187", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-02-09T13-18-07.445187.parquet"]}, {"split": "latest", 
"path": ["**/details_harness|hendrycksTest-nutrition|5_2024-02-09T13-18-07.445187.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2024_02_09T13_18_07.445187", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-02-09T13-18-07.445187.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-02-09T13-18-07.445187.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2024_02_09T13_18_07.445187", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-02-09T13-18-07.445187.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-02-09T13-18-07.445187.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2024_02_09T13_18_07.445187", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-02-09T13-18-07.445187.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-02-09T13-18-07.445187.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2024_02_09T13_18_07.445187", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-02-09T13-18-07.445187.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-02-09T13-18-07.445187.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2024_02_09T13_18_07.445187", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-02-09T13-18-07.445187.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-02-09T13-18-07.445187.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2024_02_09T13_18_07.445187", "path": 
["**/details_harness|hendrycksTest-professional_psychology|5_2024-02-09T13-18-07.445187.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-02-09T13-18-07.445187.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2024_02_09T13_18_07.445187", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-02-09T13-18-07.445187.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-02-09T13-18-07.445187.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2024_02_09T13_18_07.445187", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-02-09T13-18-07.445187.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-02-09T13-18-07.445187.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2024_02_09T13_18_07.445187", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-02-09T13-18-07.445187.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-02-09T13-18-07.445187.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2024_02_09T13_18_07.445187", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-09T13-18-07.445187.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-09T13-18-07.445187.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2024_02_09T13_18_07.445187", "path": ["**/details_harness|hendrycksTest-virology|5_2024-02-09T13-18-07.445187.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2024-02-09T13-18-07.445187.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2024_02_09T13_18_07.445187", 
"path": ["**/details_harness|hendrycksTest-world_religions|5_2024-02-09T13-18-07.445187.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-02-09T13-18-07.445187.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2024_02_09T13_18_07.445187", "path": ["**/details_harness|truthfulqa:mc|0_2024-02-09T13-18-07.445187.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2024-02-09T13-18-07.445187.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2024_02_09T13_18_07.445187", "path": ["**/details_harness|winogrande|5_2024-02-09T13-18-07.445187.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2024-02-09T13-18-07.445187.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2024_02_09T13_18_07.445187", "path": ["results_2024-02-09T13-18-07.445187.parquet"]}, {"split": "latest", "path": ["results_2024-02-09T13-18-07.445187.parquet"]}]}]}
2024-02-09T13:20:47+00:00
[]
[]
TAGS #region-us
# Dataset Card for Evaluation run of BarraHome/rezephyr-dpo Dataset automatically created during the evaluation run of model BarraHome/rezephyr-dpo on the Open LLM Leaderboard. The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard). To load the details from a run, you can for instance do the following: ## Latest results These are the latest results from run 2024-02-09T13:18:07.445187 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval): ## Dataset Details ### Dataset Description - Curated by: - Funded by [optional]: - Shared by [optional]: - Language(s) (NLP): - License: ### Dataset Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Out-of-Scope Use ## Dataset Structure ## Dataset Creation ### Curation Rationale ### Source Data #### Data Collection and Processing #### Who are the source data producers? ### Annotations [optional] #### Annotation process #### Who are the annotators? #### Personal and Sensitive Information ## Bias, Risks, and Limitations ### Recommendations Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Dataset Card Authors [optional] ## Dataset Card Contact
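The stripped card above says "To load the details from a run, you can for instance do the following:" but the accompanying code block was lost in extraction. A minimal sketch, assuming the leaderboard's usual `details_<org>__<model>` repo-naming convention for this model (not verified against the Hub here):

```python
# Sketch of loading one task's details split for BarraHome/rezephyr-dpo,
# following the naming convention the auto-generated cards use (assumed).

def details_repo(org: str, model: str) -> str:
    """Build the leaderboard details-repo id: details_<org>__<model>."""
    return f"open-llm-leaderboard/details_{org}__{model}"

repo = details_repo("BarraHome", "rezephyr-dpo")
print(repo)  # open-llm-leaderboard/details_BarraHome__rezephyr-dpo

# With the `datasets` library installed, one task's details then load as:
# from datasets import load_dataset
# data = load_dataset(repo, "harness_winogrande_5", split="latest")
```

The `"latest"` split name matches the splits declared in the config metadata above; each timestamped split holds the results of one evaluation run.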
[ "# Dataset Card for Evaluation run of BarraHome/rezephyr-dpo\n\n\n\nDataset automatically created during the evaluation run of model BarraHome/rezephyr-dpo on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2024-02-09T13:18:07.445187(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. 
More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Evaluation run of BarraHome/rezephyr-dpo\n\n\n\nDataset automatically created during the evaluation run of model BarraHome/rezephyr-dpo on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2024-02-09T13:18:07.445187(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. 
More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
a265762a474d63ea6da299457f689d0365ac352d
# Dataset Card for Evaluation run of kenhktsui/nano-phi-115M-v0.1 <!-- Provide a quick summary of the dataset. --> Dataset automatically created during the evaluation run of model [kenhktsui/nano-phi-115M-v0.1](https://huggingface.co/kenhktsui/nano-phi-115M-v0.1) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)). To load the details from a run, you can for instance do the following: ```python from datasets import load_dataset data = load_dataset("open-llm-leaderboard/details_kenhktsui__nano-phi-115M-v0.1", "harness_winogrande_5", split="train") ``` ## Latest results These are the [latest results from run 2024-02-09T13:21:05.777292](https://huggingface.co/datasets/open-llm-leaderboard/details_kenhktsui__nano-phi-115M-v0.1/blob/main/results_2024-02-09T13-21-05.777292.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the "latest" split for each eval): ```python { "all": { "acc": 0.2525471471328879, "acc_stderr": 0.030613616045154307, "acc_norm": 0.25320771353725297, "acc_norm_stderr": 0.031427270253272646, "mc1": 0.25091799265605874, "mc1_stderr": 0.015176985027707693, "mc2": 0.4600301977928377, "mc2_stderr": 0.015417429651937565 }, "harness|arc:challenge|25": { "acc": 0.1825938566552901, "acc_stderr": 0.011289730684564993, "acc_norm": 0.21928327645051193, "acc_norm_stderr": 0.012091245787615735 }, "harness|hellaswag|10": { "acc": 0.2731527584146584, "acc_stderr": 0.004446680081493753, "acc_norm": 0.2786297550288787, "acc_norm_stderr": 0.0044740864899406865 }, "harness|hendrycksTest-abstract_algebra|5": { "acc": 0.23, "acc_stderr": 0.04229525846816507, "acc_norm": 0.23, "acc_norm_stderr": 0.04229525846816507 }, "harness|hendrycksTest-anatomy|5": { "acc": 0.3111111111111111, "acc_stderr": 0.039992628766177235, "acc_norm": 0.3111111111111111, "acc_norm_stderr": 0.039992628766177235 }, "harness|hendrycksTest-astronomy|5": { "acc": 0.21710526315789475, "acc_stderr": 0.033550453048829205, "acc_norm": 0.21710526315789475, "acc_norm_stderr": 0.033550453048829205 }, "harness|hendrycksTest-business_ethics|5": { "acc": 0.25, "acc_stderr": 0.04351941398892446, "acc_norm": 0.25, "acc_norm_stderr": 0.04351941398892446 }, "harness|hendrycksTest-clinical_knowledge|5": { "acc": 0.22264150943396227, "acc_stderr": 0.025604233470899098, "acc_norm": 0.22264150943396227, "acc_norm_stderr": 0.025604233470899098 }, "harness|hendrycksTest-college_biology|5": { "acc": 0.22916666666666666, "acc_stderr": 0.03514697467862388, "acc_norm": 0.22916666666666666, "acc_norm_stderr": 0.03514697467862388 }, "harness|hendrycksTest-college_chemistry|5": { "acc": 0.17, "acc_stderr": 0.03775251680686371, "acc_norm": 0.17, "acc_norm_stderr": 0.03775251680686371 }, "harness|hendrycksTest-college_computer_science|5": { "acc": 0.26, "acc_stderr": 0.04408440022768078, "acc_norm": 0.26, 
"acc_norm_stderr": 0.04408440022768078 }, "harness|hendrycksTest-college_mathematics|5": { "acc": 0.25, "acc_stderr": 0.04351941398892446, "acc_norm": 0.25, "acc_norm_stderr": 0.04351941398892446 }, "harness|hendrycksTest-college_medicine|5": { "acc": 0.2023121387283237, "acc_stderr": 0.030631145539198816, "acc_norm": 0.2023121387283237, "acc_norm_stderr": 0.030631145539198816 }, "harness|hendrycksTest-college_physics|5": { "acc": 0.3235294117647059, "acc_stderr": 0.046550104113196177, "acc_norm": 0.3235294117647059, "acc_norm_stderr": 0.046550104113196177 }, "harness|hendrycksTest-computer_security|5": { "acc": 0.26, "acc_stderr": 0.044084400227680794, "acc_norm": 0.26, "acc_norm_stderr": 0.044084400227680794 }, "harness|hendrycksTest-conceptual_physics|5": { "acc": 0.2425531914893617, "acc_stderr": 0.028020226271200217, "acc_norm": 0.2425531914893617, "acc_norm_stderr": 0.028020226271200217 }, "harness|hendrycksTest-econometrics|5": { "acc": 0.22807017543859648, "acc_stderr": 0.03947152782669415, "acc_norm": 0.22807017543859648, "acc_norm_stderr": 0.03947152782669415 }, "harness|hendrycksTest-electrical_engineering|5": { "acc": 0.22758620689655173, "acc_stderr": 0.03493950380131184, "acc_norm": 0.22758620689655173, "acc_norm_stderr": 0.03493950380131184 }, "harness|hendrycksTest-elementary_mathematics|5": { "acc": 0.24603174603174602, "acc_stderr": 0.022182037202948368, "acc_norm": 0.24603174603174602, "acc_norm_stderr": 0.022182037202948368 }, "harness|hendrycksTest-formal_logic|5": { "acc": 0.15079365079365079, "acc_stderr": 0.03200686497287392, "acc_norm": 0.15079365079365079, "acc_norm_stderr": 0.03200686497287392 }, "harness|hendrycksTest-global_facts|5": { "acc": 0.3, "acc_stderr": 0.046056618647183814, "acc_norm": 0.3, "acc_norm_stderr": 0.046056618647183814 }, "harness|hendrycksTest-high_school_biology|5": { "acc": 0.3387096774193548, "acc_stderr": 0.026923446059302844, "acc_norm": 0.3387096774193548, "acc_norm_stderr": 0.026923446059302844 }, 
"harness|hendrycksTest-high_school_chemistry|5": { "acc": 0.29064039408866993, "acc_stderr": 0.0319474007226554, "acc_norm": 0.29064039408866993, "acc_norm_stderr": 0.0319474007226554 }, "harness|hendrycksTest-high_school_computer_science|5": { "acc": 0.31, "acc_stderr": 0.04648231987117317, "acc_norm": 0.31, "acc_norm_stderr": 0.04648231987117317 }, "harness|hendrycksTest-high_school_european_history|5": { "acc": 0.21818181818181817, "acc_stderr": 0.03225078108306289, "acc_norm": 0.21818181818181817, "acc_norm_stderr": 0.03225078108306289 }, "harness|hendrycksTest-high_school_geography|5": { "acc": 0.3181818181818182, "acc_stderr": 0.0331847733384533, "acc_norm": 0.3181818181818182, "acc_norm_stderr": 0.0331847733384533 }, "harness|hendrycksTest-high_school_government_and_politics|5": { "acc": 0.20207253886010362, "acc_stderr": 0.02897908979429673, "acc_norm": 0.20207253886010362, "acc_norm_stderr": 0.02897908979429673 }, "harness|hendrycksTest-high_school_macroeconomics|5": { "acc": 0.24871794871794872, "acc_stderr": 0.0219169577092138, "acc_norm": 0.24871794871794872, "acc_norm_stderr": 0.0219169577092138 }, "harness|hendrycksTest-high_school_mathematics|5": { "acc": 0.2777777777777778, "acc_stderr": 0.027309140588230186, "acc_norm": 0.2777777777777778, "acc_norm_stderr": 0.027309140588230186 }, "harness|hendrycksTest-high_school_microeconomics|5": { "acc": 0.3277310924369748, "acc_stderr": 0.03048991141767323, "acc_norm": 0.3277310924369748, "acc_norm_stderr": 0.03048991141767323 }, "harness|hendrycksTest-high_school_physics|5": { "acc": 0.32450331125827814, "acc_stderr": 0.03822746937658753, "acc_norm": 0.32450331125827814, "acc_norm_stderr": 0.03822746937658753 }, "harness|hendrycksTest-high_school_psychology|5": { "acc": 0.24954128440366974, "acc_stderr": 0.018553897629501617, "acc_norm": 0.24954128440366974, "acc_norm_stderr": 0.018553897629501617 }, "harness|hendrycksTest-high_school_statistics|5": { "acc": 0.4722222222222222, "acc_stderr": 
0.0340470532865388, "acc_norm": 0.4722222222222222, "acc_norm_stderr": 0.0340470532865388 }, "harness|hendrycksTest-high_school_us_history|5": { "acc": 0.2696078431372549, "acc_stderr": 0.03114557065948678, "acc_norm": 0.2696078431372549, "acc_norm_stderr": 0.03114557065948678 }, "harness|hendrycksTest-high_school_world_history|5": { "acc": 0.21518987341772153, "acc_stderr": 0.026750826994676152, "acc_norm": 0.21518987341772153, "acc_norm_stderr": 0.026750826994676152 }, "harness|hendrycksTest-human_aging|5": { "acc": 0.19730941704035873, "acc_stderr": 0.02670985334496796, "acc_norm": 0.19730941704035873, "acc_norm_stderr": 0.02670985334496796 }, "harness|hendrycksTest-human_sexuality|5": { "acc": 0.2824427480916031, "acc_stderr": 0.03948406125768361, "acc_norm": 0.2824427480916031, "acc_norm_stderr": 0.03948406125768361 }, "harness|hendrycksTest-international_law|5": { "acc": 0.23140495867768596, "acc_stderr": 0.03849856098794088, "acc_norm": 0.23140495867768596, "acc_norm_stderr": 0.03849856098794088 }, "harness|hendrycksTest-jurisprudence|5": { "acc": 0.2222222222222222, "acc_stderr": 0.040191074725573483, "acc_norm": 0.2222222222222222, "acc_norm_stderr": 0.040191074725573483 }, "harness|hendrycksTest-logical_fallacies|5": { "acc": 0.294478527607362, "acc_stderr": 0.03581165790474082, "acc_norm": 0.294478527607362, "acc_norm_stderr": 0.03581165790474082 }, "harness|hendrycksTest-machine_learning|5": { "acc": 0.2767857142857143, "acc_stderr": 0.04246624336697624, "acc_norm": 0.2767857142857143, "acc_norm_stderr": 0.04246624336697624 }, "harness|hendrycksTest-management|5": { "acc": 0.20388349514563106, "acc_stderr": 0.03989139859531772, "acc_norm": 0.20388349514563106, "acc_norm_stderr": 0.03989139859531772 }, "harness|hendrycksTest-marketing|5": { "acc": 0.19658119658119658, "acc_stderr": 0.02603538609895129, "acc_norm": 0.19658119658119658, "acc_norm_stderr": 0.02603538609895129 }, "harness|hendrycksTest-medical_genetics|5": { "acc": 0.28, "acc_stderr": 
0.04512608598542127, "acc_norm": 0.28, "acc_norm_stderr": 0.04512608598542127 }, "harness|hendrycksTest-miscellaneous|5": { "acc": 0.2720306513409962, "acc_stderr": 0.015913367447500524, "acc_norm": 0.2720306513409962, "acc_norm_stderr": 0.015913367447500524 }, "harness|hendrycksTest-moral_disputes|5": { "acc": 0.20520231213872833, "acc_stderr": 0.021742519835276287, "acc_norm": 0.20520231213872833, "acc_norm_stderr": 0.021742519835276287 }, "harness|hendrycksTest-moral_scenarios|5": { "acc": 0.24692737430167597, "acc_stderr": 0.014422292204808835, "acc_norm": 0.24692737430167597, "acc_norm_stderr": 0.014422292204808835 }, "harness|hendrycksTest-nutrition|5": { "acc": 0.2647058823529412, "acc_stderr": 0.02526169121972948, "acc_norm": 0.2647058823529412, "acc_norm_stderr": 0.02526169121972948 }, "harness|hendrycksTest-philosophy|5": { "acc": 0.19292604501607716, "acc_stderr": 0.022411516780911363, "acc_norm": 0.19292604501607716, "acc_norm_stderr": 0.022411516780911363 }, "harness|hendrycksTest-prehistory|5": { "acc": 0.23765432098765432, "acc_stderr": 0.023683591837008557, "acc_norm": 0.23765432098765432, "acc_norm_stderr": 0.023683591837008557 }, "harness|hendrycksTest-professional_accounting|5": { "acc": 0.24822695035460993, "acc_stderr": 0.025770015644290382, "acc_norm": 0.24822695035460993, "acc_norm_stderr": 0.025770015644290382 }, "harness|hendrycksTest-professional_law|5": { "acc": 0.24641460234680573, "acc_stderr": 0.011005971399927235, "acc_norm": 0.24641460234680573, "acc_norm_stderr": 0.011005971399927235 }, "harness|hendrycksTest-professional_medicine|5": { "acc": 0.4264705882352941, "acc_stderr": 0.03004261583271486, "acc_norm": 0.4264705882352941, "acc_norm_stderr": 0.03004261583271486 }, "harness|hendrycksTest-professional_psychology|5": { "acc": 0.2630718954248366, "acc_stderr": 0.01781267654232065, "acc_norm": 0.2630718954248366, "acc_norm_stderr": 0.01781267654232065 }, "harness|hendrycksTest-public_relations|5": { "acc": 0.17272727272727273, 
"acc_stderr": 0.03620691833929218, "acc_norm": 0.17272727272727273, "acc_norm_stderr": 0.03620691833929218 }, "harness|hendrycksTest-security_studies|5": { "acc": 0.2, "acc_stderr": 0.025607375986579153, "acc_norm": 0.2, "acc_norm_stderr": 0.025607375986579153 }, "harness|hendrycksTest-sociology|5": { "acc": 0.22885572139303484, "acc_stderr": 0.029705284056772436, "acc_norm": 0.22885572139303484, "acc_norm_stderr": 0.029705284056772436 }, "harness|hendrycksTest-us_foreign_policy|5": { "acc": 0.24, "acc_stderr": 0.04292346959909283, "acc_norm": 0.24, "acc_norm_stderr": 0.04292346959909283 }, "harness|hendrycksTest-virology|5": { "acc": 0.20481927710843373, "acc_stderr": 0.03141784291663925, "acc_norm": 0.20481927710843373, "acc_norm_stderr": 0.03141784291663925 }, "harness|hendrycksTest-world_religions|5": { "acc": 0.2222222222222222, "acc_stderr": 0.03188578017686398, "acc_norm": 0.2222222222222222, "acc_norm_stderr": 0.03188578017686398 }, "harness|truthfulqa:mc|0": { "mc1": 0.25091799265605874, "mc1_stderr": 0.015176985027707693, "mc2": 0.4600301977928377, "mc2_stderr": 0.015417429651937565 }, "harness|winogrande|5": { "acc": 0.5082872928176796, "acc_stderr": 0.014050555322824194 }, "harness|gsm8k|5": { "acc": 0.0, "acc_stderr": 0.0 } } ``` ## Dataset Details ### Dataset Description <!-- Provide a longer summary of what this dataset is. --> - **Curated by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] ### Dataset Sources [optional] <!-- Provide the basic links for the dataset. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the dataset is intended to be used. --> ### Direct Use <!-- This section describes suitable use cases for the dataset. 
--> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. --> [More Information Needed] ## Dataset Structure <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> [More Information Needed] ## Dataset Creation ### Curation Rationale <!-- Motivation for the creation of this dataset. --> [More Information Needed] ### Source Data <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). --> #### Data Collection and Processing <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. --> [More Information Needed] #### Who are the source data producers? <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. --> [More Information Needed] ### Annotations [optional] <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. --> #### Annotation process <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. --> [More Information Needed] #### Who are the annotators? <!-- This section describes the people or systems who created the annotations. 
--> [More Information Needed] #### Personal and Sensitive Information <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. ## Citation [optional] <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Dataset Card Authors [optional] [More Information Needed] ## Dataset Card Contact [More Information Needed]
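The aggregated results above are plain JSON, so the headline metrics can be pulled out with nothing but the standard library. The sketch below embeds an excerpt of the `"all"` block from the latest run (values copied from the card above; the variable names are illustrative, not part of the dataset schema):

```python
import json

# Excerpt of the aggregated "all" block from
# results_2024-02-09T13-21-05.777292.json (values copied from the card above).
results = json.loads("""
{
  "all": {
    "acc": 0.2525471471328879,
    "acc_stderr": 0.030613616045154307,
    "acc_norm": 0.25320771353725297,
    "acc_norm_stderr": 0.031427270253272646,
    "mc1": 0.25091799265605874,
    "mc1_stderr": 0.015176985027707693,
    "mc2": 0.4600301977928377,
    "mc2_stderr": 0.015417429651937565
  }
}
""")

# Print each aggregated metric rounded to four decimal places.
for metric, value in results["all"].items():
    print(f"{metric}: {value:.4f}")
```

Per-task details (one JSON object per eval harness task) can be loaded with the `load_dataset` call shown in the dataset summary; the parsing above applies equally to any of the per-run results files linked from the card.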
open-llm-leaderboard/details_kenhktsui__nano-phi-115M-v0.1
[ "region:us" ]
2024-02-09T13:22:26+00:00
{"pretty_name": "Evaluation run of kenhktsui/nano-phi-115M-v0.1", "dataset_summary": "Dataset automatically created during the evaluation run of model [kenhktsui/nano-phi-115M-v0.1](https://huggingface.co/kenhktsui/nano-phi-115M-v0.1) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_kenhktsui__nano-phi-115M-v0.1\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2024-02-09T13:21:05.777292](https://huggingface.co/datasets/open-llm-leaderboard/details_kenhktsui__nano-phi-115M-v0.1/blob/main/results_2024-02-09T13-21-05.777292.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.2525471471328879,\n \"acc_stderr\": 0.030613616045154307,\n \"acc_norm\": 0.25320771353725297,\n \"acc_norm_stderr\": 0.031427270253272646,\n \"mc1\": 0.25091799265605874,\n \"mc1_stderr\": 0.015176985027707693,\n \"mc2\": 0.4600301977928377,\n \"mc2_stderr\": 0.015417429651937565\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.1825938566552901,\n \"acc_stderr\": 0.011289730684564993,\n \"acc_norm\": 0.21928327645051193,\n \"acc_norm_stderr\": 0.012091245787615735\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.2731527584146584,\n \"acc_stderr\": 0.004446680081493753,\n \"acc_norm\": 0.2786297550288787,\n \"acc_norm_stderr\": 0.0044740864899406865\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.23,\n \"acc_stderr\": 0.04229525846816507,\n \"acc_norm\": 0.23,\n \"acc_norm_stderr\": 0.04229525846816507\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.3111111111111111,\n \"acc_stderr\": 0.039992628766177235,\n \"acc_norm\": 0.3111111111111111,\n \"acc_norm_stderr\": 0.039992628766177235\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.21710526315789475,\n \"acc_stderr\": 0.033550453048829205,\n \"acc_norm\": 0.21710526315789475,\n \"acc_norm_stderr\": 0.033550453048829205\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.25,\n \"acc_stderr\": 0.04351941398892446,\n \"acc_norm\": 0.25,\n \"acc_norm_stderr\": 0.04351941398892446\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.22264150943396227,\n \"acc_stderr\": 0.025604233470899098,\n \"acc_norm\": 0.22264150943396227,\n \"acc_norm_stderr\": 0.025604233470899098\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.22916666666666666,\n \"acc_stderr\": 0.03514697467862388,\n \"acc_norm\": 0.22916666666666666,\n \"acc_norm_stderr\": 0.03514697467862388\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n 
\"acc\": 0.17,\n \"acc_stderr\": 0.03775251680686371,\n \"acc_norm\": 0.17,\n \"acc_norm_stderr\": 0.03775251680686371\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.26,\n \"acc_stderr\": 0.04408440022768078,\n \"acc_norm\": 0.26,\n \"acc_norm_stderr\": 0.04408440022768078\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.25,\n \"acc_stderr\": 0.04351941398892446,\n \"acc_norm\": 0.25,\n \"acc_norm_stderr\": 0.04351941398892446\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.2023121387283237,\n \"acc_stderr\": 0.030631145539198816,\n \"acc_norm\": 0.2023121387283237,\n \"acc_norm_stderr\": 0.030631145539198816\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.3235294117647059,\n \"acc_stderr\": 0.046550104113196177,\n \"acc_norm\": 0.3235294117647059,\n \"acc_norm_stderr\": 0.046550104113196177\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.26,\n \"acc_stderr\": 0.044084400227680794,\n \"acc_norm\": 0.26,\n \"acc_norm_stderr\": 0.044084400227680794\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.2425531914893617,\n \"acc_stderr\": 0.028020226271200217,\n \"acc_norm\": 0.2425531914893617,\n \"acc_norm_stderr\": 0.028020226271200217\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.22807017543859648,\n \"acc_stderr\": 0.03947152782669415,\n \"acc_norm\": 0.22807017543859648,\n \"acc_norm_stderr\": 0.03947152782669415\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.22758620689655173,\n \"acc_stderr\": 0.03493950380131184,\n \"acc_norm\": 0.22758620689655173,\n \"acc_norm_stderr\": 0.03493950380131184\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.24603174603174602,\n \"acc_stderr\": 0.022182037202948368,\n \"acc_norm\": 0.24603174603174602,\n \"acc_norm_stderr\": 0.022182037202948368\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.15079365079365079,\n 
\"acc_stderr\": 0.03200686497287392,\n \"acc_norm\": 0.15079365079365079,\n \"acc_norm_stderr\": 0.03200686497287392\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.3,\n \"acc_stderr\": 0.046056618647183814,\n \"acc_norm\": 0.3,\n \"acc_norm_stderr\": 0.046056618647183814\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.3387096774193548,\n \"acc_stderr\": 0.026923446059302844,\n \"acc_norm\": 0.3387096774193548,\n \"acc_norm_stderr\": 0.026923446059302844\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.29064039408866993,\n \"acc_stderr\": 0.0319474007226554,\n \"acc_norm\": 0.29064039408866993,\n \"acc_norm_stderr\": 0.0319474007226554\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.31,\n \"acc_stderr\": 0.04648231987117317,\n \"acc_norm\": 0.31,\n \"acc_norm_stderr\": 0.04648231987117317\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.21818181818181817,\n \"acc_stderr\": 0.03225078108306289,\n \"acc_norm\": 0.21818181818181817,\n \"acc_norm_stderr\": 0.03225078108306289\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.3181818181818182,\n \"acc_stderr\": 0.0331847733384533,\n \"acc_norm\": 0.3181818181818182,\n \"acc_norm_stderr\": 0.0331847733384533\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.20207253886010362,\n \"acc_stderr\": 0.02897908979429673,\n \"acc_norm\": 0.20207253886010362,\n \"acc_norm_stderr\": 0.02897908979429673\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.24871794871794872,\n \"acc_stderr\": 0.0219169577092138,\n \"acc_norm\": 0.24871794871794872,\n \"acc_norm_stderr\": 0.0219169577092138\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.2777777777777778,\n \"acc_stderr\": 0.027309140588230186,\n \"acc_norm\": 0.2777777777777778,\n \"acc_norm_stderr\": 0.027309140588230186\n },\n 
\"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.3277310924369748,\n \"acc_stderr\": 0.03048991141767323,\n \"acc_norm\": 0.3277310924369748,\n \"acc_norm_stderr\": 0.03048991141767323\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.32450331125827814,\n \"acc_stderr\": 0.03822746937658753,\n \"acc_norm\": 0.32450331125827814,\n \"acc_norm_stderr\": 0.03822746937658753\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.24954128440366974,\n \"acc_stderr\": 0.018553897629501617,\n \"acc_norm\": 0.24954128440366974,\n \"acc_norm_stderr\": 0.018553897629501617\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.4722222222222222,\n \"acc_stderr\": 0.0340470532865388,\n \"acc_norm\": 0.4722222222222222,\n \"acc_norm_stderr\": 0.0340470532865388\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.2696078431372549,\n \"acc_stderr\": 0.03114557065948678,\n \"acc_norm\": 0.2696078431372549,\n \"acc_norm_stderr\": 0.03114557065948678\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.21518987341772153,\n \"acc_stderr\": 0.026750826994676152,\n \"acc_norm\": 0.21518987341772153,\n \"acc_norm_stderr\": 0.026750826994676152\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.19730941704035873,\n \"acc_stderr\": 0.02670985334496796,\n \"acc_norm\": 0.19730941704035873,\n \"acc_norm_stderr\": 0.02670985334496796\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.2824427480916031,\n \"acc_stderr\": 0.03948406125768361,\n \"acc_norm\": 0.2824427480916031,\n \"acc_norm_stderr\": 0.03948406125768361\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.23140495867768596,\n \"acc_stderr\": 0.03849856098794088,\n \"acc_norm\": 0.23140495867768596,\n \"acc_norm_stderr\": 0.03849856098794088\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.2222222222222222,\n \"acc_stderr\": 
0.040191074725573483,\n \"acc_norm\": 0.2222222222222222,\n \"acc_norm_stderr\": 0.040191074725573483\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.294478527607362,\n \"acc_stderr\": 0.03581165790474082,\n \"acc_norm\": 0.294478527607362,\n \"acc_norm_stderr\": 0.03581165790474082\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.2767857142857143,\n \"acc_stderr\": 0.04246624336697624,\n \"acc_norm\": 0.2767857142857143,\n \"acc_norm_stderr\": 0.04246624336697624\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.20388349514563106,\n \"acc_stderr\": 0.03989139859531772,\n \"acc_norm\": 0.20388349514563106,\n \"acc_norm_stderr\": 0.03989139859531772\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.19658119658119658,\n \"acc_stderr\": 0.02603538609895129,\n \"acc_norm\": 0.19658119658119658,\n \"acc_norm_stderr\": 0.02603538609895129\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.28,\n \"acc_stderr\": 0.04512608598542127,\n \"acc_norm\": 0.28,\n \"acc_norm_stderr\": 0.04512608598542127\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.2720306513409962,\n \"acc_stderr\": 0.015913367447500524,\n \"acc_norm\": 0.2720306513409962,\n \"acc_norm_stderr\": 0.015913367447500524\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.20520231213872833,\n \"acc_stderr\": 0.021742519835276287,\n \"acc_norm\": 0.20520231213872833,\n \"acc_norm_stderr\": 0.021742519835276287\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.24692737430167597,\n \"acc_stderr\": 0.014422292204808835,\n \"acc_norm\": 0.24692737430167597,\n \"acc_norm_stderr\": 0.014422292204808835\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.2647058823529412,\n \"acc_stderr\": 0.02526169121972948,\n \"acc_norm\": 0.2647058823529412,\n \"acc_norm_stderr\": 0.02526169121972948\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.19292604501607716,\n \"acc_stderr\": 
0.022411516780911363,\n \"acc_norm\": 0.19292604501607716,\n \"acc_norm_stderr\": 0.022411516780911363\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.23765432098765432,\n \"acc_stderr\": 0.023683591837008557,\n \"acc_norm\": 0.23765432098765432,\n \"acc_norm_stderr\": 0.023683591837008557\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.24822695035460993,\n \"acc_stderr\": 0.025770015644290382,\n \"acc_norm\": 0.24822695035460993,\n \"acc_norm_stderr\": 0.025770015644290382\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.24641460234680573,\n \"acc_stderr\": 0.011005971399927235,\n \"acc_norm\": 0.24641460234680573,\n \"acc_norm_stderr\": 0.011005971399927235\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.4264705882352941,\n \"acc_stderr\": 0.03004261583271486,\n \"acc_norm\": 0.4264705882352941,\n \"acc_norm_stderr\": 0.03004261583271486\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.2630718954248366,\n \"acc_stderr\": 0.01781267654232065,\n \"acc_norm\": 0.2630718954248366,\n \"acc_norm_stderr\": 0.01781267654232065\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.17272727272727273,\n \"acc_stderr\": 0.03620691833929218,\n \"acc_norm\": 0.17272727272727273,\n \"acc_norm_stderr\": 0.03620691833929218\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.2,\n \"acc_stderr\": 0.025607375986579153,\n \"acc_norm\": 0.2,\n \"acc_norm_stderr\": 0.025607375986579153\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.22885572139303484,\n \"acc_stderr\": 0.029705284056772436,\n \"acc_norm\": 0.22885572139303484,\n \"acc_norm_stderr\": 0.029705284056772436\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.24,\n \"acc_stderr\": 0.04292346959909283,\n \"acc_norm\": 0.24,\n \"acc_norm_stderr\": 0.04292346959909283\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.20481927710843373,\n 
\"acc_stderr\": 0.03141784291663925,\n \"acc_norm\": 0.20481927710843373,\n \"acc_norm_stderr\": 0.03141784291663925\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.2222222222222222,\n \"acc_stderr\": 0.03188578017686398,\n \"acc_norm\": 0.2222222222222222,\n \"acc_norm_stderr\": 0.03188578017686398\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.25091799265605874,\n \"mc1_stderr\": 0.015176985027707693,\n \"mc2\": 0.4600301977928377,\n \"mc2_stderr\": 0.015417429651937565\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.5082872928176796,\n \"acc_stderr\": 0.014050555322824194\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.0,\n \"acc_stderr\": 0.0\n }\n}\n```", "repo_url": "https://huggingface.co/kenhktsui/nano-phi-115M-v0.1", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2024_02_09T13_21_05.777292", "path": ["**/details_harness|arc:challenge|25_2024-02-09T13-21-05.777292.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2024-02-09T13-21-05.777292.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2024_02_09T13_21_05.777292", "path": ["**/details_harness|gsm8k|5_2024-02-09T13-21-05.777292.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2024-02-09T13-21-05.777292.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2024_02_09T13_21_05.777292", "path": ["**/details_harness|hellaswag|10_2024-02-09T13-21-05.777292.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2024-02-09T13-21-05.777292.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2024_02_09T13_21_05.777292", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-09T13-21-05.777292.parquet", 
"**/details_harness|hendrycksTest-anatomy|5_2024-02-09T13-21-05.777292.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-02-09T13-21-05.777292.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-02-09T13-21-05.777292.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-09T13-21-05.777292.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-02-09T13-21-05.777292.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-02-09T13-21-05.777292.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-02-09T13-21-05.777292.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-02-09T13-21-05.777292.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-02-09T13-21-05.777292.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-02-09T13-21-05.777292.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-02-09T13-21-05.777292.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-09T13-21-05.777292.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-02-09T13-21-05.777292.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-09T13-21-05.777292.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-09T13-21-05.777292.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-02-09T13-21-05.777292.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-02-09T13-21-05.777292.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-02-09T13-21-05.777292.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-09T13-21-05.777292.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-09T13-21-05.777292.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-09T13-21-05.777292.parquet", 
"**/details_harness|hendrycksTest-high_school_geography|5_2024-02-09T13-21-05.777292.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-09T13-21-05.777292.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-09T13-21-05.777292.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-09T13-21-05.777292.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-09T13-21-05.777292.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-02-09T13-21-05.777292.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-09T13-21-05.777292.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-09T13-21-05.777292.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-09T13-21-05.777292.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-09T13-21-05.777292.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-02-09T13-21-05.777292.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-02-09T13-21-05.777292.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-02-09T13-21-05.777292.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-02-09T13-21-05.777292.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-09T13-21-05.777292.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-02-09T13-21-05.777292.parquet", "**/details_harness|hendrycksTest-management|5_2024-02-09T13-21-05.777292.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-02-09T13-21-05.777292.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-02-09T13-21-05.777292.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-02-09T13-21-05.777292.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-02-09T13-21-05.777292.parquet", 
"**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-09T13-21-05.777292.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-02-09T13-21-05.777292.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-02-09T13-21-05.777292.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-02-09T13-21-05.777292.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-02-09T13-21-05.777292.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-02-09T13-21-05.777292.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-02-09T13-21-05.777292.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-02-09T13-21-05.777292.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-02-09T13-21-05.777292.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-02-09T13-21-05.777292.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-02-09T13-21-05.777292.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-09T13-21-05.777292.parquet", "**/details_harness|hendrycksTest-virology|5_2024-02-09T13-21-05.777292.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-02-09T13-21-05.777292.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-09T13-21-05.777292.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-02-09T13-21-05.777292.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-02-09T13-21-05.777292.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-02-09T13-21-05.777292.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-09T13-21-05.777292.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-02-09T13-21-05.777292.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-02-09T13-21-05.777292.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-02-09T13-21-05.777292.parquet", 
"**/details_harness|hendrycksTest-college_mathematics|5_2024-02-09T13-21-05.777292.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-02-09T13-21-05.777292.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-02-09T13-21-05.777292.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-02-09T13-21-05.777292.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-09T13-21-05.777292.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-02-09T13-21-05.777292.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-09T13-21-05.777292.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-09T13-21-05.777292.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-02-09T13-21-05.777292.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-02-09T13-21-05.777292.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-02-09T13-21-05.777292.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-09T13-21-05.777292.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-09T13-21-05.777292.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-09T13-21-05.777292.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-02-09T13-21-05.777292.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-09T13-21-05.777292.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-09T13-21-05.777292.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-09T13-21-05.777292.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-09T13-21-05.777292.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-02-09T13-21-05.777292.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-09T13-21-05.777292.parquet", 
"**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-09T13-21-05.777292.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-09T13-21-05.777292.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-09T13-21-05.777292.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-02-09T13-21-05.777292.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-02-09T13-21-05.777292.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-02-09T13-21-05.777292.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-02-09T13-21-05.777292.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-09T13-21-05.777292.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-02-09T13-21-05.777292.parquet", "**/details_harness|hendrycksTest-management|5_2024-02-09T13-21-05.777292.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-02-09T13-21-05.777292.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-02-09T13-21-05.777292.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-02-09T13-21-05.777292.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-02-09T13-21-05.777292.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-09T13-21-05.777292.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-02-09T13-21-05.777292.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-02-09T13-21-05.777292.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-02-09T13-21-05.777292.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-02-09T13-21-05.777292.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-02-09T13-21-05.777292.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-02-09T13-21-05.777292.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-02-09T13-21-05.777292.parquet", 
"**/details_harness|hendrycksTest-public_relations|5_2024-02-09T13-21-05.777292.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-02-09T13-21-05.777292.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-02-09T13-21-05.777292.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-09T13-21-05.777292.parquet", "**/details_harness|hendrycksTest-virology|5_2024-02-09T13-21-05.777292.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-02-09T13-21-05.777292.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2024_02_09T13_21_05.777292", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-09T13-21-05.777292.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-09T13-21-05.777292.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2024_02_09T13_21_05.777292", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-02-09T13-21-05.777292.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-02-09T13-21-05.777292.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2024_02_09T13_21_05.777292", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-02-09T13-21-05.777292.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-02-09T13-21-05.777292.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2024_02_09T13_21_05.777292", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-02-09T13-21-05.777292.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-02-09T13-21-05.777292.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2024_02_09T13_21_05.777292", "path": 
["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-09T13-21-05.777292.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-09T13-21-05.777292.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2024_02_09T13_21_05.777292", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-02-09T13-21-05.777292.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-02-09T13-21-05.777292.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2024_02_09T13_21_05.777292", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-02-09T13-21-05.777292.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-02-09T13-21-05.777292.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2024_02_09T13_21_05.777292", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-02-09T13-21-05.777292.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-02-09T13-21-05.777292.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2024_02_09T13_21_05.777292", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-02-09T13-21-05.777292.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-02-09T13-21-05.777292.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2024_02_09T13_21_05.777292", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-02-09T13-21-05.777292.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-02-09T13-21-05.777292.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2024_02_09T13_21_05.777292", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-02-09T13-21-05.777292.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-02-09T13-21-05.777292.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2024_02_09T13_21_05.777292", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-02-09T13-21-05.777292.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-02-09T13-21-05.777292.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2024_02_09T13_21_05.777292", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-09T13-21-05.777292.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-09T13-21-05.777292.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2024_02_09T13_21_05.777292", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-02-09T13-21-05.777292.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-02-09T13-21-05.777292.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2024_02_09T13_21_05.777292", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-09T13-21-05.777292.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-09T13-21-05.777292.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2024_02_09T13_21_05.777292", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-09T13-21-05.777292.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-09T13-21-05.777292.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2024_02_09T13_21_05.777292", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-02-09T13-21-05.777292.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-02-09T13-21-05.777292.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2024_02_09T13_21_05.777292", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-02-09T13-21-05.777292.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-02-09T13-21-05.777292.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2024_02_09T13_21_05.777292", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-02-09T13-21-05.777292.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-02-09T13-21-05.777292.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2024_02_09T13_21_05.777292", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-09T13-21-05.777292.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-09T13-21-05.777292.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2024_02_09T13_21_05.777292", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-09T13-21-05.777292.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-09T13-21-05.777292.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2024_02_09T13_21_05.777292", "path": 
["**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-09T13-21-05.777292.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-09T13-21-05.777292.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2024_02_09T13_21_05.777292", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-02-09T13-21-05.777292.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-02-09T13-21-05.777292.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2024_02_09T13_21_05.777292", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-09T13-21-05.777292.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-09T13-21-05.777292.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2024_02_09T13_21_05.777292", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-09T13-21-05.777292.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-09T13-21-05.777292.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2024_02_09T13_21_05.777292", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-09T13-21-05.777292.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-09T13-21-05.777292.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2024_02_09T13_21_05.777292", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-09T13-21-05.777292.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-09T13-21-05.777292.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2024_02_09T13_21_05.777292", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-02-09T13-21-05.777292.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-02-09T13-21-05.777292.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2024_02_09T13_21_05.777292", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-09T13-21-05.777292.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-09T13-21-05.777292.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2024_02_09T13_21_05.777292", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-09T13-21-05.777292.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-09T13-21-05.777292.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2024_02_09T13_21_05.777292", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-09T13-21-05.777292.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-09T13-21-05.777292.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2024_02_09T13_21_05.777292", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-09T13-21-05.777292.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-09T13-21-05.777292.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2024_02_09T13_21_05.777292", 
"path": ["**/details_harness|hendrycksTest-human_aging|5_2024-02-09T13-21-05.777292.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-02-09T13-21-05.777292.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2024_02_09T13_21_05.777292", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-02-09T13-21-05.777292.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-02-09T13-21-05.777292.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2024_02_09T13_21_05.777292", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-02-09T13-21-05.777292.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-02-09T13-21-05.777292.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2024_02_09T13_21_05.777292", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-02-09T13-21-05.777292.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-02-09T13-21-05.777292.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2024_02_09T13_21_05.777292", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-09T13-21-05.777292.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-09T13-21-05.777292.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2024_02_09T13_21_05.777292", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-02-09T13-21-05.777292.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-02-09T13-21-05.777292.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": 
"2024_02_09T13_21_05.777292", "path": ["**/details_harness|hendrycksTest-management|5_2024-02-09T13-21-05.777292.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2024-02-09T13-21-05.777292.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2024_02_09T13_21_05.777292", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-02-09T13-21-05.777292.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-02-09T13-21-05.777292.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2024_02_09T13_21_05.777292", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-02-09T13-21-05.777292.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-02-09T13-21-05.777292.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2024_02_09T13_21_05.777292", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-02-09T13-21-05.777292.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-02-09T13-21-05.777292.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2024_02_09T13_21_05.777292", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-02-09T13-21-05.777292.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-02-09T13-21-05.777292.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2024_02_09T13_21_05.777292", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-09T13-21-05.777292.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-09T13-21-05.777292.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": 
"2024_02_09T13_21_05.777292", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-02-09T13-21-05.777292.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-02-09T13-21-05.777292.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2024_02_09T13_21_05.777292", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-02-09T13-21-05.777292.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-02-09T13-21-05.777292.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2024_02_09T13_21_05.777292", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-02-09T13-21-05.777292.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-02-09T13-21-05.777292.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2024_02_09T13_21_05.777292", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-02-09T13-21-05.777292.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-02-09T13-21-05.777292.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2024_02_09T13_21_05.777292", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-02-09T13-21-05.777292.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-02-09T13-21-05.777292.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2024_02_09T13_21_05.777292", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-02-09T13-21-05.777292.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-02-09T13-21-05.777292.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2024_02_09T13_21_05.777292", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-02-09T13-21-05.777292.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-02-09T13-21-05.777292.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2024_02_09T13_21_05.777292", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-02-09T13-21-05.777292.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-02-09T13-21-05.777292.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2024_02_09T13_21_05.777292", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-02-09T13-21-05.777292.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-02-09T13-21-05.777292.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2024_02_09T13_21_05.777292", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-02-09T13-21-05.777292.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-02-09T13-21-05.777292.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2024_02_09T13_21_05.777292", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-09T13-21-05.777292.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-09T13-21-05.777292.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2024_02_09T13_21_05.777292", "path": ["**/details_harness|hendrycksTest-virology|5_2024-02-09T13-21-05.777292.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2024-02-09T13-21-05.777292.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2024_02_09T13_21_05.777292", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-02-09T13-21-05.777292.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-02-09T13-21-05.777292.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2024_02_09T13_21_05.777292", "path": ["**/details_harness|truthfulqa:mc|0_2024-02-09T13-21-05.777292.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2024-02-09T13-21-05.777292.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2024_02_09T13_21_05.777292", "path": ["**/details_harness|winogrande|5_2024-02-09T13-21-05.777292.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2024-02-09T13-21-05.777292.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2024_02_09T13_21_05.777292", "path": ["results_2024-02-09T13-21-05.777292.parquet"]}, {"split": "latest", "path": ["results_2024-02-09T13-21-05.777292.parquet"]}]}]}
2024-02-09T13:22:49+00:00
[]
[]
TAGS #region-us
# Dataset Card for Evaluation run of kenhktsui/nano-phi-115M-v0.1 Dataset automatically created during the evaluation run of model kenhktsui/nano-phi-115M-v0.1 on the Open LLM Leaderboard. The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard). To load the details from a run, you can for instance do the following: ## Latest results These are the latest results from run 2024-02-09T13:21:05.777292 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval): ## Dataset Details ### Dataset Description - Curated by: - Funded by [optional]: - Shared by [optional]: - Language(s) (NLP): - License: ### Dataset Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Out-of-Scope Use ## Dataset Structure ## Dataset Creation ### Curation Rationale ### Source Data #### Data Collection and Processing #### Who are the source data producers? ### Annotations [optional] #### Annotation process #### Who are the annotators? #### Personal and Sensitive Information ## Bias, Risks, and Limitations ### Recommendations Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Dataset Card Authors [optional] ## Dataset Card Contact
[ "# Dataset Card for Evaluation run of kenhktsui/nano-phi-115M-v0.1\n\n\n\nDataset automatically created during the evaluation run of model kenhktsui/nano-phi-115M-v0.1 on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2024-02-09T13:21:05.777292(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. 
More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Evaluation run of kenhktsui/nano-phi-115M-v0.1\n\n\n\nDataset automatically created during the evaluation run of model kenhktsui/nano-phi-115M-v0.1 on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2024-02-09T13:21:05.777292(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. 
More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
4a9731217921bc476a0f03544f11f22ae4903fa5
1M OpenAI Embeddings: text-embedding-3-large 1536 dimensions
- Created: February 2024.
- Text used for Embedding: title (string) + text (string)
- Embedding Model: OpenAI text-embedding-3-large
- This dataset was generated from the first 1M entries of https://huggingface.co/datasets/BeIR/dbpedia-entity, extracted by @KShivendu_ [here](https://huggingface.co/datasets/KShivendu/dbpedia-entities-openai-1M)
Qdrant/dbpedia-entities-openai3-text-embedding-3-large-1536-1M
[ "task_categories:feature-extraction", "size_categories:1M<n<10M", "language:en", "license:mit", "region:us" ]
2024-02-09T13:31:30+00:00
{"language": ["en"], "license": "mit", "size_categories": ["1M<n<10M"], "task_categories": ["feature-extraction"], "dataset_info": {"features": [{"name": "_id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "text-embedding-3-large-1536-embedding", "sequence": "float64"}], "splits": [{"name": "train", "num_bytes": 12679725776, "num_examples": 1000000}], "download_size": 9551862565, "dataset_size": 12679725776}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2024-02-09T13:50:20+00:00
[]
[ "en" ]
TAGS #task_categories-feature-extraction #size_categories-1M<n<10M #language-English #license-mit #region-us
1M OpenAI Embeddings: text-embedding-3-large 1536 dimensions
- Created: February 2024.
- Text used for Embedding: title (string) + text (string)
- Embedding Model: OpenAI text-embedding-3-large
- This dataset was generated from the first 1M entries of URL extracted by @KShivendu_ here
[]
[ "TAGS\n#task_categories-feature-extraction #size_categories-1M<n<10M #language-English #license-mit #region-us \n" ]
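Per the features listed in the metadata above, each row of the embeddings dataset pairs its `title` and `text` with a 1536-dimensional float vector in the `text-embedding-3-large-1536-embedding` column. A minimal, self-contained sketch of how such vectors are typically compared — cosine similarity against a query vector; the 4-d vectors here are toy stand-ins for the real 1536-d embeddings:

```python
import math

def cosine_similarity(a, b):
    # Vectors from the same embedding model are comparable by cosine similarity.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 4-d stand-ins for the 1536-d "text-embedding-3-large-1536-embedding" column.
query = [1.0, 0.0, 0.0, 0.0]
docs = {
    "same direction": [2.0, 0.0, 0.0, 0.0],   # similarity 1.0
    "orthogonal":     [0.0, 3.0, 0.0, 0.0],   # similarity 0.0
}
for name, vec in docs.items():
    print(name, cosine_similarity(query, vec))
```

With the real dataset, `vec` would come from the embedding column of each row, and `query` from embedding the query text with the same model.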
f37603c619ef8ca0c19155e31161d60877342c10
# Dataset Card for Evaluation run of splm/zephyr-7b-sft-full-spin-peft-iter0 <!-- Provide a quick summary of the dataset. --> Dataset automatically created during the evaluation run of model [splm/zephyr-7b-sft-full-spin-peft-iter0](https://huggingface.co/splm/zephyr-7b-sft-full-spin-peft-iter0) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)). To load the details from a run, you can for instance do the following: ```python from datasets import load_dataset data = load_dataset("open-llm-leaderboard/details_splm__zephyr-7b-sft-full-spin-peft-iter0", "harness_winogrande_5", split="train") ``` ## Latest results These are the [latest results from run 2024-02-09T13:37:48.031125](https://huggingface.co/datasets/open-llm-leaderboard/details_splm__zephyr-7b-sft-full-spin-peft-iter0/blob/main/results_2024-02-09T13-37-48.031125.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You can find each in the results and the "latest" split for each eval):

```python
{
    "all": {
        "acc": 0.5991555130658271,
        "acc_stderr": 0.03303119108439422,
        "acc_norm": 0.6056959434349909,
        "acc_norm_stderr": 0.03371990380778389,
        "mc1": 0.27906976744186046,
        "mc1_stderr": 0.0157021070906279,
        "mc2": 0.41792638643928276,
        "mc2_stderr": 0.014677325519327572
    },
    "harness|arc:challenge|25": {
        "acc": 0.5452218430034129,
        "acc_stderr": 0.014551507060836355,
        "acc_norm": 0.5793515358361775,
        "acc_norm_stderr": 0.0144262112525084
    },
    "harness|hellaswag|10": {
        "acc": 0.6090420235012945,
        "acc_stderr": 0.0048696773308012945,
        "acc_norm": 0.8077076279625572,
        "acc_norm_stderr": 0.0039329609740080766
    },
    "harness|hendrycksTest-abstract_algebra|5": {
        "acc": 0.3,
        "acc_stderr": 0.046056618647183814,
        "acc_norm": 0.3,
        "acc_norm_stderr": 0.046056618647183814
    },
    "harness|hendrycksTest-anatomy|5": {
        "acc": 0.5777777777777777,
        "acc_stderr": 0.04266763404099582,
        "acc_norm": 0.5777777777777777,
        "acc_norm_stderr": 0.04266763404099582
    },
    "harness|hendrycksTest-astronomy|5": {
        "acc": 0.6513157894736842,
        "acc_stderr": 0.03878139888797611,
        "acc_norm": 0.6513157894736842,
        "acc_norm_stderr": 0.03878139888797611
    },
    "harness|hendrycksTest-business_ethics|5": {
        "acc": 0.52,
        "acc_stderr": 0.050211673156867795,
        "acc_norm": 0.52,
        "acc_norm_stderr": 0.050211673156867795
    },
    "harness|hendrycksTest-clinical_knowledge|5": {
        "acc": 0.6679245283018868,
        "acc_stderr": 0.02898545565233439,
        "acc_norm": 0.6679245283018868,
        "acc_norm_stderr": 0.02898545565233439
    },
    "harness|hendrycksTest-college_biology|5": {
        "acc": 0.6805555555555556,
        "acc_stderr": 0.03899073687357335,
        "acc_norm": 0.6805555555555556,
        "acc_norm_stderr": 0.03899073687357335
    },
    "harness|hendrycksTest-college_chemistry|5": {
        "acc": 0.44,
        "acc_stderr": 0.04988876515698589,
        "acc_norm": 0.44,
        "acc_norm_stderr": 0.04988876515698589
    },
    "harness|hendrycksTest-college_computer_science|5": {
        "acc": 0.49,
        "acc_stderr": 0.05024183937956912,
        "acc_norm": 0.49,
        "acc_norm_stderr": 0.05024183937956912
    },
    "harness|hendrycksTest-college_mathematics|5": {
        "acc": 0.38,
        "acc_stderr": 0.04878317312145633,
        "acc_norm": 0.38,
        "acc_norm_stderr": 0.04878317312145633
    },
    "harness|hendrycksTest-college_medicine|5": {
        "acc": 0.6242774566473989,
        "acc_stderr": 0.036928207672648664,
        "acc_norm": 0.6242774566473989,
        "acc_norm_stderr": 0.036928207672648664
    },
    "harness|hendrycksTest-college_physics|5": {
        "acc": 0.27450980392156865,
        "acc_stderr": 0.04440521906179328,
        "acc_norm": 0.27450980392156865,
        "acc_norm_stderr": 0.04440521906179328
    },
    "harness|hendrycksTest-computer_security|5": {
        "acc": 0.81,
        "acc_stderr": 0.039427724440366234,
        "acc_norm": 0.81,
        "acc_norm_stderr": 0.039427724440366234
    },
    "harness|hendrycksTest-conceptual_physics|5": {
        "acc": 0.5319148936170213,
        "acc_stderr": 0.03261936918467382,
        "acc_norm": 0.5319148936170213,
        "acc_norm_stderr": 0.03261936918467382
    },
    "harness|hendrycksTest-econometrics|5": {
        "acc": 0.40350877192982454,
        "acc_stderr": 0.04615186962583703,
        "acc_norm": 0.40350877192982454,
        "acc_norm_stderr": 0.04615186962583703
    },
    "harness|hendrycksTest-electrical_engineering|5": {
        "acc": 0.5655172413793104,
        "acc_stderr": 0.04130740879555497,
        "acc_norm": 0.5655172413793104,
        "acc_norm_stderr": 0.04130740879555497
    },
    "harness|hendrycksTest-elementary_mathematics|5": {
        "acc": 0.3941798941798942,
        "acc_stderr": 0.025167982333894143,
        "acc_norm": 0.3941798941798942,
        "acc_norm_stderr": 0.025167982333894143
    },
    "harness|hendrycksTest-formal_logic|5": {
        "acc": 0.3888888888888889,
        "acc_stderr": 0.04360314860077459,
        "acc_norm": 0.3888888888888889,
        "acc_norm_stderr": 0.04360314860077459
    },
    "harness|hendrycksTest-global_facts|5": {
        "acc": 0.31,
        "acc_stderr": 0.04648231987117316,
        "acc_norm": 0.31,
        "acc_norm_stderr": 0.04648231987117316
    },
    "harness|hendrycksTest-high_school_biology|5": {
        "acc": 0.7322580645161291,
        "acc_stderr": 0.025189006660212385,
        "acc_norm": 0.7322580645161291,
        "acc_norm_stderr": 0.025189006660212385
    },
    "harness|hendrycksTest-high_school_chemistry|5": {
        "acc": 0.5221674876847291,
        "acc_stderr": 0.03514528562175008,
        "acc_norm": 0.5221674876847291,
        "acc_norm_stderr": 0.03514528562175008
    },
    "harness|hendrycksTest-high_school_computer_science|5": {
        "acc": 0.62,
        "acc_stderr": 0.04878317312145633,
        "acc_norm": 0.62,
        "acc_norm_stderr": 0.04878317312145633
    },
    "harness|hendrycksTest-high_school_european_history|5": {
        "acc": 0.7333333333333333,
        "acc_stderr": 0.03453131801885416,
        "acc_norm": 0.7333333333333333,
        "acc_norm_stderr": 0.03453131801885416
    },
    "harness|hendrycksTest-high_school_geography|5": {
        "acc": 0.7676767676767676,
        "acc_stderr": 0.030088629490217487,
        "acc_norm": 0.7676767676767676,
        "acc_norm_stderr": 0.030088629490217487
    },
    "harness|hendrycksTest-high_school_government_and_politics|5": {
        "acc": 0.8290155440414507,
        "acc_stderr": 0.02717121368316453,
        "acc_norm": 0.8290155440414507,
        "acc_norm_stderr": 0.02717121368316453
    },
    "harness|hendrycksTest-high_school_macroeconomics|5": {
        "acc": 0.5923076923076923,
        "acc_stderr": 0.02491524398598785,
        "acc_norm": 0.5923076923076923,
        "acc_norm_stderr": 0.02491524398598785
    },
    "harness|hendrycksTest-high_school_mathematics|5": {
        "acc": 0.362962962962963,
        "acc_stderr": 0.029318203645206865,
        "acc_norm": 0.362962962962963,
        "acc_norm_stderr": 0.029318203645206865
    },
    "harness|hendrycksTest-high_school_microeconomics|5": {
        "acc": 0.6260504201680672,
        "acc_stderr": 0.03142946637883708,
        "acc_norm": 0.6260504201680672,
        "acc_norm_stderr": 0.03142946637883708
    },
    "harness|hendrycksTest-high_school_physics|5": {
        "acc": 0.32450331125827814,
        "acc_stderr": 0.03822746937658753,
        "acc_norm": 0.32450331125827814,
        "acc_norm_stderr": 0.03822746937658753
    },
    "harness|hendrycksTest-high_school_psychology|5": {
        "acc": 0.7908256880733945,
        "acc_stderr": 0.017437937173343226,
        "acc_norm": 0.7908256880733945,
        "acc_norm_stderr": 0.017437937173343226
    },
    "harness|hendrycksTest-high_school_statistics|5": {
        "acc": 0.4305555555555556,
        "acc_stderr": 0.03376922151252336,
        "acc_norm": 0.4305555555555556,
        "acc_norm_stderr": 0.03376922151252336
    },
    "harness|hendrycksTest-high_school_us_history|5": {
        "acc": 0.7450980392156863,
        "acc_stderr": 0.030587591351604246,
        "acc_norm": 0.7450980392156863,
        "acc_norm_stderr": 0.030587591351604246
    },
    "harness|hendrycksTest-high_school_world_history|5": {
        "acc": 0.729957805907173,
        "acc_stderr": 0.028900721906293433,
        "acc_norm": 0.729957805907173,
        "acc_norm_stderr": 0.028900721906293433
    },
    "harness|hendrycksTest-human_aging|5": {
        "acc": 0.6681614349775785,
        "acc_stderr": 0.03160295143776679,
        "acc_norm": 0.6681614349775785,
        "acc_norm_stderr": 0.03160295143776679
    },
    "harness|hendrycksTest-human_sexuality|5": {
        "acc": 0.7251908396946565,
        "acc_stderr": 0.03915345408847834,
        "acc_norm": 0.7251908396946565,
        "acc_norm_stderr": 0.03915345408847834
    },
    "harness|hendrycksTest-international_law|5": {
        "acc": 0.8016528925619835,
        "acc_stderr": 0.03640118271990947,
        "acc_norm": 0.8016528925619835,
        "acc_norm_stderr": 0.03640118271990947
    },
    "harness|hendrycksTest-jurisprudence|5": {
        "acc": 0.7685185185185185,
        "acc_stderr": 0.04077494709252627,
        "acc_norm": 0.7685185185185185,
        "acc_norm_stderr": 0.04077494709252627
    },
    "harness|hendrycksTest-logical_fallacies|5": {
        "acc": 0.6993865030674846,
        "acc_stderr": 0.03602511318806771,
        "acc_norm": 0.6993865030674846,
        "acc_norm_stderr": 0.03602511318806771
    },
    "harness|hendrycksTest-machine_learning|5": {
        "acc": 0.48214285714285715,
        "acc_stderr": 0.047427623612430116,
        "acc_norm": 0.48214285714285715,
        "acc_norm_stderr": 0.047427623612430116
    },
    "harness|hendrycksTest-management|5": {
        "acc": 0.7572815533980582,
        "acc_stderr": 0.04245022486384495,
        "acc_norm": 0.7572815533980582,
        "acc_norm_stderr": 0.04245022486384495
    },
    "harness|hendrycksTest-marketing|5": {
        "acc": 0.8333333333333334,
        "acc_stderr": 0.024414947304543674,
        "acc_norm": 0.8333333333333334,
        "acc_norm_stderr": 0.024414947304543674
    },
    "harness|hendrycksTest-medical_genetics|5": {
        "acc": 0.68,
        "acc_stderr": 0.046882617226215034,
        "acc_norm": 0.68,
        "acc_norm_stderr": 0.046882617226215034
    },
    "harness|hendrycksTest-miscellaneous|5": {
        "acc": 0.7816091954022989,
        "acc_stderr": 0.014774358319934493,
        "acc_norm": 0.7816091954022989,
        "acc_norm_stderr": 0.014774358319934493
    },
    "harness|hendrycksTest-moral_disputes|5": {
        "acc": 0.6820809248554913,
        "acc_stderr": 0.025070713719153176,
        "acc_norm": 0.6820809248554913,
        "acc_norm_stderr": 0.025070713719153176
    },
    "harness|hendrycksTest-moral_scenarios|5": {
        "acc": 0.358659217877095,
        "acc_stderr": 0.016040454426164467,
        "acc_norm": 0.358659217877095,
        "acc_norm_stderr": 0.016040454426164467
    },
    "harness|hendrycksTest-nutrition|5": {
        "acc": 0.6797385620915033,
        "acc_stderr": 0.026716118380156847,
        "acc_norm": 0.6797385620915033,
        "acc_norm_stderr": 0.026716118380156847
    },
    "harness|hendrycksTest-philosophy|5": {
        "acc": 0.6655948553054662,
        "acc_stderr": 0.026795422327893934,
        "acc_norm": 0.6655948553054662,
        "acc_norm_stderr": 0.026795422327893934
    },
    "harness|hendrycksTest-prehistory|5": {
        "acc": 0.6512345679012346,
        "acc_stderr": 0.02651759772446501,
        "acc_norm": 0.6512345679012346,
        "acc_norm_stderr": 0.02651759772446501
    },
    "harness|hendrycksTest-professional_accounting|5": {
        "acc": 0.4432624113475177,
        "acc_stderr": 0.029634838473766006,
        "acc_norm": 0.4432624113475177,
        "acc_norm_stderr": 0.029634838473766006
    },
    "harness|hendrycksTest-professional_law|5": {
        "acc": 0.41916558018252936,
        "acc_stderr": 0.012602244505788236,
        "acc_norm": 0.41916558018252936,
        "acc_norm_stderr": 0.012602244505788236
    },
    "harness|hendrycksTest-professional_medicine|5": {
        "acc": 0.5882352941176471,
        "acc_stderr": 0.02989616303312547,
        "acc_norm": 0.5882352941176471,
        "acc_norm_stderr": 0.02989616303312547
    },
    "harness|hendrycksTest-professional_psychology|5": {
        "acc": 0.6143790849673203,
        "acc_stderr": 0.019691459052354032,
        "acc_norm": 0.6143790849673203,
        "acc_norm_stderr": 0.019691459052354032
    },
    "harness|hendrycksTest-public_relations|5": {
        "acc": 0.6,
        "acc_stderr": 0.0469237132203465,
        "acc_norm": 0.6,
        "acc_norm_stderr": 0.0469237132203465
    },
    "harness|hendrycksTest-security_studies|5": {
        "acc": 0.6816326530612244,
        "acc_stderr": 0.029822533793982062,
        "acc_norm": 0.6816326530612244,
        "acc_norm_stderr": 0.029822533793982062
    },
    "harness|hendrycksTest-sociology|5": {
        "acc": 0.8159203980099502,
        "acc_stderr": 0.027403859410786855,
        "acc_norm": 0.8159203980099502,
        "acc_norm_stderr": 0.027403859410786855
    },
    "harness|hendrycksTest-us_foreign_policy|5": {
        "acc": 0.81,
        "acc_stderr": 0.039427724440366255,
        "acc_norm": 0.81,
        "acc_norm_stderr": 0.039427724440366255
    },
    "harness|hendrycksTest-virology|5": {
        "acc": 0.5060240963855421,
        "acc_stderr": 0.03892212195333045,
        "acc_norm": 0.5060240963855421,
        "acc_norm_stderr": 0.03892212195333045
    },
    "harness|hendrycksTest-world_religions|5": {
        "acc": 0.8187134502923976,
        "acc_stderr": 0.029547741687640038,
        "acc_norm": 0.8187134502923976,
        "acc_norm_stderr": 0.029547741687640038
    },
    "harness|truthfulqa:mc|0": {
        "mc1": 0.27906976744186046,
        "mc1_stderr": 0.0157021070906279,
        "mc2": 0.41792638643928276,
        "mc2_stderr": 0.014677325519327572
    },
    "harness|winogrande|5": {
        "acc": 0.7624309392265194,
        "acc_stderr": 0.011961298905803159
    },
    "harness|gsm8k|5": {
        "acc": 0.28278999241849884,
        "acc_stderr": 0.012405020417873619
    }
}
```

## Dataset Details

### Dataset Description

<!-- Provide a longer summary of what this dataset is. -->

- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]

### Dataset Sources [optional]

<!-- Provide the basic links for the dataset. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the dataset is intended to be used.
-->

### Direct Use

<!-- This section describes suitable use cases for the dataset. -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->

[More Information Needed]

## Dataset Structure

<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->

[More Information Needed]

## Dataset Creation

### Curation Rationale

<!-- Motivation for the creation of this dataset. -->

[More Information Needed]

### Source Data

<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->

#### Data Collection and Processing

<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->

[More Information Needed]

#### Who are the source data producers?

<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->

[More Information Needed]

### Annotations [optional]

<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->

#### Annotation process

<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->

[More Information Needed]

#### Who are the annotators?

<!-- This section describes the people or systems who created the annotations. -->

[More Information Needed]

#### Personal and Sensitive Information

<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.

## Citation [optional]

<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Dataset Card Authors [optional]

[More Information Needed]

## Dataset Card Contact

[More Information Needed]
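A note on the figures in the "Latest results" section above: the per-task `acc_stderr` values are consistent with the usual standard error of a sample proportion, sqrt(p(1 - p)/(n - 1)). A minimal sketch of that check (assuming, as an illustration, that the `abstract_algebra` subset has n = 100 test questions; the subset sizes are not stated in this card):

```python
import math

# Values reported above for harness|hendrycksTest-abstract_algebra|5
acc = 0.3
reported_stderr = 0.046056618647183814

# Sample-proportion standard error with Bessel's correction,
# under the assumption that abstract_algebra has 100 test questions.
n = 100
stderr = math.sqrt(acc * (1 - acc) / (n - 1))

print(f"{stderr:.12f}")  # ~0.046056618647
```

The computed value matches the reported one, which suggests the harness reports the standard error of the per-sample scores with Bessel's correction.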
open-llm-leaderboard/details_splm__zephyr-7b-sft-full-spin-peft-iter0
[ "region:us" ]
2024-02-09T13:40:12+00:00
{"pretty_name": "Evaluation run of splm/zephyr-7b-sft-full-spin-peft-iter0", "dataset_summary": "Dataset automatically created during the evaluation run of model [splm/zephyr-7b-sft-full-spin-peft-iter0](https://huggingface.co/splm/zephyr-7b-sft-full-spin-peft-iter0) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_splm__zephyr-7b-sft-full-spin-peft-iter0\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2024-02-09T13:37:48.031125](https://huggingface.co/datasets/open-llm-leaderboard/details_splm__zephyr-7b-sft-full-spin-peft-iter0/blob/main/results_2024-02-09T13-37-48.031125.json)(note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.5991555130658271,\n \"acc_stderr\": 0.03303119108439422,\n \"acc_norm\": 0.6056959434349909,\n \"acc_norm_stderr\": 0.03371990380778389,\n \"mc1\": 0.27906976744186046,\n \"mc1_stderr\": 0.0157021070906279,\n \"mc2\": 0.41792638643928276,\n \"mc2_stderr\": 0.014677325519327572\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.5452218430034129,\n \"acc_stderr\": 0.014551507060836355,\n \"acc_norm\": 0.5793515358361775,\n \"acc_norm_stderr\": 0.0144262112525084\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6090420235012945,\n \"acc_stderr\": 0.0048696773308012945,\n \"acc_norm\": 0.8077076279625572,\n \"acc_norm_stderr\": 0.0039329609740080766\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.3,\n \"acc_stderr\": 0.046056618647183814,\n \"acc_norm\": 0.3,\n \"acc_norm_stderr\": 0.046056618647183814\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.5777777777777777,\n \"acc_stderr\": 0.04266763404099582,\n \"acc_norm\": 0.5777777777777777,\n \"acc_norm_stderr\": 0.04266763404099582\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.6513157894736842,\n \"acc_stderr\": 0.03878139888797611,\n \"acc_norm\": 0.6513157894736842,\n \"acc_norm_stderr\": 0.03878139888797611\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.52,\n \"acc_stderr\": 0.050211673156867795,\n \"acc_norm\": 0.52,\n \"acc_norm_stderr\": 0.050211673156867795\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.6679245283018868,\n \"acc_stderr\": 0.02898545565233439,\n \"acc_norm\": 0.6679245283018868,\n \"acc_norm_stderr\": 0.02898545565233439\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.6805555555555556,\n \"acc_stderr\": 0.03899073687357335,\n \"acc_norm\": 0.6805555555555556,\n \"acc_norm_stderr\": 0.03899073687357335\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.44,\n 
\"acc_stderr\": 0.04988876515698589,\n \"acc_norm\": 0.44,\n \"acc_norm_stderr\": 0.04988876515698589\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.49,\n \"acc_stderr\": 0.05024183937956912,\n \"acc_norm\": 0.49,\n \"acc_norm_stderr\": 0.05024183937956912\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.38,\n \"acc_stderr\": 0.04878317312145633,\n \"acc_norm\": 0.38,\n \"acc_norm_stderr\": 0.04878317312145633\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6242774566473989,\n \"acc_stderr\": 0.036928207672648664,\n \"acc_norm\": 0.6242774566473989,\n \"acc_norm_stderr\": 0.036928207672648664\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.27450980392156865,\n \"acc_stderr\": 0.04440521906179328,\n \"acc_norm\": 0.27450980392156865,\n \"acc_norm_stderr\": 0.04440521906179328\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.81,\n \"acc_stderr\": 0.039427724440366234,\n \"acc_norm\": 0.81,\n \"acc_norm_stderr\": 0.039427724440366234\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.5319148936170213,\n \"acc_stderr\": 0.03261936918467382,\n \"acc_norm\": 0.5319148936170213,\n \"acc_norm_stderr\": 0.03261936918467382\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.40350877192982454,\n \"acc_stderr\": 0.04615186962583703,\n \"acc_norm\": 0.40350877192982454,\n \"acc_norm_stderr\": 0.04615186962583703\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.5655172413793104,\n \"acc_stderr\": 0.04130740879555497,\n \"acc_norm\": 0.5655172413793104,\n \"acc_norm_stderr\": 0.04130740879555497\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.3941798941798942,\n \"acc_stderr\": 0.025167982333894143,\n \"acc_norm\": 0.3941798941798942,\n \"acc_norm_stderr\": 0.025167982333894143\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.3888888888888889,\n \"acc_stderr\": 
0.04360314860077459,\n \"acc_norm\": 0.3888888888888889,\n \"acc_norm_stderr\": 0.04360314860077459\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.31,\n \"acc_stderr\": 0.04648231987117316,\n \"acc_norm\": 0.31,\n \"acc_norm_stderr\": 0.04648231987117316\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7322580645161291,\n \"acc_stderr\": 0.025189006660212385,\n \"acc_norm\": 0.7322580645161291,\n \"acc_norm_stderr\": 0.025189006660212385\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.5221674876847291,\n \"acc_stderr\": 0.03514528562175008,\n \"acc_norm\": 0.5221674876847291,\n \"acc_norm_stderr\": 0.03514528562175008\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.62,\n \"acc_stderr\": 0.04878317312145633,\n \"acc_norm\": 0.62,\n \"acc_norm_stderr\": 0.04878317312145633\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.7333333333333333,\n \"acc_stderr\": 0.03453131801885416,\n \"acc_norm\": 0.7333333333333333,\n \"acc_norm_stderr\": 0.03453131801885416\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.7676767676767676,\n \"acc_stderr\": 0.030088629490217487,\n \"acc_norm\": 0.7676767676767676,\n \"acc_norm_stderr\": 0.030088629490217487\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.8290155440414507,\n \"acc_stderr\": 0.02717121368316453,\n \"acc_norm\": 0.8290155440414507,\n \"acc_norm_stderr\": 0.02717121368316453\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.5923076923076923,\n \"acc_stderr\": 0.02491524398598785,\n \"acc_norm\": 0.5923076923076923,\n \"acc_norm_stderr\": 0.02491524398598785\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.362962962962963,\n \"acc_stderr\": 0.029318203645206865,\n \"acc_norm\": 0.362962962962963,\n \"acc_norm_stderr\": 0.029318203645206865\n },\n 
\"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.6260504201680672,\n \"acc_stderr\": 0.03142946637883708,\n \"acc_norm\": 0.6260504201680672,\n \"acc_norm_stderr\": 0.03142946637883708\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.32450331125827814,\n \"acc_stderr\": 0.03822746937658753,\n \"acc_norm\": 0.32450331125827814,\n \"acc_norm_stderr\": 0.03822746937658753\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.7908256880733945,\n \"acc_stderr\": 0.017437937173343226,\n \"acc_norm\": 0.7908256880733945,\n \"acc_norm_stderr\": 0.017437937173343226\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.4305555555555556,\n \"acc_stderr\": 0.03376922151252336,\n \"acc_norm\": 0.4305555555555556,\n \"acc_norm_stderr\": 0.03376922151252336\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.7450980392156863,\n \"acc_stderr\": 0.030587591351604246,\n \"acc_norm\": 0.7450980392156863,\n \"acc_norm_stderr\": 0.030587591351604246\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.729957805907173,\n \"acc_stderr\": 0.028900721906293433,\n \"acc_norm\": 0.729957805907173,\n \"acc_norm_stderr\": 0.028900721906293433\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6681614349775785,\n \"acc_stderr\": 0.03160295143776679,\n \"acc_norm\": 0.6681614349775785,\n \"acc_norm_stderr\": 0.03160295143776679\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.7251908396946565,\n \"acc_stderr\": 0.03915345408847834,\n \"acc_norm\": 0.7251908396946565,\n \"acc_norm_stderr\": 0.03915345408847834\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.8016528925619835,\n \"acc_stderr\": 0.03640118271990947,\n \"acc_norm\": 0.8016528925619835,\n \"acc_norm_stderr\": 0.03640118271990947\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7685185185185185,\n \"acc_stderr\": 0.04077494709252627,\n 
\"acc_norm\": 0.7685185185185185,\n \"acc_norm_stderr\": 0.04077494709252627\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.6993865030674846,\n \"acc_stderr\": 0.03602511318806771,\n \"acc_norm\": 0.6993865030674846,\n \"acc_norm_stderr\": 0.03602511318806771\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.48214285714285715,\n \"acc_stderr\": 0.047427623612430116,\n \"acc_norm\": 0.48214285714285715,\n \"acc_norm_stderr\": 0.047427623612430116\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.7572815533980582,\n \"acc_stderr\": 0.04245022486384495,\n \"acc_norm\": 0.7572815533980582,\n \"acc_norm_stderr\": 0.04245022486384495\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8333333333333334,\n \"acc_stderr\": 0.024414947304543674,\n \"acc_norm\": 0.8333333333333334,\n \"acc_norm_stderr\": 0.024414947304543674\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.68,\n \"acc_stderr\": 0.046882617226215034,\n \"acc_norm\": 0.68,\n \"acc_norm_stderr\": 0.046882617226215034\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.7816091954022989,\n \"acc_stderr\": 0.014774358319934493,\n \"acc_norm\": 0.7816091954022989,\n \"acc_norm_stderr\": 0.014774358319934493\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.6820809248554913,\n \"acc_stderr\": 0.025070713719153176,\n \"acc_norm\": 0.6820809248554913,\n \"acc_norm_stderr\": 0.025070713719153176\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.358659217877095,\n \"acc_stderr\": 0.016040454426164467,\n \"acc_norm\": 0.358659217877095,\n \"acc_norm_stderr\": 0.016040454426164467\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.6797385620915033,\n \"acc_stderr\": 0.026716118380156847,\n \"acc_norm\": 0.6797385620915033,\n \"acc_norm_stderr\": 0.026716118380156847\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.6655948553054662,\n \"acc_stderr\": 0.026795422327893934,\n 
\"acc_norm\": 0.6655948553054662,\n \"acc_norm_stderr\": 0.026795422327893934\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.6512345679012346,\n \"acc_stderr\": 0.02651759772446501,\n \"acc_norm\": 0.6512345679012346,\n \"acc_norm_stderr\": 0.02651759772446501\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.4432624113475177,\n \"acc_stderr\": 0.029634838473766006,\n \"acc_norm\": 0.4432624113475177,\n \"acc_norm_stderr\": 0.029634838473766006\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.41916558018252936,\n \"acc_stderr\": 0.012602244505788236,\n \"acc_norm\": 0.41916558018252936,\n \"acc_norm_stderr\": 0.012602244505788236\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.5882352941176471,\n \"acc_stderr\": 0.02989616303312547,\n \"acc_norm\": 0.5882352941176471,\n \"acc_norm_stderr\": 0.02989616303312547\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.6143790849673203,\n \"acc_stderr\": 0.019691459052354032,\n \"acc_norm\": 0.6143790849673203,\n \"acc_norm_stderr\": 0.019691459052354032\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6,\n \"acc_stderr\": 0.0469237132203465,\n \"acc_norm\": 0.6,\n \"acc_norm_stderr\": 0.0469237132203465\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.6816326530612244,\n \"acc_stderr\": 0.029822533793982062,\n \"acc_norm\": 0.6816326530612244,\n \"acc_norm_stderr\": 0.029822533793982062\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8159203980099502,\n \"acc_stderr\": 0.027403859410786855,\n \"acc_norm\": 0.8159203980099502,\n \"acc_norm_stderr\": 0.027403859410786855\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.81,\n \"acc_stderr\": 0.039427724440366255,\n \"acc_norm\": 0.81,\n \"acc_norm_stderr\": 0.039427724440366255\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5060240963855421,\n \"acc_stderr\": 0.03892212195333045,\n 
\"acc_norm\": 0.5060240963855421,\n \"acc_norm_stderr\": 0.03892212195333045\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.8187134502923976,\n \"acc_stderr\": 0.029547741687640038,\n \"acc_norm\": 0.8187134502923976,\n \"acc_norm_stderr\": 0.029547741687640038\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.27906976744186046,\n \"mc1_stderr\": 0.0157021070906279,\n \"mc2\": 0.41792638643928276,\n \"mc2_stderr\": 0.014677325519327572\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7624309392265194,\n \"acc_stderr\": 0.011961298905803159\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.28278999241849884,\n \"acc_stderr\": 0.012405020417873619\n }\n}\n```", "repo_url": "https://huggingface.co/splm/zephyr-7b-sft-full-spin-peft-iter0", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2024_02_09T13_37_48.031125", "path": ["**/details_harness|arc:challenge|25_2024-02-09T13-37-48.031125.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2024-02-09T13-37-48.031125.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2024_02_09T13_37_48.031125", "path": ["**/details_harness|gsm8k|5_2024-02-09T13-37-48.031125.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2024-02-09T13-37-48.031125.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2024_02_09T13_37_48.031125", "path": ["**/details_harness|hellaswag|10_2024-02-09T13-37-48.031125.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2024-02-09T13-37-48.031125.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2024_02_09T13_37_48.031125", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-09T13-37-48.031125.parquet", 
"**/details_harness|hendrycksTest-anatomy|5_2024-02-09T13-37-48.031125.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-02-09T13-37-48.031125.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-02-09T13-37-48.031125.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-09T13-37-48.031125.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-02-09T13-37-48.031125.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-02-09T13-37-48.031125.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-02-09T13-37-48.031125.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-02-09T13-37-48.031125.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-02-09T13-37-48.031125.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-02-09T13-37-48.031125.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-02-09T13-37-48.031125.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-09T13-37-48.031125.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-02-09T13-37-48.031125.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-09T13-37-48.031125.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-09T13-37-48.031125.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-02-09T13-37-48.031125.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-02-09T13-37-48.031125.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-02-09T13-37-48.031125.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-09T13-37-48.031125.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-09T13-37-48.031125.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-09T13-37-48.031125.parquet", 
"**/details_harness|hendrycksTest-high_school_geography|5_2024-02-09T13-37-48.031125.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-09T13-37-48.031125.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-09T13-37-48.031125.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-09T13-37-48.031125.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-09T13-37-48.031125.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-02-09T13-37-48.031125.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-09T13-37-48.031125.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-09T13-37-48.031125.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-09T13-37-48.031125.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-09T13-37-48.031125.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-02-09T13-37-48.031125.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-02-09T13-37-48.031125.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-02-09T13-37-48.031125.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-02-09T13-37-48.031125.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-09T13-37-48.031125.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-02-09T13-37-48.031125.parquet", "**/details_harness|hendrycksTest-management|5_2024-02-09T13-37-48.031125.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-02-09T13-37-48.031125.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-02-09T13-37-48.031125.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-02-09T13-37-48.031125.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-02-09T13-37-48.031125.parquet", 
"**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-09T13-37-48.031125.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-02-09T13-37-48.031125.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-02-09T13-37-48.031125.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-02-09T13-37-48.031125.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-02-09T13-37-48.031125.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-02-09T13-37-48.031125.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-02-09T13-37-48.031125.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-02-09T13-37-48.031125.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-02-09T13-37-48.031125.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-02-09T13-37-48.031125.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-02-09T13-37-48.031125.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-09T13-37-48.031125.parquet", "**/details_harness|hendrycksTest-virology|5_2024-02-09T13-37-48.031125.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-02-09T13-37-48.031125.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-09T13-37-48.031125.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-02-09T13-37-48.031125.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-02-09T13-37-48.031125.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-02-09T13-37-48.031125.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-09T13-37-48.031125.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-02-09T13-37-48.031125.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-02-09T13-37-48.031125.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-02-09T13-37-48.031125.parquet", 
"**/details_harness|hendrycksTest-college_mathematics|5_2024-02-09T13-37-48.031125.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-02-09T13-37-48.031125.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-02-09T13-37-48.031125.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-02-09T13-37-48.031125.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-09T13-37-48.031125.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-02-09T13-37-48.031125.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-09T13-37-48.031125.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-09T13-37-48.031125.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-02-09T13-37-48.031125.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-02-09T13-37-48.031125.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-02-09T13-37-48.031125.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-09T13-37-48.031125.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-09T13-37-48.031125.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-09T13-37-48.031125.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-02-09T13-37-48.031125.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-09T13-37-48.031125.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-09T13-37-48.031125.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-09T13-37-48.031125.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-09T13-37-48.031125.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-02-09T13-37-48.031125.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-09T13-37-48.031125.parquet", 
"**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-09T13-37-48.031125.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-09T13-37-48.031125.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-09T13-37-48.031125.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-02-09T13-37-48.031125.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-02-09T13-37-48.031125.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-02-09T13-37-48.031125.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-02-09T13-37-48.031125.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-09T13-37-48.031125.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-02-09T13-37-48.031125.parquet", "**/details_harness|hendrycksTest-management|5_2024-02-09T13-37-48.031125.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-02-09T13-37-48.031125.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-02-09T13-37-48.031125.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-02-09T13-37-48.031125.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-02-09T13-37-48.031125.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-09T13-37-48.031125.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-02-09T13-37-48.031125.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-02-09T13-37-48.031125.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-02-09T13-37-48.031125.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-02-09T13-37-48.031125.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-02-09T13-37-48.031125.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-02-09T13-37-48.031125.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-02-09T13-37-48.031125.parquet", 
"**/details_harness|hendrycksTest-public_relations|5_2024-02-09T13-37-48.031125.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-02-09T13-37-48.031125.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-02-09T13-37-48.031125.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-09T13-37-48.031125.parquet", "**/details_harness|hendrycksTest-virology|5_2024-02-09T13-37-48.031125.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-02-09T13-37-48.031125.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2024_02_09T13_37_48.031125", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-09T13-37-48.031125.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-09T13-37-48.031125.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2024_02_09T13_37_48.031125", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-02-09T13-37-48.031125.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-02-09T13-37-48.031125.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2024_02_09T13_37_48.031125", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-02-09T13-37-48.031125.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-02-09T13-37-48.031125.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2024_02_09T13_37_48.031125", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-02-09T13-37-48.031125.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-02-09T13-37-48.031125.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2024_02_09T13_37_48.031125", "path": 
["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-09T13-37-48.031125.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-09T13-37-48.031125.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2024_02_09T13_37_48.031125", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-02-09T13-37-48.031125.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-02-09T13-37-48.031125.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2024_02_09T13_37_48.031125", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-02-09T13-37-48.031125.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-02-09T13-37-48.031125.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2024_02_09T13_37_48.031125", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-02-09T13-37-48.031125.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-02-09T13-37-48.031125.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2024_02_09T13_37_48.031125", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-02-09T13-37-48.031125.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-02-09T13-37-48.031125.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2024_02_09T13_37_48.031125", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-02-09T13-37-48.031125.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-02-09T13-37-48.031125.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2024_02_09T13_37_48.031125", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-02-09T13-37-48.031125.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-02-09T13-37-48.031125.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2024_02_09T13_37_48.031125", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-02-09T13-37-48.031125.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-02-09T13-37-48.031125.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2024_02_09T13_37_48.031125", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-09T13-37-48.031125.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-09T13-37-48.031125.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2024_02_09T13_37_48.031125", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-02-09T13-37-48.031125.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-02-09T13-37-48.031125.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2024_02_09T13_37_48.031125", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-09T13-37-48.031125.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-09T13-37-48.031125.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2024_02_09T13_37_48.031125", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-09T13-37-48.031125.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-09T13-37-48.031125.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2024_02_09T13_37_48.031125", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-02-09T13-37-48.031125.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-02-09T13-37-48.031125.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2024_02_09T13_37_48.031125", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-02-09T13-37-48.031125.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-02-09T13-37-48.031125.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2024_02_09T13_37_48.031125", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-02-09T13-37-48.031125.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-02-09T13-37-48.031125.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2024_02_09T13_37_48.031125", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-09T13-37-48.031125.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-09T13-37-48.031125.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2024_02_09T13_37_48.031125", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-09T13-37-48.031125.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-09T13-37-48.031125.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2024_02_09T13_37_48.031125", "path": 
["**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-09T13-37-48.031125.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-09T13-37-48.031125.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2024_02_09T13_37_48.031125", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-02-09T13-37-48.031125.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-02-09T13-37-48.031125.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2024_02_09T13_37_48.031125", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-09T13-37-48.031125.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-09T13-37-48.031125.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2024_02_09T13_37_48.031125", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-09T13-37-48.031125.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-09T13-37-48.031125.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2024_02_09T13_37_48.031125", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-09T13-37-48.031125.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-09T13-37-48.031125.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2024_02_09T13_37_48.031125", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-09T13-37-48.031125.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-09T13-37-48.031125.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2024_02_09T13_37_48.031125", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-02-09T13-37-48.031125.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-02-09T13-37-48.031125.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2024_02_09T13_37_48.031125", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-09T13-37-48.031125.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-09T13-37-48.031125.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2024_02_09T13_37_48.031125", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-09T13-37-48.031125.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-09T13-37-48.031125.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2024_02_09T13_37_48.031125", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-09T13-37-48.031125.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-09T13-37-48.031125.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2024_02_09T13_37_48.031125", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-09T13-37-48.031125.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-09T13-37-48.031125.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2024_02_09T13_37_48.031125", 
"path": ["**/details_harness|hendrycksTest-human_aging|5_2024-02-09T13-37-48.031125.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-02-09T13-37-48.031125.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2024_02_09T13_37_48.031125", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-02-09T13-37-48.031125.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-02-09T13-37-48.031125.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2024_02_09T13_37_48.031125", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-02-09T13-37-48.031125.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-02-09T13-37-48.031125.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2024_02_09T13_37_48.031125", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-02-09T13-37-48.031125.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-02-09T13-37-48.031125.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2024_02_09T13_37_48.031125", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-09T13-37-48.031125.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-09T13-37-48.031125.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2024_02_09T13_37_48.031125", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-02-09T13-37-48.031125.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-02-09T13-37-48.031125.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": 
"2024_02_09T13_37_48.031125", "path": ["**/details_harness|hendrycksTest-management|5_2024-02-09T13-37-48.031125.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2024-02-09T13-37-48.031125.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2024_02_09T13_37_48.031125", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-02-09T13-37-48.031125.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-02-09T13-37-48.031125.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2024_02_09T13_37_48.031125", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-02-09T13-37-48.031125.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-02-09T13-37-48.031125.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2024_02_09T13_37_48.031125", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-02-09T13-37-48.031125.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-02-09T13-37-48.031125.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2024_02_09T13_37_48.031125", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-02-09T13-37-48.031125.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-02-09T13-37-48.031125.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2024_02_09T13_37_48.031125", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-09T13-37-48.031125.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-09T13-37-48.031125.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": 
"2024_02_09T13_37_48.031125", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-02-09T13-37-48.031125.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-02-09T13-37-48.031125.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2024_02_09T13_37_48.031125", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-02-09T13-37-48.031125.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-02-09T13-37-48.031125.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2024_02_09T13_37_48.031125", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-02-09T13-37-48.031125.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-02-09T13-37-48.031125.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2024_02_09T13_37_48.031125", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-02-09T13-37-48.031125.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-02-09T13-37-48.031125.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2024_02_09T13_37_48.031125", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-02-09T13-37-48.031125.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-02-09T13-37-48.031125.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2024_02_09T13_37_48.031125", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-02-09T13-37-48.031125.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-02-09T13-37-48.031125.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2024_02_09T13_37_48.031125", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-02-09T13-37-48.031125.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-02-09T13-37-48.031125.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2024_02_09T13_37_48.031125", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-02-09T13-37-48.031125.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-02-09T13-37-48.031125.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2024_02_09T13_37_48.031125", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-02-09T13-37-48.031125.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-02-09T13-37-48.031125.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2024_02_09T13_37_48.031125", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-02-09T13-37-48.031125.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-02-09T13-37-48.031125.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2024_02_09T13_37_48.031125", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-09T13-37-48.031125.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-09T13-37-48.031125.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2024_02_09T13_37_48.031125", "path": ["**/details_harness|hendrycksTest-virology|5_2024-02-09T13-37-48.031125.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2024-02-09T13-37-48.031125.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2024_02_09T13_37_48.031125", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-02-09T13-37-48.031125.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-02-09T13-37-48.031125.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2024_02_09T13_37_48.031125", "path": ["**/details_harness|truthfulqa:mc|0_2024-02-09T13-37-48.031125.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2024-02-09T13-37-48.031125.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2024_02_09T13_37_48.031125", "path": ["**/details_harness|winogrande|5_2024-02-09T13-37-48.031125.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2024-02-09T13-37-48.031125.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2024_02_09T13_37_48.031125", "path": ["results_2024-02-09T13-37-48.031125.parquet"]}, {"split": "latest", "path": ["results_2024-02-09T13-37-48.031125.parquet"]}]}]}
2024-02-09T13:40:36+00:00
[]
[]
TAGS #region-us
# Dataset Card for Evaluation run of splm/zephyr-7b-sft-full-spin-peft-iter0 Dataset automatically created during the evaluation run of model splm/zephyr-7b-sft-full-spin-peft-iter0 on the Open LLM Leaderboard. The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard). To load the details from a run, you can for instance do the following: ## Latest results These are the latest results from run 2024-02-09T13:37:48.031125 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the "latest" split for each eval): ## Dataset Details ### Dataset Description - Curated by: - Funded by [optional]: - Shared by [optional]: - Language(s) (NLP): - License: ### Dataset Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Out-of-Scope Use ## Dataset Structure ## Dataset Creation ### Curation Rationale ### Source Data #### Data Collection and Processing #### Who are the source data producers? ### Annotations [optional] #### Annotation process #### Who are the annotators? #### Personal and Sensitive Information ## Bias, Risks, and Limitations ### Recommendations Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Dataset Card Authors [optional] ## Dataset Card Contact
[ "# Dataset Card for Evaluation run of splm/zephyr-7b-sft-full-spin-peft-iter0\n\n\n\nDataset automatically created during the evaluation run of model splm/zephyr-7b-sft-full-spin-peft-iter0 on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2024-02-09T13:37:48.031125 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. 
More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Evaluation run of splm/zephyr-7b-sft-full-spin-peft-iter0\n\n\n\nDataset automatically created during the evaluation run of model splm/zephyr-7b-sft-full-spin-peft-iter0 on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2024-02-09T13:37:48.031125 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. 
More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
893e5cb90b89574cf6d6c5130b1fcbee31d2a39e
# Dataset Card for Evaluation run of splm/zephyr-7b-sft-full-spin-peft-iter1 <!-- Provide a quick summary of the dataset. --> Dataset automatically created during the evaluation run of model [splm/zephyr-7b-sft-full-spin-peft-iter1](https://huggingface.co/splm/zephyr-7b-sft-full-spin-peft-iter1) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)). To load the details from a run, you can for instance do the following: ```python from datasets import load_dataset data = load_dataset("open-llm-leaderboard/details_splm__zephyr-7b-sft-full-spin-peft-iter1", "harness_winogrande_5", split="train") ``` ## Latest results These are the [latest results from run 2024-02-09T13:43:40.129900](https://huggingface.co/datasets/open-llm-leaderboard/details_splm__zephyr-7b-sft-full-spin-peft-iter1/blob/main/results_2024-02-09T13-43-40.129900.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the "latest" split for each eval): ```python { "all": { "acc": 0.5989379231736949, "acc_stderr": 0.03303973140209912, "acc_norm": 0.6053827154272026, "acc_norm_stderr": 0.0337275433659281, "mc1": 0.28151774785801714, "mc1_stderr": 0.01574402724825605, "mc2": 0.417959572878825, "mc2_stderr": 0.014678135159441788 }, "harness|arc:challenge|25": { "acc": 0.5452218430034129, "acc_stderr": 0.014551507060836355, "acc_norm": 0.5793515358361775, "acc_norm_stderr": 0.0144262112525084 }, "harness|hellaswag|10": { "acc": 0.6090420235012945, "acc_stderr": 0.0048696773308012945, "acc_norm": 0.8078072097191794, "acc_norm_stderr": 0.0039321848438416546 }, "harness|hendrycksTest-abstract_algebra|5": { "acc": 0.3, "acc_stderr": 0.046056618647183814, "acc_norm": 0.3, "acc_norm_stderr": 0.046056618647183814 }, "harness|hendrycksTest-anatomy|5": { "acc": 0.5777777777777777, "acc_stderr": 0.04266763404099582, "acc_norm": 0.5777777777777777, "acc_norm_stderr": 0.04266763404099582 }, "harness|hendrycksTest-astronomy|5": { "acc": 0.6513157894736842, "acc_stderr": 0.03878139888797611, "acc_norm": 0.6513157894736842, "acc_norm_stderr": 0.03878139888797611 }, "harness|hendrycksTest-business_ethics|5": { "acc": 0.52, "acc_stderr": 0.050211673156867795, "acc_norm": 0.52, "acc_norm_stderr": 0.050211673156867795 }, "harness|hendrycksTest-clinical_knowledge|5": { "acc": 0.6679245283018868, "acc_stderr": 0.02898545565233439, "acc_norm": 0.6679245283018868, "acc_norm_stderr": 0.02898545565233439 }, "harness|hendrycksTest-college_biology|5": { "acc": 0.6805555555555556, "acc_stderr": 0.03899073687357335, "acc_norm": 0.6805555555555556, "acc_norm_stderr": 0.03899073687357335 }, "harness|hendrycksTest-college_chemistry|5": { "acc": 0.44, "acc_stderr": 0.04988876515698589, "acc_norm": 0.44, "acc_norm_stderr": 0.04988876515698589 }, "harness|hendrycksTest-college_computer_science|5": { "acc": 0.49, "acc_stderr": 0.05024183937956912, "acc_norm": 0.49, "acc_norm_stderr": 
0.05024183937956912 }, "harness|hendrycksTest-college_mathematics|5": { "acc": 0.38, "acc_stderr": 0.04878317312145633, "acc_norm": 0.38, "acc_norm_stderr": 0.04878317312145633 }, "harness|hendrycksTest-college_medicine|5": { "acc": 0.6242774566473989, "acc_stderr": 0.036928207672648664, "acc_norm": 0.6242774566473989, "acc_norm_stderr": 0.036928207672648664 }, "harness|hendrycksTest-college_physics|5": { "acc": 0.27450980392156865, "acc_stderr": 0.04440521906179328, "acc_norm": 0.27450980392156865, "acc_norm_stderr": 0.04440521906179328 }, "harness|hendrycksTest-computer_security|5": { "acc": 0.81, "acc_stderr": 0.039427724440366234, "acc_norm": 0.81, "acc_norm_stderr": 0.039427724440366234 }, "harness|hendrycksTest-conceptual_physics|5": { "acc": 0.5319148936170213, "acc_stderr": 0.03261936918467382, "acc_norm": 0.5319148936170213, "acc_norm_stderr": 0.03261936918467382 }, "harness|hendrycksTest-econometrics|5": { "acc": 0.40350877192982454, "acc_stderr": 0.04615186962583703, "acc_norm": 0.40350877192982454, "acc_norm_stderr": 0.04615186962583703 }, "harness|hendrycksTest-electrical_engineering|5": { "acc": 0.5655172413793104, "acc_stderr": 0.04130740879555497, "acc_norm": 0.5655172413793104, "acc_norm_stderr": 0.04130740879555497 }, "harness|hendrycksTest-elementary_mathematics|5": { "acc": 0.3941798941798942, "acc_stderr": 0.025167982333894143, "acc_norm": 0.3941798941798942, "acc_norm_stderr": 0.025167982333894143 }, "harness|hendrycksTest-formal_logic|5": { "acc": 0.3888888888888889, "acc_stderr": 0.04360314860077459, "acc_norm": 0.3888888888888889, "acc_norm_stderr": 0.04360314860077459 }, "harness|hendrycksTest-global_facts|5": { "acc": 0.31, "acc_stderr": 0.04648231987117316, "acc_norm": 0.31, "acc_norm_stderr": 0.04648231987117316 }, "harness|hendrycksTest-high_school_biology|5": { "acc": 0.7322580645161291, "acc_stderr": 0.025189006660212385, "acc_norm": 0.7322580645161291, "acc_norm_stderr": 0.025189006660212385 }, 
"harness|hendrycksTest-high_school_chemistry|5": { "acc": 0.5221674876847291, "acc_stderr": 0.03514528562175008, "acc_norm": 0.5221674876847291, "acc_norm_stderr": 0.03514528562175008 }, "harness|hendrycksTest-high_school_computer_science|5": { "acc": 0.61, "acc_stderr": 0.04902071300001974, "acc_norm": 0.61, "acc_norm_stderr": 0.04902071300001974 }, "harness|hendrycksTest-high_school_european_history|5": { "acc": 0.7333333333333333, "acc_stderr": 0.03453131801885416, "acc_norm": 0.7333333333333333, "acc_norm_stderr": 0.03453131801885416 }, "harness|hendrycksTest-high_school_geography|5": { "acc": 0.7676767676767676, "acc_stderr": 0.030088629490217487, "acc_norm": 0.7676767676767676, "acc_norm_stderr": 0.030088629490217487 }, "harness|hendrycksTest-high_school_government_and_politics|5": { "acc": 0.8290155440414507, "acc_stderr": 0.02717121368316453, "acc_norm": 0.8290155440414507, "acc_norm_stderr": 0.02717121368316453 }, "harness|hendrycksTest-high_school_macroeconomics|5": { "acc": 0.5923076923076923, "acc_stderr": 0.02491524398598785, "acc_norm": 0.5923076923076923, "acc_norm_stderr": 0.02491524398598785 }, "harness|hendrycksTest-high_school_mathematics|5": { "acc": 0.362962962962963, "acc_stderr": 0.029318203645206865, "acc_norm": 0.362962962962963, "acc_norm_stderr": 0.029318203645206865 }, "harness|hendrycksTest-high_school_microeconomics|5": { "acc": 0.6260504201680672, "acc_stderr": 0.03142946637883708, "acc_norm": 0.6260504201680672, "acc_norm_stderr": 0.03142946637883708 }, "harness|hendrycksTest-high_school_physics|5": { "acc": 0.32450331125827814, "acc_stderr": 0.03822746937658753, "acc_norm": 0.32450331125827814, "acc_norm_stderr": 0.03822746937658753 }, "harness|hendrycksTest-high_school_psychology|5": { "acc": 0.7926605504587156, "acc_stderr": 0.017381415563608674, "acc_norm": 0.7926605504587156, "acc_norm_stderr": 0.017381415563608674 }, "harness|hendrycksTest-high_school_statistics|5": { "acc": 0.4351851851851852, "acc_stderr": 
0.03381200005643524, "acc_norm": 0.4351851851851852, "acc_norm_stderr": 0.03381200005643524 }, "harness|hendrycksTest-high_school_us_history|5": { "acc": 0.75, "acc_stderr": 0.03039153369274154, "acc_norm": 0.75, "acc_norm_stderr": 0.03039153369274154 }, "harness|hendrycksTest-high_school_world_history|5": { "acc": 0.729957805907173, "acc_stderr": 0.028900721906293433, "acc_norm": 0.729957805907173, "acc_norm_stderr": 0.028900721906293433 }, "harness|hendrycksTest-human_aging|5": { "acc": 0.6681614349775785, "acc_stderr": 0.03160295143776679, "acc_norm": 0.6681614349775785, "acc_norm_stderr": 0.03160295143776679 }, "harness|hendrycksTest-human_sexuality|5": { "acc": 0.7251908396946565, "acc_stderr": 0.03915345408847834, "acc_norm": 0.7251908396946565, "acc_norm_stderr": 0.03915345408847834 }, "harness|hendrycksTest-international_law|5": { "acc": 0.8016528925619835, "acc_stderr": 0.03640118271990947, "acc_norm": 0.8016528925619835, "acc_norm_stderr": 0.03640118271990947 }, "harness|hendrycksTest-jurisprudence|5": { "acc": 0.7685185185185185, "acc_stderr": 0.04077494709252627, "acc_norm": 0.7685185185185185, "acc_norm_stderr": 0.04077494709252627 }, "harness|hendrycksTest-logical_fallacies|5": { "acc": 0.6993865030674846, "acc_stderr": 0.03602511318806771, "acc_norm": 0.6993865030674846, "acc_norm_stderr": 0.03602511318806771 }, "harness|hendrycksTest-machine_learning|5": { "acc": 0.4732142857142857, "acc_stderr": 0.047389751192741546, "acc_norm": 0.4732142857142857, "acc_norm_stderr": 0.047389751192741546 }, "harness|hendrycksTest-management|5": { "acc": 0.7572815533980582, "acc_stderr": 0.04245022486384495, "acc_norm": 0.7572815533980582, "acc_norm_stderr": 0.04245022486384495 }, "harness|hendrycksTest-marketing|5": { "acc": 0.8376068376068376, "acc_stderr": 0.02416161812798774, "acc_norm": 0.8376068376068376, "acc_norm_stderr": 0.02416161812798774 }, "harness|hendrycksTest-medical_genetics|5": { "acc": 0.68, "acc_stderr": 0.046882617226215034, "acc_norm": 0.68, 
"acc_norm_stderr": 0.046882617226215034 }, "harness|hendrycksTest-miscellaneous|5": { "acc": 0.7816091954022989, "acc_stderr": 0.014774358319934493, "acc_norm": 0.7816091954022989, "acc_norm_stderr": 0.014774358319934493 }, "harness|hendrycksTest-moral_disputes|5": { "acc": 0.684971098265896, "acc_stderr": 0.02500931379006971, "acc_norm": 0.684971098265896, "acc_norm_stderr": 0.02500931379006971 }, "harness|hendrycksTest-moral_scenarios|5": { "acc": 0.3642458100558659, "acc_stderr": 0.016094338768474596, "acc_norm": 0.3642458100558659, "acc_norm_stderr": 0.016094338768474596 }, "harness|hendrycksTest-nutrition|5": { "acc": 0.6797385620915033, "acc_stderr": 0.026716118380156847, "acc_norm": 0.6797385620915033, "acc_norm_stderr": 0.026716118380156847 }, "harness|hendrycksTest-philosophy|5": { "acc": 0.6655948553054662, "acc_stderr": 0.026795422327893934, "acc_norm": 0.6655948553054662, "acc_norm_stderr": 0.026795422327893934 }, "harness|hendrycksTest-prehistory|5": { "acc": 0.6512345679012346, "acc_stderr": 0.02651759772446501, "acc_norm": 0.6512345679012346, "acc_norm_stderr": 0.02651759772446501 }, "harness|hendrycksTest-professional_accounting|5": { "acc": 0.43617021276595747, "acc_stderr": 0.02958345203628407, "acc_norm": 0.43617021276595747, "acc_norm_stderr": 0.02958345203628407 }, "harness|hendrycksTest-professional_law|5": { "acc": 0.41851368970013036, "acc_stderr": 0.012599505608336461, "acc_norm": 0.41851368970013036, "acc_norm_stderr": 0.012599505608336461 }, "harness|hendrycksTest-professional_medicine|5": { "acc": 0.5882352941176471, "acc_stderr": 0.02989616303312547, "acc_norm": 0.5882352941176471, "acc_norm_stderr": 0.02989616303312547 }, "harness|hendrycksTest-professional_psychology|5": { "acc": 0.6143790849673203, "acc_stderr": 0.019691459052354032, "acc_norm": 0.6143790849673203, "acc_norm_stderr": 0.019691459052354032 }, "harness|hendrycksTest-public_relations|5": { "acc": 0.6, "acc_stderr": 0.0469237132203465, "acc_norm": 0.6, "acc_norm_stderr": 
0.0469237132203465 }, "harness|hendrycksTest-security_studies|5": { "acc": 0.6816326530612244, "acc_stderr": 0.029822533793982062, "acc_norm": 0.6816326530612244, "acc_norm_stderr": 0.029822533793982062 }, "harness|hendrycksTest-sociology|5": { "acc": 0.8159203980099502, "acc_stderr": 0.027403859410786855, "acc_norm": 0.8159203980099502, "acc_norm_stderr": 0.027403859410786855 }, "harness|hendrycksTest-us_foreign_policy|5": { "acc": 0.8, "acc_stderr": 0.04020151261036847, "acc_norm": 0.8, "acc_norm_stderr": 0.04020151261036847 }, "harness|hendrycksTest-virology|5": { "acc": 0.5, "acc_stderr": 0.03892494720807614, "acc_norm": 0.5, "acc_norm_stderr": 0.03892494720807614 }, "harness|hendrycksTest-world_religions|5": { "acc": 0.8187134502923976, "acc_stderr": 0.029547741687640038, "acc_norm": 0.8187134502923976, "acc_norm_stderr": 0.029547741687640038 }, "harness|truthfulqa:mc|0": { "mc1": 0.28151774785801714, "mc1_stderr": 0.01574402724825605, "mc2": 0.417959572878825, "mc2_stderr": 0.014678135159441788 }, "harness|winogrande|5": { "acc": 0.7624309392265194, "acc_stderr": 0.011961298905803159 }, "harness|gsm8k|5": { "acc": 0.2880970432145565, "acc_stderr": 0.01247446973719792 } } ``` ## Dataset Details ### Dataset Description <!-- Provide a longer summary of what this dataset is. --> - **Curated by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] ### Dataset Sources [optional] <!-- Provide the basic links for the dataset. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the dataset is intended to be used. --> ### Direct Use <!-- This section describes suitable use cases for the dataset. 
--> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. --> [More Information Needed] ## Dataset Structure <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> [More Information Needed] ## Dataset Creation ### Curation Rationale <!-- Motivation for the creation of this dataset. --> [More Information Needed] ### Source Data <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). --> #### Data Collection and Processing <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. --> [More Information Needed] #### Who are the source data producers? <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. --> [More Information Needed] ### Annotations [optional] <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. --> #### Annotation process <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. --> [More Information Needed] #### Who are the annotators? <!-- This section describes the people or systems who created the annotations. 
--> [More Information Needed] #### Personal and Sensitive Information <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. ## Citation [optional] <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Dataset Card Authors [optional] [More Information Needed] ## Dataset Card Contact [More Information Needed]
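As a small, self-contained sketch of working with the aggregated numbers (the values are copied from the "Latest results" section of this card; the variable names `payload` and `headline` are illustrative only), the "all" block can be consumed with the standard `json` module:

```python
import json

# The "all" aggregate copied from the "Latest results" section of this card
# (only a subset of its fields is reproduced here).
payload = """
{
  "all": {
    "acc": 0.5989379231736949,
    "acc_stderr": 0.03303973140209912,
    "acc_norm": 0.6053827154272026,
    "acc_norm_stderr": 0.0337275433659281
  }
}
"""

results = json.loads(payload)
headline = results["all"]

# Report the normalized accuracy together with its standard error.
print(f'acc_norm = {headline["acc_norm"]:.4f} +/- {headline["acc_norm_stderr"]:.4f}')
```

The same pattern applies to any of the per-task blocks in the results file, e.g. the `harness|winogrande|5` entry.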
open-llm-leaderboard/details_splm__zephyr-7b-sft-full-spin-peft-iter1
[ "region:us" ]
2024-02-09T13:46:00+00:00
{"pretty_name": "Evaluation run of splm/zephyr-7b-sft-full-spin-peft-iter1", "dataset_summary": "Dataset automatically created during the evaluation run of model [splm/zephyr-7b-sft-full-spin-peft-iter1](https://huggingface.co/splm/zephyr-7b-sft-full-spin-peft-iter1) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_splm__zephyr-7b-sft-full-spin-peft-iter1\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2024-02-09T13:43:40.129900](https://huggingface.co/datasets/open-llm-leaderboard/details_splm__zephyr-7b-sft-full-spin-peft-iter1/blob/main/results_2024-02-09T13-43-40.129900.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.5989379231736949,\n \"acc_stderr\": 0.03303973140209912,\n \"acc_norm\": 0.6053827154272026,\n \"acc_norm_stderr\": 0.0337275433659281,\n \"mc1\": 0.28151774785801714,\n \"mc1_stderr\": 0.01574402724825605,\n \"mc2\": 0.417959572878825,\n \"mc2_stderr\": 0.014678135159441788\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.5452218430034129,\n \"acc_stderr\": 0.014551507060836355,\n \"acc_norm\": 0.5793515358361775,\n \"acc_norm_stderr\": 0.0144262112525084\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6090420235012945,\n \"acc_stderr\": 0.0048696773308012945,\n \"acc_norm\": 0.8078072097191794,\n \"acc_norm_stderr\": 0.0039321848438416546\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.3,\n \"acc_stderr\": 0.046056618647183814,\n \"acc_norm\": 0.3,\n \"acc_norm_stderr\": 0.046056618647183814\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.5777777777777777,\n \"acc_stderr\": 0.04266763404099582,\n \"acc_norm\": 0.5777777777777777,\n \"acc_norm_stderr\": 0.04266763404099582\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.6513157894736842,\n \"acc_stderr\": 0.03878139888797611,\n \"acc_norm\": 0.6513157894736842,\n \"acc_norm_stderr\": 0.03878139888797611\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.52,\n \"acc_stderr\": 0.050211673156867795,\n \"acc_norm\": 0.52,\n \"acc_norm_stderr\": 0.050211673156867795\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.6679245283018868,\n \"acc_stderr\": 0.02898545565233439,\n \"acc_norm\": 0.6679245283018868,\n \"acc_norm_stderr\": 0.02898545565233439\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.6805555555555556,\n \"acc_stderr\": 0.03899073687357335,\n \"acc_norm\": 0.6805555555555556,\n \"acc_norm_stderr\": 0.03899073687357335\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.44,\n 
\"acc_stderr\": 0.04988876515698589,\n \"acc_norm\": 0.44,\n \"acc_norm_stderr\": 0.04988876515698589\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.49,\n \"acc_stderr\": 0.05024183937956912,\n \"acc_norm\": 0.49,\n \"acc_norm_stderr\": 0.05024183937956912\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.38,\n \"acc_stderr\": 0.04878317312145633,\n \"acc_norm\": 0.38,\n \"acc_norm_stderr\": 0.04878317312145633\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6242774566473989,\n \"acc_stderr\": 0.036928207672648664,\n \"acc_norm\": 0.6242774566473989,\n \"acc_norm_stderr\": 0.036928207672648664\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.27450980392156865,\n \"acc_stderr\": 0.04440521906179328,\n \"acc_norm\": 0.27450980392156865,\n \"acc_norm_stderr\": 0.04440521906179328\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.81,\n \"acc_stderr\": 0.039427724440366234,\n \"acc_norm\": 0.81,\n \"acc_norm_stderr\": 0.039427724440366234\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.5319148936170213,\n \"acc_stderr\": 0.03261936918467382,\n \"acc_norm\": 0.5319148936170213,\n \"acc_norm_stderr\": 0.03261936918467382\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.40350877192982454,\n \"acc_stderr\": 0.04615186962583703,\n \"acc_norm\": 0.40350877192982454,\n \"acc_norm_stderr\": 0.04615186962583703\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.5655172413793104,\n \"acc_stderr\": 0.04130740879555497,\n \"acc_norm\": 0.5655172413793104,\n \"acc_norm_stderr\": 0.04130740879555497\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.3941798941798942,\n \"acc_stderr\": 0.025167982333894143,\n \"acc_norm\": 0.3941798941798942,\n \"acc_norm_stderr\": 0.025167982333894143\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.3888888888888889,\n \"acc_stderr\": 
0.04360314860077459,\n \"acc_norm\": 0.3888888888888889,\n \"acc_norm_stderr\": 0.04360314860077459\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.31,\n \"acc_stderr\": 0.04648231987117316,\n \"acc_norm\": 0.31,\n \"acc_norm_stderr\": 0.04648231987117316\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7322580645161291,\n \"acc_stderr\": 0.025189006660212385,\n \"acc_norm\": 0.7322580645161291,\n \"acc_norm_stderr\": 0.025189006660212385\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.5221674876847291,\n \"acc_stderr\": 0.03514528562175008,\n \"acc_norm\": 0.5221674876847291,\n \"acc_norm_stderr\": 0.03514528562175008\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.61,\n \"acc_stderr\": 0.04902071300001974,\n \"acc_norm\": 0.61,\n \"acc_norm_stderr\": 0.04902071300001974\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.7333333333333333,\n \"acc_stderr\": 0.03453131801885416,\n \"acc_norm\": 0.7333333333333333,\n \"acc_norm_stderr\": 0.03453131801885416\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.7676767676767676,\n \"acc_stderr\": 0.030088629490217487,\n \"acc_norm\": 0.7676767676767676,\n \"acc_norm_stderr\": 0.030088629490217487\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.8290155440414507,\n \"acc_stderr\": 0.02717121368316453,\n \"acc_norm\": 0.8290155440414507,\n \"acc_norm_stderr\": 0.02717121368316453\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.5923076923076923,\n \"acc_stderr\": 0.02491524398598785,\n \"acc_norm\": 0.5923076923076923,\n \"acc_norm_stderr\": 0.02491524398598785\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.362962962962963,\n \"acc_stderr\": 0.029318203645206865,\n \"acc_norm\": 0.362962962962963,\n \"acc_norm_stderr\": 0.029318203645206865\n },\n 
\"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.6260504201680672,\n \"acc_stderr\": 0.03142946637883708,\n \"acc_norm\": 0.6260504201680672,\n \"acc_norm_stderr\": 0.03142946637883708\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.32450331125827814,\n \"acc_stderr\": 0.03822746937658753,\n \"acc_norm\": 0.32450331125827814,\n \"acc_norm_stderr\": 0.03822746937658753\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.7926605504587156,\n \"acc_stderr\": 0.017381415563608674,\n \"acc_norm\": 0.7926605504587156,\n \"acc_norm_stderr\": 0.017381415563608674\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.4351851851851852,\n \"acc_stderr\": 0.03381200005643524,\n \"acc_norm\": 0.4351851851851852,\n \"acc_norm_stderr\": 0.03381200005643524\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.75,\n \"acc_stderr\": 0.03039153369274154,\n \"acc_norm\": 0.75,\n \"acc_norm_stderr\": 0.03039153369274154\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.729957805907173,\n \"acc_stderr\": 0.028900721906293433,\n \"acc_norm\": 0.729957805907173,\n \"acc_norm_stderr\": 0.028900721906293433\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6681614349775785,\n \"acc_stderr\": 0.03160295143776679,\n \"acc_norm\": 0.6681614349775785,\n \"acc_norm_stderr\": 0.03160295143776679\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.7251908396946565,\n \"acc_stderr\": 0.03915345408847834,\n \"acc_norm\": 0.7251908396946565,\n \"acc_norm_stderr\": 0.03915345408847834\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.8016528925619835,\n \"acc_stderr\": 0.03640118271990947,\n \"acc_norm\": 0.8016528925619835,\n \"acc_norm_stderr\": 0.03640118271990947\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7685185185185185,\n \"acc_stderr\": 0.04077494709252627,\n \"acc_norm\": 0.7685185185185185,\n 
\"acc_norm_stderr\": 0.04077494709252627\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.6993865030674846,\n \"acc_stderr\": 0.03602511318806771,\n \"acc_norm\": 0.6993865030674846,\n \"acc_norm_stderr\": 0.03602511318806771\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.4732142857142857,\n \"acc_stderr\": 0.047389751192741546,\n \"acc_norm\": 0.4732142857142857,\n \"acc_norm_stderr\": 0.047389751192741546\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.7572815533980582,\n \"acc_stderr\": 0.04245022486384495,\n \"acc_norm\": 0.7572815533980582,\n \"acc_norm_stderr\": 0.04245022486384495\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8376068376068376,\n \"acc_stderr\": 0.02416161812798774,\n \"acc_norm\": 0.8376068376068376,\n \"acc_norm_stderr\": 0.02416161812798774\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.68,\n \"acc_stderr\": 0.046882617226215034,\n \"acc_norm\": 0.68,\n \"acc_norm_stderr\": 0.046882617226215034\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.7816091954022989,\n \"acc_stderr\": 0.014774358319934493,\n \"acc_norm\": 0.7816091954022989,\n \"acc_norm_stderr\": 0.014774358319934493\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.684971098265896,\n \"acc_stderr\": 0.02500931379006971,\n \"acc_norm\": 0.684971098265896,\n \"acc_norm_stderr\": 0.02500931379006971\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.3642458100558659,\n \"acc_stderr\": 0.016094338768474596,\n \"acc_norm\": 0.3642458100558659,\n \"acc_norm_stderr\": 0.016094338768474596\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.6797385620915033,\n \"acc_stderr\": 0.026716118380156847,\n \"acc_norm\": 0.6797385620915033,\n \"acc_norm_stderr\": 0.026716118380156847\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.6655948553054662,\n \"acc_stderr\": 0.026795422327893934,\n \"acc_norm\": 0.6655948553054662,\n 
\"acc_norm_stderr\": 0.026795422327893934\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.6512345679012346,\n \"acc_stderr\": 0.02651759772446501,\n \"acc_norm\": 0.6512345679012346,\n \"acc_norm_stderr\": 0.02651759772446501\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.43617021276595747,\n \"acc_stderr\": 0.02958345203628407,\n \"acc_norm\": 0.43617021276595747,\n \"acc_norm_stderr\": 0.02958345203628407\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.41851368970013036,\n \"acc_stderr\": 0.012599505608336461,\n \"acc_norm\": 0.41851368970013036,\n \"acc_norm_stderr\": 0.012599505608336461\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.5882352941176471,\n \"acc_stderr\": 0.02989616303312547,\n \"acc_norm\": 0.5882352941176471,\n \"acc_norm_stderr\": 0.02989616303312547\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.6143790849673203,\n \"acc_stderr\": 0.019691459052354032,\n \"acc_norm\": 0.6143790849673203,\n \"acc_norm_stderr\": 0.019691459052354032\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6,\n \"acc_stderr\": 0.0469237132203465,\n \"acc_norm\": 0.6,\n \"acc_norm_stderr\": 0.0469237132203465\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.6816326530612244,\n \"acc_stderr\": 0.029822533793982062,\n \"acc_norm\": 0.6816326530612244,\n \"acc_norm_stderr\": 0.029822533793982062\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8159203980099502,\n \"acc_stderr\": 0.027403859410786855,\n \"acc_norm\": 0.8159203980099502,\n \"acc_norm_stderr\": 0.027403859410786855\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.8,\n \"acc_stderr\": 0.04020151261036847,\n \"acc_norm\": 0.8,\n \"acc_norm_stderr\": 0.04020151261036847\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5,\n \"acc_stderr\": 0.03892494720807614,\n \"acc_norm\": 0.5,\n \"acc_norm_stderr\": 
0.03892494720807614\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.8187134502923976,\n \"acc_stderr\": 0.029547741687640038,\n \"acc_norm\": 0.8187134502923976,\n \"acc_norm_stderr\": 0.029547741687640038\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.28151774785801714,\n \"mc1_stderr\": 0.01574402724825605,\n \"mc2\": 0.417959572878825,\n \"mc2_stderr\": 0.014678135159441788\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7624309392265194,\n \"acc_stderr\": 0.011961298905803159\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.2880970432145565,\n \"acc_stderr\": 0.01247446973719792\n }\n}\n```", "repo_url": "https://huggingface.co/splm/zephyr-7b-sft-full-spin-peft-iter1", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2024_02_09T13_43_40.129900", "path": ["**/details_harness|arc:challenge|25_2024-02-09T13-43-40.129900.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2024-02-09T13-43-40.129900.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2024_02_09T13_43_40.129900", "path": ["**/details_harness|gsm8k|5_2024-02-09T13-43-40.129900.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2024-02-09T13-43-40.129900.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2024_02_09T13_43_40.129900", "path": ["**/details_harness|hellaswag|10_2024-02-09T13-43-40.129900.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2024-02-09T13-43-40.129900.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2024_02_09T13_43_40.129900", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-09T13-43-40.129900.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-02-09T13-43-40.129900.parquet", 
"**/details_harness|hendrycksTest-astronomy|5_2024-02-09T13-43-40.129900.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-02-09T13-43-40.129900.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-09T13-43-40.129900.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-02-09T13-43-40.129900.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-02-09T13-43-40.129900.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-02-09T13-43-40.129900.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-02-09T13-43-40.129900.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-02-09T13-43-40.129900.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-02-09T13-43-40.129900.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-02-09T13-43-40.129900.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-09T13-43-40.129900.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-02-09T13-43-40.129900.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-09T13-43-40.129900.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-09T13-43-40.129900.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-02-09T13-43-40.129900.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-02-09T13-43-40.129900.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-02-09T13-43-40.129900.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-09T13-43-40.129900.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-09T13-43-40.129900.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-09T13-43-40.129900.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-02-09T13-43-40.129900.parquet", 
"**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-09T13-43-40.129900.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-09T13-43-40.129900.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-09T13-43-40.129900.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-09T13-43-40.129900.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-02-09T13-43-40.129900.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-09T13-43-40.129900.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-09T13-43-40.129900.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-09T13-43-40.129900.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-09T13-43-40.129900.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-02-09T13-43-40.129900.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-02-09T13-43-40.129900.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-02-09T13-43-40.129900.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-02-09T13-43-40.129900.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-09T13-43-40.129900.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-02-09T13-43-40.129900.parquet", "**/details_harness|hendrycksTest-management|5_2024-02-09T13-43-40.129900.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-02-09T13-43-40.129900.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-02-09T13-43-40.129900.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-02-09T13-43-40.129900.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-02-09T13-43-40.129900.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-09T13-43-40.129900.parquet", 
"**/details_harness|hendrycksTest-nutrition|5_2024-02-09T13-43-40.129900.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-02-09T13-43-40.129900.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-02-09T13-43-40.129900.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-02-09T13-43-40.129900.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-02-09T13-43-40.129900.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-02-09T13-43-40.129900.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-02-09T13-43-40.129900.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-02-09T13-43-40.129900.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-02-09T13-43-40.129900.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-02-09T13-43-40.129900.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-09T13-43-40.129900.parquet", "**/details_harness|hendrycksTest-virology|5_2024-02-09T13-43-40.129900.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-02-09T13-43-40.129900.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-09T13-43-40.129900.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-02-09T13-43-40.129900.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-02-09T13-43-40.129900.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-02-09T13-43-40.129900.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-09T13-43-40.129900.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-02-09T13-43-40.129900.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-02-09T13-43-40.129900.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-02-09T13-43-40.129900.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-02-09T13-43-40.129900.parquet", 
"**/details_harness|hendrycksTest-college_medicine|5_2024-02-09T13-43-40.129900.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-02-09T13-43-40.129900.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-02-09T13-43-40.129900.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-09T13-43-40.129900.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-02-09T13-43-40.129900.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-09T13-43-40.129900.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-09T13-43-40.129900.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-02-09T13-43-40.129900.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-02-09T13-43-40.129900.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-02-09T13-43-40.129900.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-09T13-43-40.129900.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-09T13-43-40.129900.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-09T13-43-40.129900.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-02-09T13-43-40.129900.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-09T13-43-40.129900.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-09T13-43-40.129900.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-09T13-43-40.129900.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-09T13-43-40.129900.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-02-09T13-43-40.129900.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-09T13-43-40.129900.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-09T13-43-40.129900.parquet", 
"**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-09T13-43-40.129900.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-09T13-43-40.129900.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-02-09T13-43-40.129900.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-02-09T13-43-40.129900.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-02-09T13-43-40.129900.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-02-09T13-43-40.129900.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-09T13-43-40.129900.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-02-09T13-43-40.129900.parquet", "**/details_harness|hendrycksTest-management|5_2024-02-09T13-43-40.129900.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-02-09T13-43-40.129900.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-02-09T13-43-40.129900.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-02-09T13-43-40.129900.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-02-09T13-43-40.129900.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-09T13-43-40.129900.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-02-09T13-43-40.129900.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-02-09T13-43-40.129900.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-02-09T13-43-40.129900.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-02-09T13-43-40.129900.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-02-09T13-43-40.129900.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-02-09T13-43-40.129900.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-02-09T13-43-40.129900.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-02-09T13-43-40.129900.parquet", 
"**/details_harness|hendrycksTest-security_studies|5_2024-02-09T13-43-40.129900.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-02-09T13-43-40.129900.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-09T13-43-40.129900.parquet", "**/details_harness|hendrycksTest-virology|5_2024-02-09T13-43-40.129900.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-02-09T13-43-40.129900.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2024_02_09T13_43_40.129900", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-09T13-43-40.129900.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-09T13-43-40.129900.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2024_02_09T13_43_40.129900", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-02-09T13-43-40.129900.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-02-09T13-43-40.129900.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2024_02_09T13_43_40.129900", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-02-09T13-43-40.129900.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-02-09T13-43-40.129900.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2024_02_09T13_43_40.129900", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-02-09T13-43-40.129900.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-02-09T13-43-40.129900.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2024_02_09T13_43_40.129900", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-09T13-43-40.129900.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-09T13-43-40.129900.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2024_02_09T13_43_40.129900", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-02-09T13-43-40.129900.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-02-09T13-43-40.129900.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2024_02_09T13_43_40.129900", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-02-09T13-43-40.129900.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-02-09T13-43-40.129900.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2024_02_09T13_43_40.129900", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-02-09T13-43-40.129900.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-02-09T13-43-40.129900.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2024_02_09T13_43_40.129900", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-02-09T13-43-40.129900.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-02-09T13-43-40.129900.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2024_02_09T13_43_40.129900", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-02-09T13-43-40.129900.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-02-09T13-43-40.129900.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2024_02_09T13_43_40.129900", "path": 
["**/details_harness|hendrycksTest-college_physics|5_2024-02-09T13-43-40.129900.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-02-09T13-43-40.129900.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2024_02_09T13_43_40.129900", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-02-09T13-43-40.129900.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-02-09T13-43-40.129900.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2024_02_09T13_43_40.129900", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-09T13-43-40.129900.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-09T13-43-40.129900.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2024_02_09T13_43_40.129900", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-02-09T13-43-40.129900.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-02-09T13-43-40.129900.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2024_02_09T13_43_40.129900", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-09T13-43-40.129900.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-09T13-43-40.129900.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2024_02_09T13_43_40.129900", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-09T13-43-40.129900.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-09T13-43-40.129900.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", 
"data_files": [{"split": "2024_02_09T13_43_40.129900", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-02-09T13-43-40.129900.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-02-09T13-43-40.129900.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2024_02_09T13_43_40.129900", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-02-09T13-43-40.129900.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-02-09T13-43-40.129900.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2024_02_09T13_43_40.129900", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-02-09T13-43-40.129900.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-02-09T13-43-40.129900.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2024_02_09T13_43_40.129900", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-09T13-43-40.129900.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-09T13-43-40.129900.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2024_02_09T13_43_40.129900", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-09T13-43-40.129900.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-09T13-43-40.129900.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2024_02_09T13_43_40.129900", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-09T13-43-40.129900.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-09T13-43-40.129900.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2024_02_09T13_43_40.129900", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-02-09T13-43-40.129900.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-02-09T13-43-40.129900.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2024_02_09T13_43_40.129900", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-09T13-43-40.129900.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-09T13-43-40.129900.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2024_02_09T13_43_40.129900", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-09T13-43-40.129900.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-09T13-43-40.129900.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2024_02_09T13_43_40.129900", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-09T13-43-40.129900.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-09T13-43-40.129900.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2024_02_09T13_43_40.129900", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-09T13-43-40.129900.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-09T13-43-40.129900.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2024_02_09T13_43_40.129900", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-02-09T13-43-40.129900.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-02-09T13-43-40.129900.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2024_02_09T13_43_40.129900", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-09T13-43-40.129900.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-09T13-43-40.129900.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2024_02_09T13_43_40.129900", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-09T13-43-40.129900.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-09T13-43-40.129900.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2024_02_09T13_43_40.129900", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-09T13-43-40.129900.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-09T13-43-40.129900.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2024_02_09T13_43_40.129900", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-09T13-43-40.129900.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-09T13-43-40.129900.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2024_02_09T13_43_40.129900", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-02-09T13-43-40.129900.parquet"]}, {"split": "latest", 
"path": ["**/details_harness|hendrycksTest-human_aging|5_2024-02-09T13-43-40.129900.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2024_02_09T13_43_40.129900", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-02-09T13-43-40.129900.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-02-09T13-43-40.129900.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2024_02_09T13_43_40.129900", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-02-09T13-43-40.129900.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-02-09T13-43-40.129900.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2024_02_09T13_43_40.129900", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-02-09T13-43-40.129900.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-02-09T13-43-40.129900.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2024_02_09T13_43_40.129900", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-09T13-43-40.129900.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-09T13-43-40.129900.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2024_02_09T13_43_40.129900", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-02-09T13-43-40.129900.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-02-09T13-43-40.129900.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2024_02_09T13_43_40.129900", "path": ["**/details_harness|hendrycksTest-management|5_2024-02-09T13-43-40.129900.parquet"]}, 
{"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2024-02-09T13-43-40.129900.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2024_02_09T13_43_40.129900", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-02-09T13-43-40.129900.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-02-09T13-43-40.129900.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2024_02_09T13_43_40.129900", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-02-09T13-43-40.129900.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-02-09T13-43-40.129900.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2024_02_09T13_43_40.129900", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-02-09T13-43-40.129900.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-02-09T13-43-40.129900.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2024_02_09T13_43_40.129900", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-02-09T13-43-40.129900.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-02-09T13-43-40.129900.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2024_02_09T13_43_40.129900", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-09T13-43-40.129900.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-09T13-43-40.129900.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2024_02_09T13_43_40.129900", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-02-09T13-43-40.129900.parquet"]}, {"split": "latest", 
"path": ["**/details_harness|hendrycksTest-nutrition|5_2024-02-09T13-43-40.129900.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2024_02_09T13_43_40.129900", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-02-09T13-43-40.129900.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-02-09T13-43-40.129900.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2024_02_09T13_43_40.129900", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-02-09T13-43-40.129900.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-02-09T13-43-40.129900.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2024_02_09T13_43_40.129900", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-02-09T13-43-40.129900.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-02-09T13-43-40.129900.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2024_02_09T13_43_40.129900", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-02-09T13-43-40.129900.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-02-09T13-43-40.129900.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2024_02_09T13_43_40.129900", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-02-09T13-43-40.129900.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-02-09T13-43-40.129900.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2024_02_09T13_43_40.129900", "path": 
["**/details_harness|hendrycksTest-professional_psychology|5_2024-02-09T13-43-40.129900.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-02-09T13-43-40.129900.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2024_02_09T13_43_40.129900", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-02-09T13-43-40.129900.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-02-09T13-43-40.129900.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2024_02_09T13_43_40.129900", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-02-09T13-43-40.129900.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-02-09T13-43-40.129900.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2024_02_09T13_43_40.129900", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-02-09T13-43-40.129900.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-02-09T13-43-40.129900.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2024_02_09T13_43_40.129900", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-09T13-43-40.129900.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-09T13-43-40.129900.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2024_02_09T13_43_40.129900", "path": ["**/details_harness|hendrycksTest-virology|5_2024-02-09T13-43-40.129900.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2024-02-09T13-43-40.129900.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2024_02_09T13_43_40.129900", 
"path": ["**/details_harness|hendrycksTest-world_religions|5_2024-02-09T13-43-40.129900.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-02-09T13-43-40.129900.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2024_02_09T13_43_40.129900", "path": ["**/details_harness|truthfulqa:mc|0_2024-02-09T13-43-40.129900.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2024-02-09T13-43-40.129900.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2024_02_09T13_43_40.129900", "path": ["**/details_harness|winogrande|5_2024-02-09T13-43-40.129900.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2024-02-09T13-43-40.129900.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2024_02_09T13_43_40.129900", "path": ["results_2024-02-09T13-43-40.129900.parquet"]}, {"split": "latest", "path": ["results_2024-02-09T13-43-40.129900.parquet"]}]}]}
2024-02-09T13:46:22+00:00
[]
[]
TAGS #region-us
# Dataset Card for Evaluation run of splm/zephyr-7b-sft-full-spin-peft-iter1 Dataset automatically created during the evaluation run of model splm/zephyr-7b-sft-full-spin-peft-iter1 on the Open LLM Leaderboard. The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard). To load the details from a run, you can for instance do the following: ## Latest results These are the latest results from run 2024-02-09T13:43:40.129900 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval): ## Dataset Details ### Dataset Description - Curated by: - Funded by [optional]: - Shared by [optional]: - Language(s) (NLP): - License: ### Dataset Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Out-of-Scope Use ## Dataset Structure ## Dataset Creation ### Curation Rationale ### Source Data #### Data Collection and Processing #### Who are the source data producers? ### Annotations [optional] #### Annotation process #### Who are the annotators? #### Personal and Sensitive Information ## Bias, Risks, and Limitations ### Recommendations Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Dataset Card Authors [optional] ## Dataset Card Contact
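The card above says "To load the details from a run, you can for instance do the following:" but the snippet itself was dropped in this export. A minimal sketch with the `datasets` library, assuming the leaderboard's usual `details_<org>__<model>` repository naming (verify the exact id on the Hugging Face Hub before relying on it):

```python
# Assumed repository id for the per-run details dataset; the Open LLM
# Leaderboard convention is "open-llm-leaderboard/details_<org>__<model>".
MODEL = "splm/zephyr-7b-sft-full-spin-peft-iter1"
DETAILS_REPO = "open-llm-leaderboard/details_" + MODEL.replace("/", "__")

def load_details(config: str = "harness_winogrande_5", split: str = "latest"):
    """Fetch the per-sample details of one evaluated task (needs network access)."""
    from datasets import load_dataset  # pip install datasets
    return load_dataset(DETAILS_REPO, config, split=split)
```

For example, `load_details("harness_gsm8k_5")` would fetch the GSM8K details, and passing `split="2024_02_09T13_43_40.129900"` pins the single recorded run instead of `"latest"`.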
[ "# Dataset Card for Evaluation run of splm/zephyr-7b-sft-full-spin-peft-iter1\n\n\n\nDataset automatically created during the evaluation run of model splm/zephyr-7b-sft-full-spin-peft-iter1 on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The \"latest\" split always points to the latest results.\n\nAn additional configuration \"results\" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2024-02-09T13:43:40.129900 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. 
More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Evaluation run of splm/zephyr-7b-sft-full-spin-peft-iter1\n\n\n\nDataset automatically created during the evaluation run of model splm/zephyr-7b-sft-full-spin-peft-iter1 on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2024-02-09T13:43:40.129900(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. 
More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
1322115809ac6df800a2c21fed3806441f3b92bf
# Dataset Card for Evaluation run of splm/zephyr-7b-sft-full-spin-peft-iter2 <!-- Provide a quick summary of the dataset. --> Dataset automatically created during the evaluation run of model [splm/zephyr-7b-sft-full-spin-peft-iter2](https://huggingface.co/splm/zephyr-7b-sft-full-spin-peft-iter2) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)). To load the details from a run, you can for instance do the following: ```python from datasets import load_dataset data = load_dataset("open-llm-leaderboard/details_splm__zephyr-7b-sft-full-spin-peft-iter2", "harness_winogrande_5", split="train") ``` ## Latest results These are the [latest results from run 2024-02-09T13:50:16.813938](https://huggingface.co/datasets/open-llm-leaderboard/details_splm__zephyr-7b-sft-full-spin-peft-iter2/blob/main/results_2024-02-09T13-50-16.813938.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. 
You can find each in the results and the "latest" split for each eval): ```python { "all": { "acc": 0.5987875444384797, "acc_stderr": 0.03303710140961152, "acc_norm": 0.6052818921495978, "acc_norm_stderr": 0.03372605811236996, "mc1": 0.28151774785801714, "mc1_stderr": 0.01574402724825605, "mc2": 0.4178934110711531, "mc2_stderr": 0.014676975153876327 }, "harness|arc:challenge|25": { "acc": 0.5435153583617748, "acc_stderr": 0.014555949760496442, "acc_norm": 0.5802047781569966, "acc_norm_stderr": 0.014422181226303028 }, "harness|hellaswag|10": { "acc": 0.6089424417446724, "acc_stderr": 0.004869899297734548, "acc_norm": 0.8077076279625572, "acc_norm_stderr": 0.0039329609740080766 }, "harness|hendrycksTest-abstract_algebra|5": { "acc": 0.3, "acc_stderr": 0.046056618647183814, "acc_norm": 0.3, "acc_norm_stderr": 0.046056618647183814 }, "harness|hendrycksTest-anatomy|5": { "acc": 0.5777777777777777, "acc_stderr": 0.04266763404099582, "acc_norm": 0.5777777777777777, "acc_norm_stderr": 0.04266763404099582 }, "harness|hendrycksTest-astronomy|5": { "acc": 0.6513157894736842, "acc_stderr": 0.03878139888797611, "acc_norm": 0.6513157894736842, "acc_norm_stderr": 0.03878139888797611 }, "harness|hendrycksTest-business_ethics|5": { "acc": 0.52, "acc_stderr": 0.050211673156867795, "acc_norm": 0.52, "acc_norm_stderr": 0.050211673156867795 }, "harness|hendrycksTest-clinical_knowledge|5": { "acc": 0.6679245283018868, "acc_stderr": 0.02898545565233439, "acc_norm": 0.6679245283018868, "acc_norm_stderr": 0.02898545565233439 }, "harness|hendrycksTest-college_biology|5": { "acc": 0.6805555555555556, "acc_stderr": 0.03899073687357335, "acc_norm": 0.6805555555555556, "acc_norm_stderr": 0.03899073687357335 }, "harness|hendrycksTest-college_chemistry|5": { "acc": 0.43, "acc_stderr": 0.049756985195624284, "acc_norm": 0.43, "acc_norm_stderr": 0.049756985195624284 }, "harness|hendrycksTest-college_computer_science|5": { "acc": 0.49, "acc_stderr": 0.05024183937956912, "acc_norm": 0.49, 
"acc_norm_stderr": 0.05024183937956912 }, "harness|hendrycksTest-college_mathematics|5": { "acc": 0.38, "acc_stderr": 0.04878317312145633, "acc_norm": 0.38, "acc_norm_stderr": 0.04878317312145633 }, "harness|hendrycksTest-college_medicine|5": { "acc": 0.6242774566473989, "acc_stderr": 0.036928207672648664, "acc_norm": 0.6242774566473989, "acc_norm_stderr": 0.036928207672648664 }, "harness|hendrycksTest-college_physics|5": { "acc": 0.27450980392156865, "acc_stderr": 0.04440521906179328, "acc_norm": 0.27450980392156865, "acc_norm_stderr": 0.04440521906179328 }, "harness|hendrycksTest-computer_security|5": { "acc": 0.81, "acc_stderr": 0.039427724440366234, "acc_norm": 0.81, "acc_norm_stderr": 0.039427724440366234 }, "harness|hendrycksTest-conceptual_physics|5": { "acc": 0.5319148936170213, "acc_stderr": 0.03261936918467382, "acc_norm": 0.5319148936170213, "acc_norm_stderr": 0.03261936918467382 }, "harness|hendrycksTest-econometrics|5": { "acc": 0.40350877192982454, "acc_stderr": 0.046151869625837026, "acc_norm": 0.40350877192982454, "acc_norm_stderr": 0.046151869625837026 }, "harness|hendrycksTest-electrical_engineering|5": { "acc": 0.5655172413793104, "acc_stderr": 0.04130740879555497, "acc_norm": 0.5655172413793104, "acc_norm_stderr": 0.04130740879555497 }, "harness|hendrycksTest-elementary_mathematics|5": { "acc": 0.3941798941798942, "acc_stderr": 0.025167982333894143, "acc_norm": 0.3941798941798942, "acc_norm_stderr": 0.025167982333894143 }, "harness|hendrycksTest-formal_logic|5": { "acc": 0.3888888888888889, "acc_stderr": 0.04360314860077459, "acc_norm": 0.3888888888888889, "acc_norm_stderr": 0.04360314860077459 }, "harness|hendrycksTest-global_facts|5": { "acc": 0.31, "acc_stderr": 0.04648231987117316, "acc_norm": 0.31, "acc_norm_stderr": 0.04648231987117316 }, "harness|hendrycksTest-high_school_biology|5": { "acc": 0.7322580645161291, "acc_stderr": 0.025189006660212385, "acc_norm": 0.7322580645161291, "acc_norm_stderr": 0.025189006660212385 }, 
"harness|hendrycksTest-high_school_chemistry|5": { "acc": 0.5221674876847291, "acc_stderr": 0.03514528562175008, "acc_norm": 0.5221674876847291, "acc_norm_stderr": 0.03514528562175008 }, "harness|hendrycksTest-high_school_computer_science|5": { "acc": 0.62, "acc_stderr": 0.04878317312145633, "acc_norm": 0.62, "acc_norm_stderr": 0.04878317312145633 }, "harness|hendrycksTest-high_school_european_history|5": { "acc": 0.7333333333333333, "acc_stderr": 0.03453131801885416, "acc_norm": 0.7333333333333333, "acc_norm_stderr": 0.03453131801885416 }, "harness|hendrycksTest-high_school_geography|5": { "acc": 0.7676767676767676, "acc_stderr": 0.030088629490217487, "acc_norm": 0.7676767676767676, "acc_norm_stderr": 0.030088629490217487 }, "harness|hendrycksTest-high_school_government_and_politics|5": { "acc": 0.8290155440414507, "acc_stderr": 0.02717121368316453, "acc_norm": 0.8290155440414507, "acc_norm_stderr": 0.02717121368316453 }, "harness|hendrycksTest-high_school_macroeconomics|5": { "acc": 0.5897435897435898, "acc_stderr": 0.0249393139069408, "acc_norm": 0.5897435897435898, "acc_norm_stderr": 0.0249393139069408 }, "harness|hendrycksTest-high_school_mathematics|5": { "acc": 0.362962962962963, "acc_stderr": 0.029318203645206865, "acc_norm": 0.362962962962963, "acc_norm_stderr": 0.029318203645206865 }, "harness|hendrycksTest-high_school_microeconomics|5": { "acc": 0.6260504201680672, "acc_stderr": 0.03142946637883708, "acc_norm": 0.6260504201680672, "acc_norm_stderr": 0.03142946637883708 }, "harness|hendrycksTest-high_school_physics|5": { "acc": 0.32450331125827814, "acc_stderr": 0.03822746937658753, "acc_norm": 0.32450331125827814, "acc_norm_stderr": 0.03822746937658753 }, "harness|hendrycksTest-high_school_psychology|5": { "acc": 0.7926605504587156, "acc_stderr": 0.017381415563608674, "acc_norm": 0.7926605504587156, "acc_norm_stderr": 0.017381415563608674 }, "harness|hendrycksTest-high_school_statistics|5": { "acc": 0.4351851851851852, "acc_stderr": 0.03381200005643524, 
"acc_norm": 0.4351851851851852, "acc_norm_stderr": 0.03381200005643524 }, "harness|hendrycksTest-high_school_us_history|5": { "acc": 0.7450980392156863, "acc_stderr": 0.030587591351604246, "acc_norm": 0.7450980392156863, "acc_norm_stderr": 0.030587591351604246 }, "harness|hendrycksTest-high_school_world_history|5": { "acc": 0.729957805907173, "acc_stderr": 0.028900721906293433, "acc_norm": 0.729957805907173, "acc_norm_stderr": 0.028900721906293433 }, "harness|hendrycksTest-human_aging|5": { "acc": 0.6681614349775785, "acc_stderr": 0.03160295143776679, "acc_norm": 0.6681614349775785, "acc_norm_stderr": 0.03160295143776679 }, "harness|hendrycksTest-human_sexuality|5": { "acc": 0.7251908396946565, "acc_stderr": 0.03915345408847834, "acc_norm": 0.7251908396946565, "acc_norm_stderr": 0.03915345408847834 }, "harness|hendrycksTest-international_law|5": { "acc": 0.8016528925619835, "acc_stderr": 0.03640118271990947, "acc_norm": 0.8016528925619835, "acc_norm_stderr": 0.03640118271990947 }, "harness|hendrycksTest-jurisprudence|5": { "acc": 0.7685185185185185, "acc_stderr": 0.04077494709252627, "acc_norm": 0.7685185185185185, "acc_norm_stderr": 0.04077494709252627 }, "harness|hendrycksTest-logical_fallacies|5": { "acc": 0.6993865030674846, "acc_stderr": 0.03602511318806771, "acc_norm": 0.6993865030674846, "acc_norm_stderr": 0.03602511318806771 }, "harness|hendrycksTest-machine_learning|5": { "acc": 0.4732142857142857, "acc_stderr": 0.047389751192741546, "acc_norm": 0.4732142857142857, "acc_norm_stderr": 0.047389751192741546 }, "harness|hendrycksTest-management|5": { "acc": 0.7572815533980582, "acc_stderr": 0.04245022486384495, "acc_norm": 0.7572815533980582, "acc_norm_stderr": 0.04245022486384495 }, "harness|hendrycksTest-marketing|5": { "acc": 0.8376068376068376, "acc_stderr": 0.02416161812798774, "acc_norm": 0.8376068376068376, "acc_norm_stderr": 0.02416161812798774 }, "harness|hendrycksTest-medical_genetics|5": { "acc": 0.68, "acc_stderr": 0.046882617226215034, "acc_norm": 
0.68, "acc_norm_stderr": 0.046882617226215034 }, "harness|hendrycksTest-miscellaneous|5": { "acc": 0.7816091954022989, "acc_stderr": 0.014774358319934493, "acc_norm": 0.7816091954022989, "acc_norm_stderr": 0.014774358319934493 }, "harness|hendrycksTest-moral_disputes|5": { "acc": 0.684971098265896, "acc_stderr": 0.02500931379006971, "acc_norm": 0.684971098265896, "acc_norm_stderr": 0.02500931379006971 }, "harness|hendrycksTest-moral_scenarios|5": { "acc": 0.36089385474860336, "acc_stderr": 0.016062290671110462, "acc_norm": 0.36089385474860336, "acc_norm_stderr": 0.016062290671110462 }, "harness|hendrycksTest-nutrition|5": { "acc": 0.6797385620915033, "acc_stderr": 0.026716118380156847, "acc_norm": 0.6797385620915033, "acc_norm_stderr": 0.026716118380156847 }, "harness|hendrycksTest-philosophy|5": { "acc": 0.6655948553054662, "acc_stderr": 0.026795422327893934, "acc_norm": 0.6655948553054662, "acc_norm_stderr": 0.026795422327893934 }, "harness|hendrycksTest-prehistory|5": { "acc": 0.6512345679012346, "acc_stderr": 0.02651759772446501, "acc_norm": 0.6512345679012346, "acc_norm_stderr": 0.02651759772446501 }, "harness|hendrycksTest-professional_accounting|5": { "acc": 0.43617021276595747, "acc_stderr": 0.02958345203628407, "acc_norm": 0.43617021276595747, "acc_norm_stderr": 0.02958345203628407 }, "harness|hendrycksTest-professional_law|5": { "acc": 0.41851368970013036, "acc_stderr": 0.012599505608336461, "acc_norm": 0.41851368970013036, "acc_norm_stderr": 0.012599505608336461 }, "harness|hendrycksTest-professional_medicine|5": { "acc": 0.5882352941176471, "acc_stderr": 0.02989616303312547, "acc_norm": 0.5882352941176471, "acc_norm_stderr": 0.02989616303312547 }, "harness|hendrycksTest-professional_psychology|5": { "acc": 0.6143790849673203, "acc_stderr": 0.019691459052354032, "acc_norm": 0.6143790849673203, "acc_norm_stderr": 0.019691459052354032 }, "harness|hendrycksTest-public_relations|5": { "acc": 0.6090909090909091, "acc_stderr": 0.046737523336702384, "acc_norm": 
0.6090909090909091, "acc_norm_stderr": 0.046737523336702384 }, "harness|hendrycksTest-security_studies|5": { "acc": 0.6816326530612244, "acc_stderr": 0.029822533793982062, "acc_norm": 0.6816326530612244, "acc_norm_stderr": 0.029822533793982062 }, "harness|hendrycksTest-sociology|5": { "acc": 0.8109452736318408, "acc_stderr": 0.02768691358801301, "acc_norm": 0.8109452736318408, "acc_norm_stderr": 0.02768691358801301 }, "harness|hendrycksTest-us_foreign_policy|5": { "acc": 0.8, "acc_stderr": 0.04020151261036847, "acc_norm": 0.8, "acc_norm_stderr": 0.04020151261036847 }, "harness|hendrycksTest-virology|5": { "acc": 0.5, "acc_stderr": 0.03892494720807614, "acc_norm": 0.5, "acc_norm_stderr": 0.03892494720807614 }, "harness|hendrycksTest-world_religions|5": { "acc": 0.8187134502923976, "acc_stderr": 0.029547741687640038, "acc_norm": 0.8187134502923976, "acc_norm_stderr": 0.029547741687640038 }, "harness|truthfulqa:mc|0": { "mc1": 0.28151774785801714, "mc1_stderr": 0.01574402724825605, "mc2": 0.4178934110711531, "mc2_stderr": 0.014676975153876327 }, "harness|winogrande|5": { "acc": 0.7647987371744278, "acc_stderr": 0.011920008163650882 }, "harness|gsm8k|5": { "acc": 0.2850644427596664, "acc_stderr": 0.01243504233490401 } } ``` ## Dataset Details ### Dataset Description <!-- Provide a longer summary of what this dataset is. --> - **Curated by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] ### Dataset Sources [optional] <!-- Provide the basic links for the dataset. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the dataset is intended to be used. --> ### Direct Use <!-- This section describes suitable use cases for the dataset. 
--> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. --> [More Information Needed] ## Dataset Structure <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> [More Information Needed] ## Dataset Creation ### Curation Rationale <!-- Motivation for the creation of this dataset. --> [More Information Needed] ### Source Data <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). --> #### Data Collection and Processing <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. --> [More Information Needed] #### Who are the source data producers? <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. --> [More Information Needed] ### Annotations [optional] <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. --> #### Annotation process <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. --> [More Information Needed] #### Who are the annotators? <!-- This section describes the people or systems who created the annotations. 
--> [More Information Needed] #### Personal and Sensitive Information <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. ## Citation [optional] <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Dataset Card Authors [optional] [More Information Needed] ## Dataset Card Contact [More Information Needed]
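The "Latest results" block in this card is keyed by harness task name (`harness|<task>|<n_shots>`). As a small illustration of working with that structure, here is a sketch that averages the 5-shot accuracies of the MMLU ("hendrycksTest") sub-tasks; the dict literal is a truncated sample built from values shown above, not the full results:

```python
def mmlu_average(results: dict) -> float:
    """Mean accuracy over the hendrycksTest (MMLU) sub-task entries."""
    accs = [
        v["acc"]
        for k, v in results.items()
        if k.startswith("harness|hendrycksTest-")
    ]
    if not accs:
        raise ValueError("no hendrycksTest entries found")
    return sum(accs) / len(accs)

# Truncated illustration using values from the card above; the real dict
# has 57 hendrycksTest entries plus the other harness tasks.
sample = {
    "harness|hendrycksTest-abstract_algebra|5": {"acc": 0.3},
    "harness|hendrycksTest-anatomy|5": {"acc": 0.5777777777777777},
    "harness|arc:challenge|25": {"acc": 0.5435153583617748},
}
print(round(mmlu_average(sample), 4))  # → 0.4389 (ARC entry is filtered out)
```

On the full results dict, the same prefix filter collects all 57 MMLU sub-tasks while skipping ARC, HellaSwag, TruthfulQA, Winogrande, and GSM8K.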
open-llm-leaderboard/details_splm__zephyr-7b-sft-full-spin-peft-iter2
[ "region:us" ]
2024-02-09T13:52:35+00:00
{"pretty_name": "Evaluation run of splm/zephyr-7b-sft-full-spin-peft-iter2", "dataset_summary": "Dataset automatically created during the evaluation run of model [splm/zephyr-7b-sft-full-spin-peft-iter2](https://huggingface.co/splm/zephyr-7b-sft-full-spin-peft-iter2) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_splm__zephyr-7b-sft-full-spin-peft-iter2\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2024-02-09T13:50:16.813938](https://huggingface.co/datasets/open-llm-leaderboard/details_splm__zephyr-7b-sft-full-spin-peft-iter2/blob/main/results_2024-02-09T13-50-16.813938.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.5987875444384797,\n \"acc_stderr\": 0.03303710140961152,\n \"acc_norm\": 0.6052818921495978,\n \"acc_norm_stderr\": 0.03372605811236996,\n \"mc1\": 0.28151774785801714,\n \"mc1_stderr\": 0.01574402724825605,\n \"mc2\": 0.4178934110711531,\n \"mc2_stderr\": 0.014676975153876327\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.5435153583617748,\n \"acc_stderr\": 0.014555949760496442,\n \"acc_norm\": 0.5802047781569966,\n \"acc_norm_stderr\": 0.014422181226303028\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6089424417446724,\n \"acc_stderr\": 0.004869899297734548,\n \"acc_norm\": 0.8077076279625572,\n \"acc_norm_stderr\": 0.0039329609740080766\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.3,\n \"acc_stderr\": 0.046056618647183814,\n \"acc_norm\": 0.3,\n \"acc_norm_stderr\": 0.046056618647183814\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.5777777777777777,\n \"acc_stderr\": 0.04266763404099582,\n \"acc_norm\": 0.5777777777777777,\n \"acc_norm_stderr\": 0.04266763404099582\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.6513157894736842,\n \"acc_stderr\": 0.03878139888797611,\n \"acc_norm\": 0.6513157894736842,\n \"acc_norm_stderr\": 0.03878139888797611\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.52,\n \"acc_stderr\": 0.050211673156867795,\n \"acc_norm\": 0.52,\n \"acc_norm_stderr\": 0.050211673156867795\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.6679245283018868,\n \"acc_stderr\": 0.02898545565233439,\n \"acc_norm\": 0.6679245283018868,\n \"acc_norm_stderr\": 0.02898545565233439\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.6805555555555556,\n \"acc_stderr\": 0.03899073687357335,\n \"acc_norm\": 0.6805555555555556,\n \"acc_norm_stderr\": 0.03899073687357335\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.43,\n 
\"acc_stderr\": 0.049756985195624284,\n \"acc_norm\": 0.43,\n \"acc_norm_stderr\": 0.049756985195624284\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.49,\n \"acc_stderr\": 0.05024183937956912,\n \"acc_norm\": 0.49,\n \"acc_norm_stderr\": 0.05024183937956912\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.38,\n \"acc_stderr\": 0.04878317312145633,\n \"acc_norm\": 0.38,\n \"acc_norm_stderr\": 0.04878317312145633\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6242774566473989,\n \"acc_stderr\": 0.036928207672648664,\n \"acc_norm\": 0.6242774566473989,\n \"acc_norm_stderr\": 0.036928207672648664\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.27450980392156865,\n \"acc_stderr\": 0.04440521906179328,\n \"acc_norm\": 0.27450980392156865,\n \"acc_norm_stderr\": 0.04440521906179328\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.81,\n \"acc_stderr\": 0.039427724440366234,\n \"acc_norm\": 0.81,\n \"acc_norm_stderr\": 0.039427724440366234\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.5319148936170213,\n \"acc_stderr\": 0.03261936918467382,\n \"acc_norm\": 0.5319148936170213,\n \"acc_norm_stderr\": 0.03261936918467382\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.40350877192982454,\n \"acc_stderr\": 0.046151869625837026,\n \"acc_norm\": 0.40350877192982454,\n \"acc_norm_stderr\": 0.046151869625837026\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.5655172413793104,\n \"acc_stderr\": 0.04130740879555497,\n \"acc_norm\": 0.5655172413793104,\n \"acc_norm_stderr\": 0.04130740879555497\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.3941798941798942,\n \"acc_stderr\": 0.025167982333894143,\n \"acc_norm\": 0.3941798941798942,\n \"acc_norm_stderr\": 0.025167982333894143\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.3888888888888889,\n \"acc_stderr\": 
0.04360314860077459,\n \"acc_norm\": 0.3888888888888889,\n \"acc_norm_stderr\": 0.04360314860077459\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.31,\n \"acc_stderr\": 0.04648231987117316,\n \"acc_norm\": 0.31,\n \"acc_norm_stderr\": 0.04648231987117316\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7322580645161291,\n \"acc_stderr\": 0.025189006660212385,\n \"acc_norm\": 0.7322580645161291,\n \"acc_norm_stderr\": 0.025189006660212385\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.5221674876847291,\n \"acc_stderr\": 0.03514528562175008,\n \"acc_norm\": 0.5221674876847291,\n \"acc_norm_stderr\": 0.03514528562175008\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.62,\n \"acc_stderr\": 0.04878317312145633,\n \"acc_norm\": 0.62,\n \"acc_norm_stderr\": 0.04878317312145633\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.7333333333333333,\n \"acc_stderr\": 0.03453131801885416,\n \"acc_norm\": 0.7333333333333333,\n \"acc_norm_stderr\": 0.03453131801885416\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.7676767676767676,\n \"acc_stderr\": 0.030088629490217487,\n \"acc_norm\": 0.7676767676767676,\n \"acc_norm_stderr\": 0.030088629490217487\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.8290155440414507,\n \"acc_stderr\": 0.02717121368316453,\n \"acc_norm\": 0.8290155440414507,\n \"acc_norm_stderr\": 0.02717121368316453\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.5897435897435898,\n \"acc_stderr\": 0.0249393139069408,\n \"acc_norm\": 0.5897435897435898,\n \"acc_norm_stderr\": 0.0249393139069408\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.362962962962963,\n \"acc_stderr\": 0.029318203645206865,\n \"acc_norm\": 0.362962962962963,\n \"acc_norm_stderr\": 0.029318203645206865\n },\n 
\"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.6260504201680672,\n \"acc_stderr\": 0.03142946637883708,\n \"acc_norm\": 0.6260504201680672,\n \"acc_norm_stderr\": 0.03142946637883708\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.32450331125827814,\n \"acc_stderr\": 0.03822746937658753,\n \"acc_norm\": 0.32450331125827814,\n \"acc_norm_stderr\": 0.03822746937658753\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.7926605504587156,\n \"acc_stderr\": 0.017381415563608674,\n \"acc_norm\": 0.7926605504587156,\n \"acc_norm_stderr\": 0.017381415563608674\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.4351851851851852,\n \"acc_stderr\": 0.03381200005643524,\n \"acc_norm\": 0.4351851851851852,\n \"acc_norm_stderr\": 0.03381200005643524\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.7450980392156863,\n \"acc_stderr\": 0.030587591351604246,\n \"acc_norm\": 0.7450980392156863,\n \"acc_norm_stderr\": 0.030587591351604246\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.729957805907173,\n \"acc_stderr\": 0.028900721906293433,\n \"acc_norm\": 0.729957805907173,\n \"acc_norm_stderr\": 0.028900721906293433\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6681614349775785,\n \"acc_stderr\": 0.03160295143776679,\n \"acc_norm\": 0.6681614349775785,\n \"acc_norm_stderr\": 0.03160295143776679\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.7251908396946565,\n \"acc_stderr\": 0.03915345408847834,\n \"acc_norm\": 0.7251908396946565,\n \"acc_norm_stderr\": 0.03915345408847834\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.8016528925619835,\n \"acc_stderr\": 0.03640118271990947,\n \"acc_norm\": 0.8016528925619835,\n \"acc_norm_stderr\": 0.03640118271990947\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7685185185185185,\n \"acc_stderr\": 0.04077494709252627,\n 
\"acc_norm\": 0.7685185185185185,\n \"acc_norm_stderr\": 0.04077494709252627\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.6993865030674846,\n \"acc_stderr\": 0.03602511318806771,\n \"acc_norm\": 0.6993865030674846,\n \"acc_norm_stderr\": 0.03602511318806771\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.4732142857142857,\n \"acc_stderr\": 0.047389751192741546,\n \"acc_norm\": 0.4732142857142857,\n \"acc_norm_stderr\": 0.047389751192741546\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.7572815533980582,\n \"acc_stderr\": 0.04245022486384495,\n \"acc_norm\": 0.7572815533980582,\n \"acc_norm_stderr\": 0.04245022486384495\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8376068376068376,\n \"acc_stderr\": 0.02416161812798774,\n \"acc_norm\": 0.8376068376068376,\n \"acc_norm_stderr\": 0.02416161812798774\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.68,\n \"acc_stderr\": 0.046882617226215034,\n \"acc_norm\": 0.68,\n \"acc_norm_stderr\": 0.046882617226215034\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.7816091954022989,\n \"acc_stderr\": 0.014774358319934493,\n \"acc_norm\": 0.7816091954022989,\n \"acc_norm_stderr\": 0.014774358319934493\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.684971098265896,\n \"acc_stderr\": 0.02500931379006971,\n \"acc_norm\": 0.684971098265896,\n \"acc_norm_stderr\": 0.02500931379006971\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.36089385474860336,\n \"acc_stderr\": 0.016062290671110462,\n \"acc_norm\": 0.36089385474860336,\n \"acc_norm_stderr\": 0.016062290671110462\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.6797385620915033,\n \"acc_stderr\": 0.026716118380156847,\n \"acc_norm\": 0.6797385620915033,\n \"acc_norm_stderr\": 0.026716118380156847\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.6655948553054662,\n \"acc_stderr\": 0.026795422327893934,\n 
\"acc_norm\": 0.6655948553054662,\n \"acc_norm_stderr\": 0.026795422327893934\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.6512345679012346,\n \"acc_stderr\": 0.02651759772446501,\n \"acc_norm\": 0.6512345679012346,\n \"acc_norm_stderr\": 0.02651759772446501\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.43617021276595747,\n \"acc_stderr\": 0.02958345203628407,\n \"acc_norm\": 0.43617021276595747,\n \"acc_norm_stderr\": 0.02958345203628407\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.41851368970013036,\n \"acc_stderr\": 0.012599505608336461,\n \"acc_norm\": 0.41851368970013036,\n \"acc_norm_stderr\": 0.012599505608336461\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.5882352941176471,\n \"acc_stderr\": 0.02989616303312547,\n \"acc_norm\": 0.5882352941176471,\n \"acc_norm_stderr\": 0.02989616303312547\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.6143790849673203,\n \"acc_stderr\": 0.019691459052354032,\n \"acc_norm\": 0.6143790849673203,\n \"acc_norm_stderr\": 0.019691459052354032\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6090909090909091,\n \"acc_stderr\": 0.046737523336702384,\n \"acc_norm\": 0.6090909090909091,\n \"acc_norm_stderr\": 0.046737523336702384\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.6816326530612244,\n \"acc_stderr\": 0.029822533793982062,\n \"acc_norm\": 0.6816326530612244,\n \"acc_norm_stderr\": 0.029822533793982062\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8109452736318408,\n \"acc_stderr\": 0.02768691358801301,\n \"acc_norm\": 0.8109452736318408,\n \"acc_norm_stderr\": 0.02768691358801301\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.8,\n \"acc_stderr\": 0.04020151261036847,\n \"acc_norm\": 0.8,\n \"acc_norm_stderr\": 0.04020151261036847\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5,\n \"acc_stderr\": 
0.03892494720807614,\n \"acc_norm\": 0.5,\n \"acc_norm_stderr\": 0.03892494720807614\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.8187134502923976,\n \"acc_stderr\": 0.029547741687640038,\n \"acc_norm\": 0.8187134502923976,\n \"acc_norm_stderr\": 0.029547741687640038\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.28151774785801714,\n \"mc1_stderr\": 0.01574402724825605,\n \"mc2\": 0.4178934110711531,\n \"mc2_stderr\": 0.014676975153876327\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7647987371744278,\n \"acc_stderr\": 0.011920008163650882\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.2850644427596664,\n \"acc_stderr\": 0.01243504233490401\n }\n}\n```", "repo_url": "https://huggingface.co/splm/zephyr-7b-sft-full-spin-peft-iter2", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2024_02_09T13_50_16.813938", "path": ["**/details_harness|arc:challenge|25_2024-02-09T13-50-16.813938.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2024-02-09T13-50-16.813938.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2024_02_09T13_50_16.813938", "path": ["**/details_harness|gsm8k|5_2024-02-09T13-50-16.813938.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2024-02-09T13-50-16.813938.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2024_02_09T13_50_16.813938", "path": ["**/details_harness|hellaswag|10_2024-02-09T13-50-16.813938.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2024-02-09T13-50-16.813938.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2024_02_09T13_50_16.813938", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-09T13-50-16.813938.parquet", 
"**/details_harness|hendrycksTest-anatomy|5_2024-02-09T13-50-16.813938.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-02-09T13-50-16.813938.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-02-09T13-50-16.813938.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-09T13-50-16.813938.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-02-09T13-50-16.813938.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-02-09T13-50-16.813938.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-02-09T13-50-16.813938.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-02-09T13-50-16.813938.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-02-09T13-50-16.813938.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-02-09T13-50-16.813938.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-02-09T13-50-16.813938.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-09T13-50-16.813938.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-02-09T13-50-16.813938.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-09T13-50-16.813938.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-09T13-50-16.813938.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-02-09T13-50-16.813938.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-02-09T13-50-16.813938.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-02-09T13-50-16.813938.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-09T13-50-16.813938.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-09T13-50-16.813938.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-09T13-50-16.813938.parquet", 
"**/details_harness|hendrycksTest-high_school_geography|5_2024-02-09T13-50-16.813938.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-09T13-50-16.813938.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-09T13-50-16.813938.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-09T13-50-16.813938.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-09T13-50-16.813938.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-02-09T13-50-16.813938.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-09T13-50-16.813938.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-09T13-50-16.813938.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-09T13-50-16.813938.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-09T13-50-16.813938.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-02-09T13-50-16.813938.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-02-09T13-50-16.813938.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-02-09T13-50-16.813938.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-02-09T13-50-16.813938.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-09T13-50-16.813938.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-02-09T13-50-16.813938.parquet", "**/details_harness|hendrycksTest-management|5_2024-02-09T13-50-16.813938.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-02-09T13-50-16.813938.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-02-09T13-50-16.813938.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-02-09T13-50-16.813938.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-02-09T13-50-16.813938.parquet", 
"**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-09T13-50-16.813938.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-02-09T13-50-16.813938.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-02-09T13-50-16.813938.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-02-09T13-50-16.813938.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-02-09T13-50-16.813938.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-02-09T13-50-16.813938.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-02-09T13-50-16.813938.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-02-09T13-50-16.813938.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-02-09T13-50-16.813938.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-02-09T13-50-16.813938.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-02-09T13-50-16.813938.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-09T13-50-16.813938.parquet", "**/details_harness|hendrycksTest-virology|5_2024-02-09T13-50-16.813938.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-02-09T13-50-16.813938.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-09T13-50-16.813938.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-02-09T13-50-16.813938.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-02-09T13-50-16.813938.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-02-09T13-50-16.813938.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-09T13-50-16.813938.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-02-09T13-50-16.813938.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-02-09T13-50-16.813938.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-02-09T13-50-16.813938.parquet", 
"**/details_harness|hendrycksTest-college_mathematics|5_2024-02-09T13-50-16.813938.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-02-09T13-50-16.813938.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-02-09T13-50-16.813938.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-02-09T13-50-16.813938.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-09T13-50-16.813938.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-02-09T13-50-16.813938.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-09T13-50-16.813938.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-09T13-50-16.813938.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-02-09T13-50-16.813938.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-02-09T13-50-16.813938.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-02-09T13-50-16.813938.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-09T13-50-16.813938.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-09T13-50-16.813938.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-09T13-50-16.813938.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-02-09T13-50-16.813938.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-09T13-50-16.813938.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-09T13-50-16.813938.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-09T13-50-16.813938.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-09T13-50-16.813938.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-02-09T13-50-16.813938.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-09T13-50-16.813938.parquet", 
"**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-09T13-50-16.813938.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-09T13-50-16.813938.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-09T13-50-16.813938.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-02-09T13-50-16.813938.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-02-09T13-50-16.813938.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-02-09T13-50-16.813938.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-02-09T13-50-16.813938.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-09T13-50-16.813938.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-02-09T13-50-16.813938.parquet", "**/details_harness|hendrycksTest-management|5_2024-02-09T13-50-16.813938.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-02-09T13-50-16.813938.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-02-09T13-50-16.813938.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-02-09T13-50-16.813938.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-02-09T13-50-16.813938.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-09T13-50-16.813938.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-02-09T13-50-16.813938.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-02-09T13-50-16.813938.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-02-09T13-50-16.813938.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-02-09T13-50-16.813938.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-02-09T13-50-16.813938.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-02-09T13-50-16.813938.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-02-09T13-50-16.813938.parquet", 
"**/details_harness|hendrycksTest-public_relations|5_2024-02-09T13-50-16.813938.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-02-09T13-50-16.813938.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-02-09T13-50-16.813938.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-09T13-50-16.813938.parquet", "**/details_harness|hendrycksTest-virology|5_2024-02-09T13-50-16.813938.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-02-09T13-50-16.813938.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2024_02_09T13_50_16.813938", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-09T13-50-16.813938.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-09T13-50-16.813938.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2024_02_09T13_50_16.813938", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-02-09T13-50-16.813938.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-02-09T13-50-16.813938.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2024_02_09T13_50_16.813938", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-02-09T13-50-16.813938.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-02-09T13-50-16.813938.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2024_02_09T13_50_16.813938", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-02-09T13-50-16.813938.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-02-09T13-50-16.813938.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2024_02_09T13_50_16.813938", "path": 
["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-09T13-50-16.813938.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-09T13-50-16.813938.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2024_02_09T13_50_16.813938", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-02-09T13-50-16.813938.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-02-09T13-50-16.813938.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2024_02_09T13_50_16.813938", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-02-09T13-50-16.813938.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-02-09T13-50-16.813938.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2024_02_09T13_50_16.813938", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-02-09T13-50-16.813938.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-02-09T13-50-16.813938.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2024_02_09T13_50_16.813938", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-02-09T13-50-16.813938.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-02-09T13-50-16.813938.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2024_02_09T13_50_16.813938", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-02-09T13-50-16.813938.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-02-09T13-50-16.813938.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2024_02_09T13_50_16.813938", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-02-09T13-50-16.813938.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-02-09T13-50-16.813938.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2024_02_09T13_50_16.813938", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-02-09T13-50-16.813938.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-02-09T13-50-16.813938.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2024_02_09T13_50_16.813938", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-09T13-50-16.813938.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-09T13-50-16.813938.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2024_02_09T13_50_16.813938", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-02-09T13-50-16.813938.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-02-09T13-50-16.813938.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2024_02_09T13_50_16.813938", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-09T13-50-16.813938.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-09T13-50-16.813938.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2024_02_09T13_50_16.813938", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-09T13-50-16.813938.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-09T13-50-16.813938.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2024_02_09T13_50_16.813938", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-02-09T13-50-16.813938.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-02-09T13-50-16.813938.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2024_02_09T13_50_16.813938", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-02-09T13-50-16.813938.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-02-09T13-50-16.813938.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2024_02_09T13_50_16.813938", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-02-09T13-50-16.813938.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-02-09T13-50-16.813938.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2024_02_09T13_50_16.813938", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-09T13-50-16.813938.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-09T13-50-16.813938.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2024_02_09T13_50_16.813938", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-09T13-50-16.813938.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-09T13-50-16.813938.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2024_02_09T13_50_16.813938", "path": 
["**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-09T13-50-16.813938.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-09T13-50-16.813938.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2024_02_09T13_50_16.813938", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-02-09T13-50-16.813938.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-02-09T13-50-16.813938.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2024_02_09T13_50_16.813938", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-09T13-50-16.813938.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-09T13-50-16.813938.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2024_02_09T13_50_16.813938", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-09T13-50-16.813938.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-09T13-50-16.813938.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2024_02_09T13_50_16.813938", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-09T13-50-16.813938.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-09T13-50-16.813938.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2024_02_09T13_50_16.813938", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-09T13-50-16.813938.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-09T13-50-16.813938.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2024_02_09T13_50_16.813938", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-02-09T13-50-16.813938.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-02-09T13-50-16.813938.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2024_02_09T13_50_16.813938", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-09T13-50-16.813938.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-09T13-50-16.813938.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2024_02_09T13_50_16.813938", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-09T13-50-16.813938.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-09T13-50-16.813938.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2024_02_09T13_50_16.813938", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-09T13-50-16.813938.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-09T13-50-16.813938.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2024_02_09T13_50_16.813938", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-09T13-50-16.813938.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-09T13-50-16.813938.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2024_02_09T13_50_16.813938", 
"path": ["**/details_harness|hendrycksTest-human_aging|5_2024-02-09T13-50-16.813938.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-02-09T13-50-16.813938.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2024_02_09T13_50_16.813938", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-02-09T13-50-16.813938.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-02-09T13-50-16.813938.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2024_02_09T13_50_16.813938", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-02-09T13-50-16.813938.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-02-09T13-50-16.813938.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2024_02_09T13_50_16.813938", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-02-09T13-50-16.813938.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-02-09T13-50-16.813938.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2024_02_09T13_50_16.813938", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-09T13-50-16.813938.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-09T13-50-16.813938.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2024_02_09T13_50_16.813938", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-02-09T13-50-16.813938.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-02-09T13-50-16.813938.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": 
"2024_02_09T13_50_16.813938", "path": ["**/details_harness|hendrycksTest-management|5_2024-02-09T13-50-16.813938.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2024-02-09T13-50-16.813938.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2024_02_09T13_50_16.813938", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-02-09T13-50-16.813938.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-02-09T13-50-16.813938.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2024_02_09T13_50_16.813938", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-02-09T13-50-16.813938.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-02-09T13-50-16.813938.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2024_02_09T13_50_16.813938", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-02-09T13-50-16.813938.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-02-09T13-50-16.813938.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2024_02_09T13_50_16.813938", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-02-09T13-50-16.813938.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-02-09T13-50-16.813938.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2024_02_09T13_50_16.813938", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-09T13-50-16.813938.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-09T13-50-16.813938.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": 
"2024_02_09T13_50_16.813938", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-02-09T13-50-16.813938.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-02-09T13-50-16.813938.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2024_02_09T13_50_16.813938", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-02-09T13-50-16.813938.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-02-09T13-50-16.813938.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2024_02_09T13_50_16.813938", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-02-09T13-50-16.813938.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-02-09T13-50-16.813938.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2024_02_09T13_50_16.813938", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-02-09T13-50-16.813938.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-02-09T13-50-16.813938.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2024_02_09T13_50_16.813938", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-02-09T13-50-16.813938.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-02-09T13-50-16.813938.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2024_02_09T13_50_16.813938", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-02-09T13-50-16.813938.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-02-09T13-50-16.813938.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2024_02_09T13_50_16.813938", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-02-09T13-50-16.813938.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-02-09T13-50-16.813938.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2024_02_09T13_50_16.813938", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-02-09T13-50-16.813938.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-02-09T13-50-16.813938.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2024_02_09T13_50_16.813938", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-02-09T13-50-16.813938.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-02-09T13-50-16.813938.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2024_02_09T13_50_16.813938", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-02-09T13-50-16.813938.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-02-09T13-50-16.813938.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2024_02_09T13_50_16.813938", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-09T13-50-16.813938.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-09T13-50-16.813938.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2024_02_09T13_50_16.813938", "path": ["**/details_harness|hendrycksTest-virology|5_2024-02-09T13-50-16.813938.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2024-02-09T13-50-16.813938.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2024_02_09T13_50_16.813938", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-02-09T13-50-16.813938.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-02-09T13-50-16.813938.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2024_02_09T13_50_16.813938", "path": ["**/details_harness|truthfulqa:mc|0_2024-02-09T13-50-16.813938.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2024-02-09T13-50-16.813938.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2024_02_09T13_50_16.813938", "path": ["**/details_harness|winogrande|5_2024-02-09T13-50-16.813938.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2024-02-09T13-50-16.813938.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2024_02_09T13_50_16.813938", "path": ["results_2024-02-09T13-50-16.813938.parquet"]}, {"split": "latest", "path": ["results_2024-02-09T13-50-16.813938.parquet"]}]}]}
2024-02-09T13:52:55+00:00
[]
[]
TAGS #region-us
# Dataset Card for Evaluation run of splm/zephyr-7b-sft-full-spin-peft-iter2 Dataset automatically created during the evaluation run of model splm/zephyr-7b-sft-full-spin-peft-iter2 on the Open LLM Leaderboard. The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard). To load the details from a run, you can for instance do the following: ## Latest results These are the latest results from run 2024-02-09T13:50:16.813938 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval): ## Dataset Details ### Dataset Description - Curated by: - Funded by [optional]: - Shared by [optional]: - Language(s) (NLP): - License: ### Dataset Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Out-of-Scope Use ## Dataset Structure ## Dataset Creation ### Curation Rationale ### Source Data #### Data Collection and Processing #### Who are the source data producers? ### Annotations [optional] #### Annotation process #### Who are the annotators? #### Personal and Sensitive Information ## Bias, Risks, and Limitations ### Recommendations Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Dataset Card Authors [optional] ## Dataset Card Contact
[ "# Dataset Card for Evaluation run of splm/zephyr-7b-sft-full-spin-peft-iter2\n\n\n\nDataset automatically created during the evaluation run of model splm/zephyr-7b-sft-full-spin-peft-iter2 on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2024-02-09T13:50:16.813938(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. 
More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Evaluation run of splm/zephyr-7b-sft-full-spin-peft-iter2\n\n\n\nDataset automatically created during the evaluation run of model splm/zephyr-7b-sft-full-spin-peft-iter2 on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2024-02-09T13:50:16.813938(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. 
More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
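The entry above splits its details dataset into one configuration per task, with config names like `harness_hendrycksTest_sociology_5` and `harness_truthfulqa_mc_0` in the metadata. Reading the naming rule off those names, it can be sketched with a small helper — this function is an assumption inferred from the metadata shown here, not part of the leaderboard tooling:

```python
def config_name(task: str, num_fewshot: int) -> str:
    """Build a details-dataset config name from a task id and shot count.

    Pattern read off the metadata above:
      "hendrycksTest-sociology" + 5 -> "harness_hendrycksTest_sociology_5"
      "truthfulqa:mc" + 0           -> "harness_truthfulqa_mc_0"
    """
    # Separators used inside task ids ("-", ":") become underscores
    # in the config name.
    safe = task.replace("-", "_").replace(":", "_")
    return f"harness_{safe}_{num_fewshot}"
```

A name built this way can then be passed as the second argument of `load_dataset`, as in the card's own loading example.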
d9e0f0b6fc4836164c05a7f6995f8303339d76d4
## DialogSumm DialogSumm is a mixture of the following dialog datasets: - [dialogsum](https://huggingface.co/datasets/knkarthick/dialogsum) - [samsum](https://huggingface.co/datasets/samsum) - [MocktaiLEngineer/qmsum-processed](https://huggingface.co/datasets/MocktaiLEngineer/qmsum-processed) - [npc-engine/light-batch-summarize-dialogue](https://huggingface.co/datasets/npc-engine/light-batch-summarize-dialogue) ## 💻 Usage ```python from datasets import load_dataset dataset = load_dataset("Isotonic/DialogSumm") ``` 🚀🚀 Next: DialogSumm + [cnn_dailymail](https://huggingface.co/datasets/cnn_dailymail) + [mediasum](https://huggingface.co/datasets/ccdv/mediasum) + [EdinburghNLP/xsum](https://huggingface.co/datasets/EdinburghNLP/xsum)
Isotonic/DialogSumm
[ "task_categories:summarization", "task_categories:text-generation", "task_categories:text2text-generation", "size_categories:10K<n<100K", "language:en", "license:cc-by-nc-sa-4.0", "region:us" ]
2024-02-09T14:08:36+00:00
{"language": ["en"], "license": "cc-by-nc-sa-4.0", "size_categories": ["10K<n<100K"], "task_categories": ["summarization", "text-generation", "text2text-generation"], "dataset_info": {"features": [{"name": "dialogue", "dtype": "string"}, {"name": "summary", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 48177311.0, "num_examples": 52480}], "download_size": 29232356, "dataset_size": 48177311.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2024-02-10T16:27:14+00:00
[]
[ "en" ]
TAGS #task_categories-summarization #task_categories-text-generation #task_categories-text2text-generation #size_categories-10K<n<100K #language-English #license-cc-by-nc-sa-4.0 #region-us
## DialogSumm DialogSumm is a mixture of the following dialog datasets: - dialogsum - samsum - MocktaiLEngineer/qmsum-processed - npc-engine/light-batch-summarize-dialogue ## Usage Next: DialogSumm + cnn_dailymail + mediasum + EdinburghNLP/xsum
[ "## DialogSumm\nDialogSumm is a mixture of the following dialog datasets:\n- dialogsum\n- samsum\n- MocktaiLEngineer/qmsum-processed\n- npc-engine/light-batch-summarize-dialogue", "## Usage\n\n Next: DialogSumm + cnn_dailymail + mediasum + EdinburghNLP/xsum" ]
[ "TAGS\n#task_categories-summarization #task_categories-text-generation #task_categories-text2text-generation #size_categories-10K<n<100K #language-English #license-cc-by-nc-sa-4.0 #region-us \n", "## DialogSumm\nDialogSumm is a mixture of the following dialog datasets:\n- dialogsum\n- samsum\n- MocktaiLEngineer/qmsum-processed\n- npc-engine/light-batch-summarize-dialogue", "## Usage\n\n Next: DialogSumm + cnn_dailymail + mediasum + EdinburghNLP/xsum" ]
8a765c6e8147a5c6ceca06a17638f74463e001c4
# Dataset Card for Evaluation run of Weyaxi/Einstein-v3-7B <!-- Provide a quick summary of the dataset. --> Dataset automatically created during the evaluation run of model [Weyaxi/Einstein-v3-7B](https://huggingface.co/Weyaxi/Einstein-v3-7B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)). To load the details from a run, you can for instance do the following: ```python from datasets import load_dataset data = load_dataset("open-llm-leaderboard/details_Weyaxi__Einstein-v3-7B", "harness_winogrande_5", split="train") ``` ## Latest results These are the [latest results from run 2024-02-09T14:20:50.060350](https://huggingface.co/datasets/open-llm-leaderboard/details_Weyaxi__Einstein-v3-7B/blob/main/results_2024-02-09T14-20-50.060350.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks.
You find each in the results and the "latest" split for each eval): ```python { "all": { "acc": 0.6324191881027033, "acc_stderr": 0.03243554886430901, "acc_norm": 0.6363751404085887, "acc_norm_stderr": 0.033091894253237775, "mc1": 0.3488372093023256, "mc1_stderr": 0.016684419859986893, "mc2": 0.5118155053333627, "mc2_stderr": 0.014996398703517707 }, "harness|arc:challenge|25": { "acc": 0.6023890784982935, "acc_stderr": 0.014301752223279542, "acc_norm": 0.6228668941979523, "acc_norm_stderr": 0.0141633668961926 }, "harness|hellaswag|10": { "acc": 0.6344353714399522, "acc_stderr": 0.004806039039008958, "acc_norm": 0.8301135232025493, "acc_norm_stderr": 0.0037476555337545205 }, "harness|hendrycksTest-abstract_algebra|5": { "acc": 0.35, "acc_stderr": 0.0479372485441102, "acc_norm": 0.35, "acc_norm_stderr": 0.0479372485441102 }, "harness|hendrycksTest-anatomy|5": { "acc": 0.6370370370370371, "acc_stderr": 0.04153948404742398, "acc_norm": 0.6370370370370371, "acc_norm_stderr": 0.04153948404742398 }, "harness|hendrycksTest-astronomy|5": { "acc": 0.6513157894736842, "acc_stderr": 0.0387813988879761, "acc_norm": 0.6513157894736842, "acc_norm_stderr": 0.0387813988879761 }, "harness|hendrycksTest-business_ethics|5": { "acc": 0.55, "acc_stderr": 0.05, "acc_norm": 0.55, "acc_norm_stderr": 0.05 }, "harness|hendrycksTest-clinical_knowledge|5": { "acc": 0.6867924528301886, "acc_stderr": 0.028544793319055326, "acc_norm": 0.6867924528301886, "acc_norm_stderr": 0.028544793319055326 }, "harness|hendrycksTest-college_biology|5": { "acc": 0.7222222222222222, "acc_stderr": 0.037455547914624555, "acc_norm": 0.7222222222222222, "acc_norm_stderr": 0.037455547914624555 }, "harness|hendrycksTest-college_chemistry|5": { "acc": 0.43, "acc_stderr": 0.04975698519562428, "acc_norm": 0.43, "acc_norm_stderr": 0.04975698519562428 }, "harness|hendrycksTest-college_computer_science|5": { "acc": 0.58, "acc_stderr": 0.049604496374885836, "acc_norm": 0.58, "acc_norm_stderr": 0.049604496374885836 }, 
"harness|hendrycksTest-college_mathematics|5": { "acc": 0.38, "acc_stderr": 0.04878317312145633, "acc_norm": 0.38, "acc_norm_stderr": 0.04878317312145633 }, "harness|hendrycksTest-college_medicine|5": { "acc": 0.6358381502890174, "acc_stderr": 0.03669072477416907, "acc_norm": 0.6358381502890174, "acc_norm_stderr": 0.03669072477416907 }, "harness|hendrycksTest-college_physics|5": { "acc": 0.37254901960784315, "acc_stderr": 0.04810840148082636, "acc_norm": 0.37254901960784315, "acc_norm_stderr": 0.04810840148082636 }, "harness|hendrycksTest-computer_security|5": { "acc": 0.8, "acc_stderr": 0.04020151261036845, "acc_norm": 0.8, "acc_norm_stderr": 0.04020151261036845 }, "harness|hendrycksTest-conceptual_physics|5": { "acc": 0.5276595744680851, "acc_stderr": 0.03263597118409769, "acc_norm": 0.5276595744680851, "acc_norm_stderr": 0.03263597118409769 }, "harness|hendrycksTest-econometrics|5": { "acc": 0.4649122807017544, "acc_stderr": 0.046920083813689104, "acc_norm": 0.4649122807017544, "acc_norm_stderr": 0.046920083813689104 }, "harness|hendrycksTest-electrical_engineering|5": { "acc": 0.5448275862068965, "acc_stderr": 0.04149886942192117, "acc_norm": 0.5448275862068965, "acc_norm_stderr": 0.04149886942192117 }, "harness|hendrycksTest-elementary_mathematics|5": { "acc": 0.3783068783068783, "acc_stderr": 0.024976954053155243, "acc_norm": 0.3783068783068783, "acc_norm_stderr": 0.024976954053155243 }, "harness|hendrycksTest-formal_logic|5": { "acc": 0.3888888888888889, "acc_stderr": 0.04360314860077459, "acc_norm": 0.3888888888888889, "acc_norm_stderr": 0.04360314860077459 }, "harness|hendrycksTest-global_facts|5": { "acc": 0.33, "acc_stderr": 0.04725815626252605, "acc_norm": 0.33, "acc_norm_stderr": 0.04725815626252605 }, "harness|hendrycksTest-high_school_biology|5": { "acc": 0.6838709677419355, "acc_stderr": 0.026450874489042774, "acc_norm": 0.6838709677419355, "acc_norm_stderr": 0.026450874489042774 }, "harness|hendrycksTest-high_school_chemistry|5": { "acc": 
0.5123152709359606, "acc_stderr": 0.035169204442208966, "acc_norm": 0.5123152709359606, "acc_norm_stderr": 0.035169204442208966 }, "harness|hendrycksTest-high_school_computer_science|5": { "acc": 0.7, "acc_stderr": 0.046056618647183814, "acc_norm": 0.7, "acc_norm_stderr": 0.046056618647183814 }, "harness|hendrycksTest-high_school_european_history|5": { "acc": 0.7696969696969697, "acc_stderr": 0.032876667586034906, "acc_norm": 0.7696969696969697, "acc_norm_stderr": 0.032876667586034906 }, "harness|hendrycksTest-high_school_geography|5": { "acc": 0.7727272727272727, "acc_stderr": 0.029857515673386417, "acc_norm": 0.7727272727272727, "acc_norm_stderr": 0.029857515673386417 }, "harness|hendrycksTest-high_school_government_and_politics|5": { "acc": 0.8704663212435233, "acc_stderr": 0.024233532297758723, "acc_norm": 0.8704663212435233, "acc_norm_stderr": 0.024233532297758723 }, "harness|hendrycksTest-high_school_macroeconomics|5": { "acc": 0.6384615384615384, "acc_stderr": 0.024359581465396993, "acc_norm": 0.6384615384615384, "acc_norm_stderr": 0.024359581465396993 }, "harness|hendrycksTest-high_school_mathematics|5": { "acc": 0.32222222222222224, "acc_stderr": 0.028493465091028593, "acc_norm": 0.32222222222222224, "acc_norm_stderr": 0.028493465091028593 }, "harness|hendrycksTest-high_school_microeconomics|5": { "acc": 0.6512605042016807, "acc_stderr": 0.030956636328566548, "acc_norm": 0.6512605042016807, "acc_norm_stderr": 0.030956636328566548 }, "harness|hendrycksTest-high_school_physics|5": { "acc": 0.33774834437086093, "acc_stderr": 0.038615575462551684, "acc_norm": 0.33774834437086093, "acc_norm_stderr": 0.038615575462551684 }, "harness|hendrycksTest-high_school_psychology|5": { "acc": 0.8201834862385321, "acc_stderr": 0.016465345467391528, "acc_norm": 0.8201834862385321, "acc_norm_stderr": 0.016465345467391528 }, "harness|hendrycksTest-high_school_statistics|5": { "acc": 0.5416666666666666, "acc_stderr": 0.03398110890294636, "acc_norm": 0.5416666666666666, 
"acc_norm_stderr": 0.03398110890294636 }, "harness|hendrycksTest-high_school_us_history|5": { "acc": 0.8137254901960784, "acc_stderr": 0.027325470966716312, "acc_norm": 0.8137254901960784, "acc_norm_stderr": 0.027325470966716312 }, "harness|hendrycksTest-high_school_world_history|5": { "acc": 0.7932489451476793, "acc_stderr": 0.0263616516683891, "acc_norm": 0.7932489451476793, "acc_norm_stderr": 0.0263616516683891 }, "harness|hendrycksTest-human_aging|5": { "acc": 0.6816143497757847, "acc_stderr": 0.03126580522513713, "acc_norm": 0.6816143497757847, "acc_norm_stderr": 0.03126580522513713 }, "harness|hendrycksTest-human_sexuality|5": { "acc": 0.7480916030534351, "acc_stderr": 0.03807387116306085, "acc_norm": 0.7480916030534351, "acc_norm_stderr": 0.03807387116306085 }, "harness|hendrycksTest-international_law|5": { "acc": 0.7933884297520661, "acc_stderr": 0.03695980128098825, "acc_norm": 0.7933884297520661, "acc_norm_stderr": 0.03695980128098825 }, "harness|hendrycksTest-jurisprudence|5": { "acc": 0.7962962962962963, "acc_stderr": 0.03893542518824847, "acc_norm": 0.7962962962962963, "acc_norm_stderr": 0.03893542518824847 }, "harness|hendrycksTest-logical_fallacies|5": { "acc": 0.7730061349693251, "acc_stderr": 0.03291099578615771, "acc_norm": 0.7730061349693251, "acc_norm_stderr": 0.03291099578615771 }, "harness|hendrycksTest-machine_learning|5": { "acc": 0.49107142857142855, "acc_stderr": 0.04745033255489123, "acc_norm": 0.49107142857142855, "acc_norm_stderr": 0.04745033255489123 }, "harness|hendrycksTest-management|5": { "acc": 0.7475728155339806, "acc_stderr": 0.04301250399690878, "acc_norm": 0.7475728155339806, "acc_norm_stderr": 0.04301250399690878 }, "harness|hendrycksTest-marketing|5": { "acc": 0.8803418803418803, "acc_stderr": 0.021262719400406974, "acc_norm": 0.8803418803418803, "acc_norm_stderr": 0.021262719400406974 }, "harness|hendrycksTest-medical_genetics|5": { "acc": 0.73, "acc_stderr": 0.044619604333847394, "acc_norm": 0.73, "acc_norm_stderr": 
0.044619604333847394 }, "harness|hendrycksTest-miscellaneous|5": { "acc": 0.8275862068965517, "acc_stderr": 0.013507943909371802, "acc_norm": 0.8275862068965517, "acc_norm_stderr": 0.013507943909371802 }, "harness|hendrycksTest-moral_disputes|5": { "acc": 0.7254335260115607, "acc_stderr": 0.02402774515526502, "acc_norm": 0.7254335260115607, "acc_norm_stderr": 0.02402774515526502 }, "harness|hendrycksTest-moral_scenarios|5": { "acc": 0.4402234636871508, "acc_stderr": 0.01660256461504993, "acc_norm": 0.4402234636871508, "acc_norm_stderr": 0.01660256461504993 }, "harness|hendrycksTest-nutrition|5": { "acc": 0.7450980392156863, "acc_stderr": 0.02495418432487991, "acc_norm": 0.7450980392156863, "acc_norm_stderr": 0.02495418432487991 }, "harness|hendrycksTest-philosophy|5": { "acc": 0.707395498392283, "acc_stderr": 0.02583989833487798, "acc_norm": 0.707395498392283, "acc_norm_stderr": 0.02583989833487798 }, "harness|hendrycksTest-prehistory|5": { "acc": 0.6944444444444444, "acc_stderr": 0.02563082497562135, "acc_norm": 0.6944444444444444, "acc_norm_stderr": 0.02563082497562135 }, "harness|hendrycksTest-professional_accounting|5": { "acc": 0.4787234042553192, "acc_stderr": 0.029800481645628693, "acc_norm": 0.4787234042553192, "acc_norm_stderr": 0.029800481645628693 }, "harness|hendrycksTest-professional_law|5": { "acc": 0.4595827900912647, "acc_stderr": 0.012728446067669971, "acc_norm": 0.4595827900912647, "acc_norm_stderr": 0.012728446067669971 }, "harness|hendrycksTest-professional_medicine|5": { "acc": 0.6875, "acc_stderr": 0.02815637344037142, "acc_norm": 0.6875, "acc_norm_stderr": 0.02815637344037142 }, "harness|hendrycksTest-professional_psychology|5": { "acc": 0.6486928104575164, "acc_stderr": 0.019312676065786554, "acc_norm": 0.6486928104575164, "acc_norm_stderr": 0.019312676065786554 }, "harness|hendrycksTest-public_relations|5": { "acc": 0.6636363636363637, "acc_stderr": 0.04525393596302506, "acc_norm": 0.6636363636363637, "acc_norm_stderr": 0.04525393596302506 
}, "harness|hendrycksTest-security_studies|5": { "acc": 0.7428571428571429, "acc_stderr": 0.02797982353874455, "acc_norm": 0.7428571428571429, "acc_norm_stderr": 0.02797982353874455 }, "harness|hendrycksTest-sociology|5": { "acc": 0.7562189054726368, "acc_stderr": 0.030360490154014635, "acc_norm": 0.7562189054726368, "acc_norm_stderr": 0.030360490154014635 }, "harness|hendrycksTest-us_foreign_policy|5": { "acc": 0.88, "acc_stderr": 0.03265986323710906, "acc_norm": 0.88, "acc_norm_stderr": 0.03265986323710906 }, "harness|hendrycksTest-virology|5": { "acc": 0.5060240963855421, "acc_stderr": 0.03892212195333045, "acc_norm": 0.5060240963855421, "acc_norm_stderr": 0.03892212195333045 }, "harness|hendrycksTest-world_religions|5": { "acc": 0.8304093567251462, "acc_stderr": 0.02878210810540171, "acc_norm": 0.8304093567251462, "acc_norm_stderr": 0.02878210810540171 }, "harness|truthfulqa:mc|0": { "mc1": 0.3488372093023256, "mc1_stderr": 0.016684419859986893, "mc2": 0.5118155053333627, "mc2_stderr": 0.014996398703517707 }, "harness|winogrande|5": { "acc": 0.7995264404104183, "acc_stderr": 0.011251958281205083 }, "harness|gsm8k|5": { "acc": 0.44806671721000757, "acc_stderr": 0.013697992668274523 } } ``` ## Dataset Details ### Dataset Description <!-- Provide a longer summary of what this dataset is. --> - **Curated by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] ### Dataset Sources [optional] <!-- Provide the basic links for the dataset. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the dataset is intended to be used. --> ### Direct Use <!-- This section describes suitable use cases for the dataset. 
--> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. --> [More Information Needed] ## Dataset Structure <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> [More Information Needed] ## Dataset Creation ### Curation Rationale <!-- Motivation for the creation of this dataset. --> [More Information Needed] ### Source Data <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). --> #### Data Collection and Processing <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. --> [More Information Needed] #### Who are the source data producers? <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. --> [More Information Needed] ### Annotations [optional] <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. --> #### Annotation process <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. --> [More Information Needed] #### Who are the annotators? <!-- This section describes the people or systems who created the annotations. 
--> [More Information Needed] #### Personal and Sensitive Information <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. ## Citation [optional] <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Dataset Card Authors [optional] [More Information Needed] ## Dataset Card Contact [More Information Needed]
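The "Latest results" block in the Einstein-v3-7B card above is a flat dict keyed like `harness|hendrycksTest-sociology|5`, with per-task `acc` values. As an illustrative sketch (not part of the leaderboard tooling), the MMLU average could be recomputed from such a dict like this:

```python
def mmlu_average(results):
    """Mean accuracy over the "hendrycksTest" (MMLU) subtasks.

    `results` uses the key layout shown under "Latest results" above,
    e.g. "harness|hendrycksTest-sociology|5" -> {"acc": ..., ...}.
    This helper is illustrative only.
    """
    accs = [v["acc"] for k, v in results.items()
            if k.startswith("harness|hendrycksTest-")]
    return sum(accs) / len(accs)
```

Applied to the full JSON above it would average all 57 MMLU subtasks while ignoring ARC, HellaSwag, TruthfulQA, Winogrande, and GSM8K entries.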
open-llm-leaderboard/details_PulsarAI__Einstein-v3-7B
[ "region:us" ]
2024-02-09T14:23:03+00:00
{"pretty_name": "Evaluation run of Weyaxi/Einstein-v3-7B", "dataset_summary": "Dataset automatically created during the evaluation run of model [Weyaxi/Einstein-v3-7B](https://huggingface.co/Weyaxi/Einstein-v3-7B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Weyaxi__Einstein-v3-7B\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2024-02-09T14:20:50.060350](https://huggingface.co/datasets/open-llm-leaderboard/details_Weyaxi__Einstein-v3-7B/blob/main/results_2024-02-09T14-20-50.060350.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6324191881027033,\n \"acc_stderr\": 0.03243554886430901,\n \"acc_norm\": 0.6363751404085887,\n \"acc_norm_stderr\": 0.033091894253237775,\n \"mc1\": 0.3488372093023256,\n \"mc1_stderr\": 0.016684419859986893,\n \"mc2\": 0.5118155053333627,\n \"mc2_stderr\": 0.014996398703517707\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.6023890784982935,\n \"acc_stderr\": 0.014301752223279542,\n \"acc_norm\": 0.6228668941979523,\n \"acc_norm_stderr\": 0.0141633668961926\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6344353714399522,\n \"acc_stderr\": 0.004806039039008958,\n \"acc_norm\": 0.8301135232025493,\n \"acc_norm_stderr\": 0.0037476555337545205\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.35,\n \"acc_stderr\": 0.0479372485441102,\n \"acc_norm\": 0.35,\n \"acc_norm_stderr\": 0.0479372485441102\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6370370370370371,\n \"acc_stderr\": 0.04153948404742398,\n \"acc_norm\": 0.6370370370370371,\n \"acc_norm_stderr\": 0.04153948404742398\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.6513157894736842,\n \"acc_stderr\": 0.0387813988879761,\n \"acc_norm\": 0.6513157894736842,\n \"acc_norm_stderr\": 0.0387813988879761\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.55,\n \"acc_stderr\": 0.05,\n \"acc_norm\": 0.55,\n \"acc_norm_stderr\": 0.05\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.6867924528301886,\n \"acc_stderr\": 0.028544793319055326,\n \"acc_norm\": 0.6867924528301886,\n \"acc_norm_stderr\": 0.028544793319055326\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7222222222222222,\n \"acc_stderr\": 0.037455547914624555,\n \"acc_norm\": 0.7222222222222222,\n \"acc_norm_stderr\": 0.037455547914624555\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.43,\n \"acc_stderr\": 
0.04975698519562428,\n \"acc_norm\": 0.43,\n \"acc_norm_stderr\": 0.04975698519562428\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.58,\n \"acc_stderr\": 0.049604496374885836,\n \"acc_norm\": 0.58,\n \"acc_norm_stderr\": 0.049604496374885836\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.38,\n \"acc_stderr\": 0.04878317312145633,\n \"acc_norm\": 0.38,\n \"acc_norm_stderr\": 0.04878317312145633\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6358381502890174,\n \"acc_stderr\": 0.03669072477416907,\n \"acc_norm\": 0.6358381502890174,\n \"acc_norm_stderr\": 0.03669072477416907\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.37254901960784315,\n \"acc_stderr\": 0.04810840148082636,\n \"acc_norm\": 0.37254901960784315,\n \"acc_norm_stderr\": 0.04810840148082636\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.8,\n \"acc_stderr\": 0.04020151261036845,\n \"acc_norm\": 0.8,\n \"acc_norm_stderr\": 0.04020151261036845\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.5276595744680851,\n \"acc_stderr\": 0.03263597118409769,\n \"acc_norm\": 0.5276595744680851,\n \"acc_norm_stderr\": 0.03263597118409769\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.4649122807017544,\n \"acc_stderr\": 0.046920083813689104,\n \"acc_norm\": 0.4649122807017544,\n \"acc_norm_stderr\": 0.046920083813689104\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.5448275862068965,\n \"acc_stderr\": 0.04149886942192117,\n \"acc_norm\": 0.5448275862068965,\n \"acc_norm_stderr\": 0.04149886942192117\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.3783068783068783,\n \"acc_stderr\": 0.024976954053155243,\n \"acc_norm\": 0.3783068783068783,\n \"acc_norm_stderr\": 0.024976954053155243\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.3888888888888889,\n \"acc_stderr\": 0.04360314860077459,\n 
\"acc_norm\": 0.3888888888888889,\n \"acc_norm_stderr\": 0.04360314860077459\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.33,\n \"acc_stderr\": 0.04725815626252605,\n \"acc_norm\": 0.33,\n \"acc_norm_stderr\": 0.04725815626252605\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.6838709677419355,\n \"acc_stderr\": 0.026450874489042774,\n \"acc_norm\": 0.6838709677419355,\n \"acc_norm_stderr\": 0.026450874489042774\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.5123152709359606,\n \"acc_stderr\": 0.035169204442208966,\n \"acc_norm\": 0.5123152709359606,\n \"acc_norm_stderr\": 0.035169204442208966\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.7,\n \"acc_stderr\": 0.046056618647183814,\n \"acc_norm\": 0.7,\n \"acc_norm_stderr\": 0.046056618647183814\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.7696969696969697,\n \"acc_stderr\": 0.032876667586034906,\n \"acc_norm\": 0.7696969696969697,\n \"acc_norm_stderr\": 0.032876667586034906\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.7727272727272727,\n \"acc_stderr\": 0.029857515673386417,\n \"acc_norm\": 0.7727272727272727,\n \"acc_norm_stderr\": 0.029857515673386417\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.8704663212435233,\n \"acc_stderr\": 0.024233532297758723,\n \"acc_norm\": 0.8704663212435233,\n \"acc_norm_stderr\": 0.024233532297758723\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.6384615384615384,\n \"acc_stderr\": 0.024359581465396993,\n \"acc_norm\": 0.6384615384615384,\n \"acc_norm_stderr\": 0.024359581465396993\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.32222222222222224,\n \"acc_stderr\": 0.028493465091028593,\n \"acc_norm\": 0.32222222222222224,\n \"acc_norm_stderr\": 0.028493465091028593\n },\n 
\"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.6512605042016807,\n \"acc_stderr\": 0.030956636328566548,\n \"acc_norm\": 0.6512605042016807,\n \"acc_norm_stderr\": 0.030956636328566548\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.33774834437086093,\n \"acc_stderr\": 0.038615575462551684,\n \"acc_norm\": 0.33774834437086093,\n \"acc_norm_stderr\": 0.038615575462551684\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.8201834862385321,\n \"acc_stderr\": 0.016465345467391528,\n \"acc_norm\": 0.8201834862385321,\n \"acc_norm_stderr\": 0.016465345467391528\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.5416666666666666,\n \"acc_stderr\": 0.03398110890294636,\n \"acc_norm\": 0.5416666666666666,\n \"acc_norm_stderr\": 0.03398110890294636\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.8137254901960784,\n \"acc_stderr\": 0.027325470966716312,\n \"acc_norm\": 0.8137254901960784,\n \"acc_norm_stderr\": 0.027325470966716312\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.7932489451476793,\n \"acc_stderr\": 0.0263616516683891,\n \"acc_norm\": 0.7932489451476793,\n \"acc_norm_stderr\": 0.0263616516683891\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6816143497757847,\n \"acc_stderr\": 0.03126580522513713,\n \"acc_norm\": 0.6816143497757847,\n \"acc_norm_stderr\": 0.03126580522513713\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.7480916030534351,\n \"acc_stderr\": 0.03807387116306085,\n \"acc_norm\": 0.7480916030534351,\n \"acc_norm_stderr\": 0.03807387116306085\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.7933884297520661,\n \"acc_stderr\": 0.03695980128098825,\n \"acc_norm\": 0.7933884297520661,\n \"acc_norm_stderr\": 0.03695980128098825\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7962962962962963,\n \"acc_stderr\": 0.03893542518824847,\n 
\"acc_norm\": 0.7962962962962963,\n \"acc_norm_stderr\": 0.03893542518824847\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.7730061349693251,\n \"acc_stderr\": 0.03291099578615771,\n \"acc_norm\": 0.7730061349693251,\n \"acc_norm_stderr\": 0.03291099578615771\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.49107142857142855,\n \"acc_stderr\": 0.04745033255489123,\n \"acc_norm\": 0.49107142857142855,\n \"acc_norm_stderr\": 0.04745033255489123\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.7475728155339806,\n \"acc_stderr\": 0.04301250399690878,\n \"acc_norm\": 0.7475728155339806,\n \"acc_norm_stderr\": 0.04301250399690878\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8803418803418803,\n \"acc_stderr\": 0.021262719400406974,\n \"acc_norm\": 0.8803418803418803,\n \"acc_norm_stderr\": 0.021262719400406974\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.73,\n \"acc_stderr\": 0.044619604333847394,\n \"acc_norm\": 0.73,\n \"acc_norm_stderr\": 0.044619604333847394\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8275862068965517,\n \"acc_stderr\": 0.013507943909371802,\n \"acc_norm\": 0.8275862068965517,\n \"acc_norm_stderr\": 0.013507943909371802\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.7254335260115607,\n \"acc_stderr\": 0.02402774515526502,\n \"acc_norm\": 0.7254335260115607,\n \"acc_norm_stderr\": 0.02402774515526502\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.4402234636871508,\n \"acc_stderr\": 0.01660256461504993,\n \"acc_norm\": 0.4402234636871508,\n \"acc_norm_stderr\": 0.01660256461504993\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.7450980392156863,\n \"acc_stderr\": 0.02495418432487991,\n \"acc_norm\": 0.7450980392156863,\n \"acc_norm_stderr\": 0.02495418432487991\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.707395498392283,\n \"acc_stderr\": 0.02583989833487798,\n 
\"acc_norm\": 0.707395498392283,\n \"acc_norm_stderr\": 0.02583989833487798\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.6944444444444444,\n \"acc_stderr\": 0.02563082497562135,\n \"acc_norm\": 0.6944444444444444,\n \"acc_norm_stderr\": 0.02563082497562135\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.4787234042553192,\n \"acc_stderr\": 0.029800481645628693,\n \"acc_norm\": 0.4787234042553192,\n \"acc_norm_stderr\": 0.029800481645628693\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4595827900912647,\n \"acc_stderr\": 0.012728446067669971,\n \"acc_norm\": 0.4595827900912647,\n \"acc_norm_stderr\": 0.012728446067669971\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.6875,\n \"acc_stderr\": 0.02815637344037142,\n \"acc_norm\": 0.6875,\n \"acc_norm_stderr\": 0.02815637344037142\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.6486928104575164,\n \"acc_stderr\": 0.019312676065786554,\n \"acc_norm\": 0.6486928104575164,\n \"acc_norm_stderr\": 0.019312676065786554\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6636363636363637,\n \"acc_stderr\": 0.04525393596302506,\n \"acc_norm\": 0.6636363636363637,\n \"acc_norm_stderr\": 0.04525393596302506\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.7428571428571429,\n \"acc_stderr\": 0.02797982353874455,\n \"acc_norm\": 0.7428571428571429,\n \"acc_norm_stderr\": 0.02797982353874455\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.7562189054726368,\n \"acc_stderr\": 0.030360490154014635,\n \"acc_norm\": 0.7562189054726368,\n \"acc_norm_stderr\": 0.030360490154014635\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.88,\n \"acc_stderr\": 0.03265986323710906,\n \"acc_norm\": 0.88,\n \"acc_norm_stderr\": 0.03265986323710906\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5060240963855421,\n \"acc_stderr\": 0.03892212195333045,\n 
\"acc_norm\": 0.5060240963855421,\n \"acc_norm_stderr\": 0.03892212195333045\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.8304093567251462,\n \"acc_stderr\": 0.02878210810540171,\n \"acc_norm\": 0.8304093567251462,\n \"acc_norm_stderr\": 0.02878210810540171\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.3488372093023256,\n \"mc1_stderr\": 0.016684419859986893,\n \"mc2\": 0.5118155053333627,\n \"mc2_stderr\": 0.014996398703517707\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7995264404104183,\n \"acc_stderr\": 0.011251958281205083\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.44806671721000757,\n \"acc_stderr\": 0.013697992668274523\n }\n}\n```", "repo_url": "https://huggingface.co/Weyaxi/Einstein-v3-7B", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2024_02_09T14_20_50.060350", "path": ["**/details_harness|arc:challenge|25_2024-02-09T14-20-50.060350.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2024-02-09T14-20-50.060350.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2024_02_09T14_20_50.060350", "path": ["**/details_harness|gsm8k|5_2024-02-09T14-20-50.060350.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2024-02-09T14-20-50.060350.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2024_02_09T14_20_50.060350", "path": ["**/details_harness|hellaswag|10_2024-02-09T14-20-50.060350.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2024-02-09T14-20-50.060350.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2024_02_09T14_20_50.060350", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-09T14-20-50.060350.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-02-09T14-20-50.060350.parquet", 
"**/details_harness|hendrycksTest-astronomy|5_2024-02-09T14-20-50.060350.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-02-09T14-20-50.060350.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-09T14-20-50.060350.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-02-09T14-20-50.060350.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-02-09T14-20-50.060350.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-02-09T14-20-50.060350.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-02-09T14-20-50.060350.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-02-09T14-20-50.060350.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-02-09T14-20-50.060350.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-02-09T14-20-50.060350.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-09T14-20-50.060350.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-02-09T14-20-50.060350.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-09T14-20-50.060350.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-09T14-20-50.060350.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-02-09T14-20-50.060350.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-02-09T14-20-50.060350.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-02-09T14-20-50.060350.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-09T14-20-50.060350.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-09T14-20-50.060350.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-09T14-20-50.060350.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-02-09T14-20-50.060350.parquet", 
"**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-09T14-20-50.060350.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-09T14-20-50.060350.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-09T14-20-50.060350.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-09T14-20-50.060350.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-02-09T14-20-50.060350.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-09T14-20-50.060350.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-09T14-20-50.060350.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-09T14-20-50.060350.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-09T14-20-50.060350.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-02-09T14-20-50.060350.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-02-09T14-20-50.060350.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-02-09T14-20-50.060350.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-02-09T14-20-50.060350.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-09T14-20-50.060350.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-02-09T14-20-50.060350.parquet", "**/details_harness|hendrycksTest-management|5_2024-02-09T14-20-50.060350.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-02-09T14-20-50.060350.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-02-09T14-20-50.060350.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-02-09T14-20-50.060350.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-02-09T14-20-50.060350.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-09T14-20-50.060350.parquet", 
"**/details_harness|hendrycksTest-nutrition|5_2024-02-09T14-20-50.060350.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-02-09T14-20-50.060350.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-02-09T14-20-50.060350.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-02-09T14-20-50.060350.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-02-09T14-20-50.060350.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-02-09T14-20-50.060350.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-02-09T14-20-50.060350.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-02-09T14-20-50.060350.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-02-09T14-20-50.060350.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-02-09T14-20-50.060350.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-09T14-20-50.060350.parquet", "**/details_harness|hendrycksTest-virology|5_2024-02-09T14-20-50.060350.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-02-09T14-20-50.060350.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-09T14-20-50.060350.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-02-09T14-20-50.060350.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-02-09T14-20-50.060350.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-02-09T14-20-50.060350.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-09T14-20-50.060350.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-02-09T14-20-50.060350.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-02-09T14-20-50.060350.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-02-09T14-20-50.060350.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-02-09T14-20-50.060350.parquet", 
"**/details_harness|hendrycksTest-college_medicine|5_2024-02-09T14-20-50.060350.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-02-09T14-20-50.060350.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-02-09T14-20-50.060350.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-09T14-20-50.060350.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-02-09T14-20-50.060350.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-09T14-20-50.060350.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-09T14-20-50.060350.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-02-09T14-20-50.060350.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-02-09T14-20-50.060350.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-02-09T14-20-50.060350.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-09T14-20-50.060350.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-09T14-20-50.060350.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-09T14-20-50.060350.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-02-09T14-20-50.060350.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-09T14-20-50.060350.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-09T14-20-50.060350.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-09T14-20-50.060350.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-09T14-20-50.060350.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-02-09T14-20-50.060350.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-09T14-20-50.060350.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-09T14-20-50.060350.parquet", 
"**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-09T14-20-50.060350.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-09T14-20-50.060350.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-02-09T14-20-50.060350.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-02-09T14-20-50.060350.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-02-09T14-20-50.060350.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-02-09T14-20-50.060350.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-09T14-20-50.060350.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-02-09T14-20-50.060350.parquet", "**/details_harness|hendrycksTest-management|5_2024-02-09T14-20-50.060350.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-02-09T14-20-50.060350.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-02-09T14-20-50.060350.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-02-09T14-20-50.060350.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-02-09T14-20-50.060350.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-09T14-20-50.060350.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-02-09T14-20-50.060350.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-02-09T14-20-50.060350.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-02-09T14-20-50.060350.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-02-09T14-20-50.060350.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-02-09T14-20-50.060350.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-02-09T14-20-50.060350.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-02-09T14-20-50.060350.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-02-09T14-20-50.060350.parquet", 
"**/details_harness|hendrycksTest-security_studies|5_2024-02-09T14-20-50.060350.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-02-09T14-20-50.060350.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-09T14-20-50.060350.parquet", "**/details_harness|hendrycksTest-virology|5_2024-02-09T14-20-50.060350.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-02-09T14-20-50.060350.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2024_02_09T14_20_50.060350", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-09T14-20-50.060350.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-09T14-20-50.060350.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2024_02_09T14_20_50.060350", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-02-09T14-20-50.060350.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-02-09T14-20-50.060350.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2024_02_09T14_20_50.060350", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-02-09T14-20-50.060350.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-02-09T14-20-50.060350.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2024_02_09T14_20_50.060350", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-02-09T14-20-50.060350.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-02-09T14-20-50.060350.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2024_02_09T14_20_50.060350", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-09T14-20-50.060350.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-09T14-20-50.060350.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2024_02_09T14_20_50.060350", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-02-09T14-20-50.060350.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-02-09T14-20-50.060350.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2024_02_09T14_20_50.060350", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-02-09T14-20-50.060350.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-02-09T14-20-50.060350.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2024_02_09T14_20_50.060350", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-02-09T14-20-50.060350.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-02-09T14-20-50.060350.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2024_02_09T14_20_50.060350", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-02-09T14-20-50.060350.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-02-09T14-20-50.060350.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2024_02_09T14_20_50.060350", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-02-09T14-20-50.060350.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-02-09T14-20-50.060350.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2024_02_09T14_20_50.060350", "path": 
["**/details_harness|hendrycksTest-college_physics|5_2024-02-09T14-20-50.060350.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-02-09T14-20-50.060350.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2024_02_09T14_20_50.060350", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-02-09T14-20-50.060350.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-02-09T14-20-50.060350.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2024_02_09T14_20_50.060350", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-09T14-20-50.060350.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-09T14-20-50.060350.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2024_02_09T14_20_50.060350", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-02-09T14-20-50.060350.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-02-09T14-20-50.060350.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2024_02_09T14_20_50.060350", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-09T14-20-50.060350.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-09T14-20-50.060350.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2024_02_09T14_20_50.060350", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-09T14-20-50.060350.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-09T14-20-50.060350.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", 
"data_files": [{"split": "2024_02_09T14_20_50.060350", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-02-09T14-20-50.060350.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-02-09T14-20-50.060350.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2024_02_09T14_20_50.060350", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-02-09T14-20-50.060350.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-02-09T14-20-50.060350.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2024_02_09T14_20_50.060350", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-02-09T14-20-50.060350.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-02-09T14-20-50.060350.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2024_02_09T14_20_50.060350", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-09T14-20-50.060350.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-09T14-20-50.060350.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2024_02_09T14_20_50.060350", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-09T14-20-50.060350.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-09T14-20-50.060350.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2024_02_09T14_20_50.060350", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-09T14-20-50.060350.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-09T14-20-50.060350.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2024_02_09T14_20_50.060350", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-02-09T14-20-50.060350.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-02-09T14-20-50.060350.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2024_02_09T14_20_50.060350", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-09T14-20-50.060350.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-09T14-20-50.060350.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2024_02_09T14_20_50.060350", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-09T14-20-50.060350.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-09T14-20-50.060350.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2024_02_09T14_20_50.060350", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-09T14-20-50.060350.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-09T14-20-50.060350.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2024_02_09T14_20_50.060350", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-09T14-20-50.060350.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-09T14-20-50.060350.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2024_02_09T14_20_50.060350", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-02-09T14-20-50.060350.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-02-09T14-20-50.060350.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2024_02_09T14_20_50.060350", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-09T14-20-50.060350.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-09T14-20-50.060350.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2024_02_09T14_20_50.060350", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-09T14-20-50.060350.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-09T14-20-50.060350.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2024_02_09T14_20_50.060350", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-09T14-20-50.060350.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-09T14-20-50.060350.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2024_02_09T14_20_50.060350", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-09T14-20-50.060350.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-09T14-20-50.060350.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2024_02_09T14_20_50.060350", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-02-09T14-20-50.060350.parquet"]}, {"split": "latest", 
"path": ["**/details_harness|hendrycksTest-human_aging|5_2024-02-09T14-20-50.060350.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2024_02_09T14_20_50.060350", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-02-09T14-20-50.060350.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-02-09T14-20-50.060350.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2024_02_09T14_20_50.060350", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-02-09T14-20-50.060350.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-02-09T14-20-50.060350.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2024_02_09T14_20_50.060350", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-02-09T14-20-50.060350.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-02-09T14-20-50.060350.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2024_02_09T14_20_50.060350", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-09T14-20-50.060350.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-09T14-20-50.060350.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2024_02_09T14_20_50.060350", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-02-09T14-20-50.060350.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-02-09T14-20-50.060350.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2024_02_09T14_20_50.060350", "path": ["**/details_harness|hendrycksTest-management|5_2024-02-09T14-20-50.060350.parquet"]}, 
{"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2024-02-09T14-20-50.060350.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2024_02_09T14_20_50.060350", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-02-09T14-20-50.060350.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-02-09T14-20-50.060350.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2024_02_09T14_20_50.060350", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-02-09T14-20-50.060350.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-02-09T14-20-50.060350.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2024_02_09T14_20_50.060350", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-02-09T14-20-50.060350.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-02-09T14-20-50.060350.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2024_02_09T14_20_50.060350", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-02-09T14-20-50.060350.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-02-09T14-20-50.060350.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2024_02_09T14_20_50.060350", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-09T14-20-50.060350.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-09T14-20-50.060350.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2024_02_09T14_20_50.060350", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-02-09T14-20-50.060350.parquet"]}, {"split": "latest", 
"path": ["**/details_harness|hendrycksTest-nutrition|5_2024-02-09T14-20-50.060350.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2024_02_09T14_20_50.060350", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-02-09T14-20-50.060350.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-02-09T14-20-50.060350.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2024_02_09T14_20_50.060350", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-02-09T14-20-50.060350.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-02-09T14-20-50.060350.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2024_02_09T14_20_50.060350", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-02-09T14-20-50.060350.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-02-09T14-20-50.060350.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2024_02_09T14_20_50.060350", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-02-09T14-20-50.060350.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-02-09T14-20-50.060350.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2024_02_09T14_20_50.060350", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-02-09T14-20-50.060350.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-02-09T14-20-50.060350.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2024_02_09T14_20_50.060350", "path": 
["**/details_harness|hendrycksTest-professional_psychology|5_2024-02-09T14-20-50.060350.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-02-09T14-20-50.060350.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2024_02_09T14_20_50.060350", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-02-09T14-20-50.060350.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-02-09T14-20-50.060350.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2024_02_09T14_20_50.060350", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-02-09T14-20-50.060350.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-02-09T14-20-50.060350.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2024_02_09T14_20_50.060350", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-02-09T14-20-50.060350.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-02-09T14-20-50.060350.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2024_02_09T14_20_50.060350", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-09T14-20-50.060350.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-09T14-20-50.060350.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2024_02_09T14_20_50.060350", "path": ["**/details_harness|hendrycksTest-virology|5_2024-02-09T14-20-50.060350.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2024-02-09T14-20-50.060350.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2024_02_09T14_20_50.060350", 
"path": ["**/details_harness|hendrycksTest-world_religions|5_2024-02-09T14-20-50.060350.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-02-09T14-20-50.060350.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2024_02_09T14_20_50.060350", "path": ["**/details_harness|truthfulqa:mc|0_2024-02-09T14-20-50.060350.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2024-02-09T14-20-50.060350.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2024_02_09T14_20_50.060350", "path": ["**/details_harness|winogrande|5_2024-02-09T14-20-50.060350.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2024-02-09T14-20-50.060350.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2024_02_09T14_20_50.060350", "path": ["results_2024-02-09T14-20-50.060350.parquet"]}, {"split": "latest", "path": ["results_2024-02-09T14-20-50.060350.parquet"]}]}]}
2024-02-12T09:52:41+00:00
[]
[]
TAGS #region-us
# Dataset Card for Evaluation run of Weyaxi/Einstein-v3-7B Dataset automatically created during the evaluation run of model Weyaxi/Einstein-v3-7B on the Open LLM Leaderboard. The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard). To load the details from a run, you can for instance do the following: ## Latest results These are the latest results from run 2024-02-09T14:20:50.060350 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each one in the results and in the "latest" split for each eval): ## Dataset Details ### Dataset Description - Curated by: - Funded by [optional]: - Shared by [optional]: - Language(s) (NLP): - License: ### Dataset Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Out-of-Scope Use ## Dataset Structure ## Dataset Creation ### Curation Rationale ### Source Data #### Data Collection and Processing #### Who are the source data producers? ### Annotations [optional] #### Annotation process #### Who are the annotators? #### Personal and Sensitive Information ## Bias, Risks, and Limitations ### Recommendations Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Dataset Card Authors [optional] ## Dataset Card Contact
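The loading step referenced above ("you can for instance do the following") can be sketched as below. This is a minimal sketch, not the card's original snippet: the repository id follows the leaderboard's `details_<org>__<model>` naming convention inferred from the config metadata above, and the helper names (`details_repo_id`, `load_eval_details`) are illustrative.

```python
def details_repo_id(org: str, model: str) -> str:
    # Leaderboard per-model detail datasets follow this naming convention
    # (assumption inferred from the repo ids in the config metadata above).
    return f"open-llm-leaderboard/details_{org}__{model}"


def load_eval_details(org: str, model: str, config_name: str):
    # Lazy import so the helper above stays usable without `datasets` installed.
    from datasets import load_dataset

    # e.g. config_name="harness_winogrande_5"; the "latest" split always
    # points at the most recent run for that task.
    return load_dataset(details_repo_id(org, model), config_name, split="latest")


repo = details_repo_id("Weyaxi", "Einstein-v3-7B")
# repo == "open-llm-leaderboard/details_Weyaxi__Einstein-v3-7B"
```

Each `config_name` in the metadata above (e.g. `harness_hendrycksTest_virology_5`) maps to one evaluated task, so the same helper covers every configuration of the run.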
[ "# Dataset Card for Evaluation run of Weyaxi/Einstein-v3-7B\n\n\n\nDataset automatically created during the evaluation run of model Weyaxi/Einstein-v3-7B on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2024-02-09T14:20:50.060350 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each one in the results and in the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Evaluation run of Weyaxi/Einstein-v3-7B\n\n\n\nDataset automatically created during the evaluation run of model Weyaxi/Einstein-v3-7B on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2024-02-09T14:20:50.060350 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each one in the results and in the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]