# ShortKing-1.4b-v0.1
---
language:
  - en
license: cc-by-nc-4.0
datasets:
  - vicgalle/alpaca-gpt4
model-index:
  - name: ShortKingv0.1
    results:
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: AI2 Reasoning Challenge (25-Shot)
          type: ai2_arc
          config: ARC-Challenge
          split: test
          args:
            num_few_shot: 25
        metrics:
          - type: acc_norm
            value: 34.22
            name: normalized accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=AtAndDev/ShortKingv0.1
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: HellaSwag (10-Shot)
          type: hellaswag
          split: validation
          args:
            num_few_shot: 10
        metrics:
          - type: acc_norm
            value: 54.59
            name: normalized accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=AtAndDev/ShortKingv0.1
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MMLU (5-Shot)
          type: cais/mmlu
          config: all
          split: test
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 25.78
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=AtAndDev/ShortKingv0.1
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: TruthfulQA (0-shot)
          type: truthful_qa
          config: multiple_choice
          split: validation
          args:
            num_few_shot: 0
        metrics:
          - type: mc2
            value: 41.64
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=AtAndDev/ShortKingv0.1
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: Winogrande (5-shot)
          type: winogrande
          config: winogrande_xl
          split: validation
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 56.04
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=AtAndDev/ShortKingv0.1
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: GSM8k (5-shot)
          type: gsm8k
          config: main
          split: test
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 0.45
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=AtAndDev/ShortKingv0.1
          name: Open LLM Leaderboard
---

## Model Overview

Model license: cc-by-nc-4.0
This model is based on EleutherAI/pythia-1.4b-deduped and was LoRA-finetuned on the vicgalle/alpaca-gpt4 dataset.

## Prompt Template: Alpaca

```
<system_prompt>

### Instruction:
<user_message>

### Response:
<assistant_response>
```
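
This template can be applied directly with `transformers`. The snippet below is a minimal inference sketch, not an official example: the repository id and the system prompt wording are assumptions, so substitute the actual values for your setup.

```python
# Minimal inference sketch (repo id assumed from this card's title).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "AtAndDev/ShortKing-1.4b-v0.1"  # assumption; adjust if the hub id differs
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Assemble the Alpaca-style prompt exactly as in the template above.
system_prompt = (
    "Below is an instruction that describes a task. "  # assumed wording
    "Write a response that appropriately completes the request."
)
instruction = "Explain what a language model is in one sentence."
prompt = f"{system_prompt}\n\n### Instruction:\n{instruction}\n\n### Response:\n"

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=128,
    pad_token_id=tokenizer.eos_token_id,  # GPT-NeoX tokenizers define no pad token
)
# Strip the prompt tokens and print only the model's continuation.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```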

## Intended Use

THIS IS A TEST MODEL; IT IS NOT INTENDED FOR REAL APPLICATIONS BY ANY MEANS. HOWEVER, A NEW MODEL IN THE SAME VEIN IS COMING.
This model series targets small but demanding applications.

## Training Details

Training took 2:31:23 using QLoRA on a single T4 GPU, with the following hyperparameters (a configuration sketch follows the list):

  • epochs: 1
  • train batch size: 12
  • eval batch size: 12
  • gradient accumulation steps: 1
  • maximum gradient norm: 0.3
  • learning rate: 2e-4
  • weight decay: 0.001
  • optimizer: paged_adamw_32bit
  • learning rate schedule: cosine
  • warmup ratio (linear): 0.03
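
For reference, the list above corresponds roughly to the following QLoRA setup with `transformers` and `peft`. This is an illustrative sketch under stated assumptions, not the author's training script: the 4-bit quantization settings and the LoRA rank, alpha, and dropout are not given in the card and are placeholders.

```python
# Hypothetical reconstruction of the training configuration listed above.
import torch
from peft import LoraConfig
from transformers import AutoModelForCausalLM, BitsAndBytesConfig, TrainingArguments

# QLoRA: load the base model with 4-bit NF4 quantization (assumed settings).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,  # T4 GPUs lack bfloat16 support
)
model = AutoModelForCausalLM.from_pretrained(
    "EleutherAI/pythia-1.4b-deduped",
    quantization_config=bnb_config,
    device_map="auto",
)

# LoRA adapter; r / alpha / dropout are NOT stated in the card (placeholders).
peft_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

# Hyperparameters taken verbatim from the list above.
training_args = TrainingArguments(
    output_dir="./shortking-qlora",
    num_train_epochs=1,
    per_device_train_batch_size=12,
    per_device_eval_batch_size=12,
    gradient_accumulation_steps=1,
    max_grad_norm=0.3,
    learning_rate=2e-4,
    weight_decay=0.001,
    optim="paged_adamw_32bit",
    lr_scheduler_type="cosine",
    warmup_ratio=0.03,
)
# These objects would then be passed to a trainer (e.g. trl's SFTTrainer)
# together with the vicgalle/alpaca-gpt4 dataset.
```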

## Open LLM Leaderboard Evaluation Results

Detailed results can be found [here](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=AtAndDev/ShortKingv0.1).

| Metric                            | Value |
|-----------------------------------|-------|
| Avg.                              | 35.45 |
| AI2 Reasoning Challenge (25-Shot) | 34.22 |
| HellaSwag (10-Shot)               | 54.59 |
| MMLU (5-Shot)                     | 25.78 |
| TruthfulQA (0-shot)               | 41.64 |
| Winogrande (5-shot)               | 56.04 |
| GSM8k (5-shot)                    | 0.45  |