---
license: apache-2.0
model-index:
  - name: COKAL-v1-70B
    results:
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: AI2 Reasoning Challenge (25-Shot)
          type: ai2_arc
          config: ARC-Challenge
          split: test
          args:
            num_few_shot: 25
        metrics:
          - type: acc_norm
            value: 87.46
            name: normalized accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=DopeorNope/COKAL-v1-70B
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: HellaSwag (10-Shot)
          type: hellaswag
          split: validation
          args:
            num_few_shot: 10
        metrics:
          - type: acc_norm
            value: 83.29
            name: normalized accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=DopeorNope/COKAL-v1-70B
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MMLU (5-Shot)
          type: cais/mmlu
          config: all
          split: test
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 68.13
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=DopeorNope/COKAL-v1-70B
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: TruthfulQA (0-shot)
          type: truthful_qa
          config: multiple_choice
          split: validation
          args:
            num_few_shot: 0
        metrics:
          - type: mc2
            value: 72.79
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=DopeorNope/COKAL-v1-70B
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: Winogrande (5-shot)
          type: winogrande
          config: winogrande_xl
          split: validation
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 80.27
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=DopeorNope/COKAL-v1-70B
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: GSM8k (5-shot)
          type: gsm8k
          config: main
          split: test
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 39.27
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=DopeorNope/COKAL-v1-70B
          name: Open LLM Leaderboard
---

# 🐻‍❄️COKAL-v1_70B🐻‍❄️


## Model Details

**Model Developers** Seungyoo Lee (DopeorNope)

**Input** Models input text only.

**Output** Models generate text only.

**Model Architecture**
COKAL-v1_70B is an auto-regressive 70B-parameter language model based on the LLaMA2 transformer architecture.

**Base Model**

**Training Dataset**

**Training**
I trained the model on a machine with 8× NVIDIA A100 GPUs.

**Implementation Code**


```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

repo = "DopeorNope/COKAL-v1_70B"
model = AutoModelForCausalLM.from_pretrained(
    repo,
    return_dict=True,
    torch_dtype=torch.float16,
    device_map="auto",
)
model_tokenizer = AutoTokenizer.from_pretrained(repo)
```
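
Once the model and tokenizer are loaded, text can be generated with the standard `generate` API. The sketch below is illustrative only: the card does not specify a prompt template, so the prompt format and sampling settings shown here are assumptions rather than the author's recommended configuration.

```python
# Minimal generation sketch; the prompt format and sampling
# parameters below are illustrative assumptions, not settings
# prescribed by the model card.
prompt = "### Instruction: Briefly explain what an auto-regressive language model is.\n### Response:"
inputs = model_tokenizer(prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    output_ids = model.generate(
        **inputs,
        max_new_tokens=256,
        do_sample=True,
        temperature=0.7,
        top_p=0.9,
    )

# Decode only the newly generated tokens that follow the prompt.
new_tokens = output_ids[0][inputs["input_ids"].shape[1]:]
print(model_tokenizer.decode(new_tokens, skip_special_tokens=True))
```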

## Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Metric                            | Value |
|-----------------------------------|------:|
| Avg.                              | 71.87 |
| AI2 Reasoning Challenge (25-Shot) | 87.46 |
| HellaSwag (10-Shot)               | 83.29 |
| MMLU (5-Shot)                     | 68.13 |
| TruthfulQA (0-shot)               | 72.79 |
| Winogrande (5-shot)               | 80.27 |
| GSM8k (5-shot)                    | 39.27 |
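
The leaderboard computes these scores with EleutherAI's lm-evaluation-harness. A rough sketch of reproducing a single task locally is shown below; the task name, few-shot count, and batch size follow the ARC row above, but the exact harness version and flags used by the leaderboard may differ, so treat this as an approximation rather than the official evaluation recipe.

```python
# Rough local evaluation sketch with lm-evaluation-harness
# (pip install lm-eval); the leaderboard's exact harness version
# and settings may differ from what is shown here.
from lm_eval import simple_evaluate

results = simple_evaluate(
    model="hf",
    model_args="pretrained=DopeorNope/COKAL-v1_70B,dtype=float16",
    tasks=["arc_challenge"],   # AI2 Reasoning Challenge
    num_fewshot=25,            # 25-shot, matching the leaderboard setting
    batch_size=1,
)
print(results["results"]["arc_challenge"])
```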