---
language:
  - en
datasets:
  - mindchain/wikitext2
  - yahma/alpaca-cleaned
metrics:
  - perplexity
  - accuracy
base_model:
  - TinyLlama/TinyLlama_v1.1
model-index:
  - name: TinyLlama_v1.1_mix_wikitext_alpaca_1bit_BitDistiller_baseline
    results:
      - task:
          type: multiple-choice
          name: QA Benchmarking
        dataset:
          type: allenai/arc
          name: ARC-Challenge
          config: challenge
          split: test
        metrics:
          - type: accuracy
            name: Accuracy
            value: 0.2150170648464164
          - type: accuracy
            name: Normalized Accuracy
            value: 0.24744027303754265
      - task:
          type: multiple-choice
          name: QA Benchmarking
        dataset:
          type: hellaswag
          name: HellaSwag
          split: test
        metrics:
          - type: accuracy
            name: Accuracy
            value: 0.2568213503286198
          - type: accuracy
            name: Normalized Accuracy
            value: 0.253359888468433
      - task:
          type: multiple-choice
          name: QA Benchmarking
        dataset:
          type: piqa
          name: PIQA
          split: validation
        metrics:
          - type: accuracy
            name: Accuracy
            value: 0.5282916213275299
          - type: accuracy
            name: Normalized Accuracy
            value: 0.5027203482845702
      - task:
          type: multiple-choice
          name: QA Benchmarking
        dataset:
          type: winogrande
          name: Winogrande
          split: test
        metrics:
          - type: accuracy
            name: Accuracy
            value: 0.5122336227308603
      - task:
          type: multiple-choice
          name: QA Benchmarking
        dataset:
          type: aggregated
          name: QA-Avg
        metrics:
          - type: accuracy
            name: QA Average
            value: 0.3780991480835666
---

# TinyLlama_v1.1_1bit_BitDistiller

This is a 1-bit quantized version of TinyLlama v1.1, produced with the BitDistiller quantization-aware training framework. BitDistiller combines asymmetric weight quantization with self-distillation via a Confidence-Aware KL Divergence (CAKLD) objective to retain as much accuracy as possible under extreme compression. The model is fine-tuned on a mix of the WikiText-2 and Alpaca-cleaned datasets and evaluated on multiple-choice QA benchmarks.
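
To make the quantization scheme concrete, here is a minimal sketch of asymmetric 1-bit fake quantization with a per-group scale and zero point. This is an illustration only, not BitDistiller's actual quantizer: BitDistiller searches for asymmetric clipping thresholds rather than taking the raw per-group min/max, and the group size below is an assumed value.

```python
import torch

def fake_quant_1bit_asym(w: torch.Tensor, group_size: int = 128) -> torch.Tensor:
    """Illustrative asymmetric 1-bit fake quantization (not BitDistiller's exact code).

    Snaps each group of `group_size` weights to one of two levels,
    {zero_point, zero_point + scale}, derived from that group's range.
    Assumes w.numel() is divisible by group_size.
    """
    g = w.reshape(-1, group_size)
    zero_point = g.min(dim=1, keepdim=True).values                # per-group minimum
    scale = (g.max(dim=1, keepdim=True).values - zero_point).clamp(min=1e-8)
    q = torch.round((g - zero_point) / scale).clamp(0, 1)         # 1 bit -> levels {0, 1}
    return (q * scale + zero_point).reshape(w.shape)              # dequantize for QAT
```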

## Key Features

- 1-bit quantization for ultra-efficient inference.
- Asymmetric weight clipping to reduce precision loss (see the quantization sketch above).
- CAKLD knowledge distillation to preserve performance (see the loss sketch below).
- Evaluated on ARC-Challenge, HellaSwag, PIQA, and Winogrande.
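
BitDistiller's self-distillation objective, CAKLD (Confidence-Aware KL Divergence), blends the forward and reverse KL between the full-precision teacher's logits and the quantized student's logits. A minimal sketch, assuming the mixing coefficient `gamma` is supplied externally (the BitDistiller paper estimates it from the teacher's average confidence on the training data):

```python
import torch
import torch.nn.functional as F

def cakld_loss(student_logits: torch.Tensor,
               teacher_logits: torch.Tensor,
               gamma: float) -> torch.Tensor:
    """Confidence-Aware KLD: gamma-weighted blend of forward and reverse KL."""
    s_logp = F.log_softmax(student_logits, dim=-1)
    t_logp = F.log_softmax(teacher_logits, dim=-1)
    # Forward KL (mode-covering): KL(teacher || student).
    fwd = (t_logp.exp() * (t_logp - s_logp)).sum(-1).mean()
    # Reverse KL (mode-seeking): KL(student || teacher).
    rev = (s_logp.exp() * (s_logp - t_logp)).sum(-1).mean()
    return gamma * fwd + (1.0 - gamma) * rev
```

The accuracy / normalized-accuracy pairs in the metadata match the `acc` / `acc_norm` fields reported by EleutherAI's lm-evaluation-harness. Assuming that harness was used, an evaluation along these lines should reproduce the numbers (the repo id is a placeholder for this model's Hub path):

```python
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=<this-model-repo-id>",  # placeholder Hub path
    tasks=["arc_challenge", "hellaswag", "piqa", "winogrande"],
)
print(results["results"])
```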