# Open-Assistant Falcon 7B SFT MIX Model

This model is a fine-tuning of TII's Falcon 7B LLM. It was trained on a mixture of OASST top-2 threads (exported on June 2, 2023), Dolly-15k, and synthetic instruction datasets (see the dataset configuration below).

## Model Details

### Prompting

Two special tokens are used to mark the beginning of user and assistant turns: `<|prompter|>` and `<|assistant|>`. Each turn ends with an `<|endoftext|>` token.

Input prompt example:

```
<|prompter|>What is a meme, and what's the history behind this word?<|endoftext|><|assistant|>
```

The input ends with the `<|assistant|>` token to signal that the model should start generating the assistant reply.
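For multi-turn conversations, each prior turn is serialized with the same tokens. The helper below is a minimal sketch of that serialization; the `format_conversation` function and its `(role, text)` tuple format are illustrative conveniences, not part of the model's tooling:

```python
# Special tokens used by the OASST prompt format.
PROMPTER = "<|prompter|>"
ASSISTANT = "<|assistant|>"
EOS = "<|endoftext|>"

def format_conversation(turns):
    """Serialize (role, text) pairs into a single prompt string.

    `turns` is a list like [("prompter", "..."), ("assistant", "...")];
    the result ends with <|assistant|> so the model continues as the assistant.
    """
    parts = []
    for role, text in turns:
        token = PROMPTER if role == "prompter" else ASSISTANT
        parts.append(f"{token}{text}{EOS}")
    return "".join(parts) + ASSISTANT

prompt = format_conversation([
    ("prompter", "What is a meme, and what's the history behind this word?"),
])
# -> "<|prompter|>What is a meme, and what's the history behind this word?<|endoftext|><|assistant|>"
```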

### Sample Code

```python
from transformers import AutoTokenizer
import transformers
import torch

model = "OpenAssistant/falcon-7b-sft-mix-2000"

tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    torch_dtype=torch.bfloat16,  # the model was trained in bf16
    trust_remote_code=True,      # Falcon ships custom modeling code
    device_map="auto",           # spread the weights across available devices
)

input_text = "<|prompter|>What is a meme, and what's the history behind this word?<|endoftext|><|assistant|>"

sequences = pipeline(
    input_text,
    max_length=500,
    do_sample=True,
    return_full_text=False,      # return only the generated completion
    top_k=10,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,  # stop at <|endoftext|>
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
```

## Configuration Details

Model:

```yaml
falcon-7b:
  dtype: bf16
  log_dir: "falcon_log_7b"
  learning_rate: 1e-5
  model_name: "tiiuae/falcon-7b"
  deepspeed_config: configs/zero_config.json
  output_dir: falcon
  weight_decay: 0.0
  max_length: 2048
  warmup_steps: 20
  gradient_checkpointing: true
  gradient_accumulation_steps: 4
  per_device_train_batch_size: 4
  per_device_eval_batch_size: 8
  eval_steps: 100
  save_steps: 500
  save_strategy: steps
  num_train_epochs: 8
  save_total_limit: 4
  residual_dropout: 0.2
  residual_dropout_lima: true
```
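The effective global batch size follows from `per_device_train_batch_size × gradient_accumulation_steps × number of GPUs`. The quick check below illustrates the arithmetic; the GPU count is an assumption, since it is not recorded in the config:

```python
# Effective batch size implied by the training config above.
per_device_train_batch_size = 4
gradient_accumulation_steps = 4
num_gpus = 8  # assumption: the actual GPU count is not in the config

effective_batch_size = (
    per_device_train_batch_size * gradient_accumulation_steps * num_gpus
)
print(effective_batch_size)  # 128 sequences of up to 2048 tokens per optimizer step
```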

Dataset:

```yaml
sft9-stage2:
  # oasst_export: 100.00% (29899)
  # vicuna: 50.00% (16963)
  # code_alpaca: 50.00% (9510)
  # oa_wiki_qa_bart_10000row: 100.00% (9434)
  # grade_school_math_instructions: 100.00% (8351)
  # dolly15k: 100.00% (14250)

  use_custom_sampler: true
  datasets:
    - oasst_export:
        lang: "bg,ca,cs,da,de,en,es,fr,hr,hu,it,nl,pl,pt,ro,ru,sl,sr,sv,uk" # sft-8.0
        input_file_path: 2023-06-02_oasst_all_labels.jsonl.gz
        val_split: 0.05
        top_k: 2
    - vicuna:
        fraction: 0.5
        val_split: 0.025
        max_val_set: 250
    - code_alpaca:
        fraction: 0.5
        val_split: 0.05
        max_val_set: 250
    - oa_wiki_qa_bart_10000row:
        val_split: 0.05
        max_val_set: 250
    - grade_school_math_instructions:
        val_split: 0.05
    - dolly15k:
        val_split: 0.05
        max_val_set: 300
```
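The commented percentages above are per-source sampling fractions, and the parenthesized numbers are the resulting example counts. Tallying them gives a rough picture of the mixture; the counts below are taken directly from those comments:

```python
# Approximate number of training examples contributed by each source,
# taken from the comments in the dataset config above.
mixture = {
    "oasst_export": 29899,
    "vicuna": 16963,
    "code_alpaca": 9510,
    "oa_wiki_qa_bart_10000row": 9434,
    "grade_school_math_instructions": 8351,
    "dolly15k": 14250,
}

total = sum(mixture.values())
for name, n in sorted(mixture.items(), key=lambda kv: -kv[1]):
    print(f"{name:35s} {n:6d}  ({n / total:5.1%})")
print(f"{'total':35s} {total:6d}")  # ~88k examples in total
```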