---
base_model:
  - mistralai/Mistral-Nemo-Instruct-2407
language:
  - en
license: apache-2.0
tags:
  - text-generation-inference
  - transformers
  - mistral
  - trl
  - cot
  - guidance
---

# fusion-guide

![fusion-guide](6ea83689-befb-498b-84b9-20ba406ca4e7.png)

## Model Overview

fusion-guide is a reasoning model built on the Mistral-Nemo 12B architecture. It employs a two-model approach to enhance problem-solving: a "Guide" model generates a structured, step-by-step plan for a given task, and that plan is then passed to the primary "Response" model, which uses the guidance to craft an accurate and comprehensive response. The flow is sketched below.
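Conceptually, the two-stage flow looks something like the following minimal sketch. The function and variable names here are illustrative placeholders, not part of the card's actual API:

```python
# Illustrative sketch of the two-model flow; all names are placeholders.
def solve(task: str, guide_model, response_model) -> str:
    # Stage 1: the Guide model (fusion-guide) produces a step-by-step plan.
    plan = guide_model.generate(f"<guidance_prompt>{task}</guidance_prompt>")
    # Stage 2: the Response model answers the original task while
    # following the plan it was handed.
    return response_model.generate(f"{task}\n\nGuidance:\n{plan}")
```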

## Model and Data

fusion-guide is fine-tuned on a custom dataset of task-based prompts in English (90%) and German (10%). The tasks vary in complexity and include scenarios designed to be challenging or unsolvable, to improve the model's handling of ambiguous situations. Each training sample follows the structure `prompt => guidance`, teaching the model to break down complex tasks systematically. A detailed description and evaluation of the model is available here: https://app.gitbook.com/
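As a purely hypothetical illustration of that `prompt => guidance` pairing (not an actual sample from the dataset):

```python
# Hypothetical training pair; the content is invented for illustration.
sample = {
    "prompt": "<guidance_prompt>Plan a weekend trip for two on a 300 EUR budget.</guidance_prompt>",
    "guidance": "1. Split the budget across travel, lodging, and food.\n"
                "2. Shortlist destinations reachable within that travel budget.\n"
                "3. Check whether the remaining budget covers two nights; "
                "if not, flag the task as unsolvable as stated.",
}
```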

## Prompt format

The prompt must be enclosed within `<guidance_prompt>{PROMPT}</guidance_prompt>` tags, following the format below:

```
<guidance_prompt>Count the number of 'r's in the word 'strawberry,' and then write a Python script that checks if an arbitrary word contains the same number of 'r's.</guidance_prompt>
```

## Usage

fusion-guide can be used with vLLM and other Mistral-Nemo-compatible inference engines. Below is an example of how to use it with unsloth:

```python
from unsloth import FastLanguageModel

max_seq_length = 8192  # Choose any length; Unsloth supports RoPE scaling internally
dtype = None  # None for auto-detection; float16 for Tesla T4/V100, bfloat16 for Ampere+
load_in_4bit = False  # Set to True to reduce memory usage via 4-bit quantization

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="fusionbase/fusion-guide-12b-0.1",
    max_seq_length=max_seq_length,
    dtype=dtype,
    load_in_4bit=load_in_4bit
)

FastLanguageModel.for_inference(model)  # Enable native 2x faster inference

messages = [{"role": "user", "content": "<guidance_prompt>Count the number of 'r's in the word 'strawberry,' and then write a Python script that checks if an arbitrary word contains the same number of 'r's.</guidance_prompt>"}]
inputs = tokenizer.apply_chat_template(
    messages,
    tokenize=True,
    add_generation_prompt=True,  # Must add for generation
    return_tensors="pt",
).to("cuda")

outputs = model.generate(input_ids=inputs, max_new_tokens=2000, use_cache=True, do_sample=False)

# Decode only the newly generated tokens, skipping the prompt
result = tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)
print(result)
```
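
Since the card also mentions vLLM compatibility, a minimal sketch of the same call through vLLM's offline chat API might look like this (assuming a recent vLLM release; the sampling parameters are illustrative):

```python
from vllm import LLM, SamplingParams

llm = LLM(model="fusionbase/fusion-guide-12b-0.1", max_model_len=8192)
sampling = SamplingParams(temperature=0.0, max_tokens=2000)

messages = [{
    "role": "user",
    "content": "<guidance_prompt>Count the number of 'r's in the word 'strawberry,' "
               "and then write a Python script that checks if an arbitrary word "
               "contains the same number of 'r's.</guidance_prompt>",
}]

# llm.chat applies the model's chat template before generating
outputs = llm.chat(messages, sampling)
print(outputs[0].outputs[0].text)
```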