# Fireball-R1-Llama-3.1-8B
Fireball-R1-Llama-3.1-8B is a state-of-the-art language model optimized for neutrality, STEM proficiency, and ethical alignment. It was fine-tuned from DeepSeek-R1-Distill-Llama-8B (the Unsloth bnb-4bit variant) for science, chemistry, and mathematics, with reduced cultural and political bias. The model is open source.
## Installation

```bash
pip install -U transformers torch accelerate
pip install bitsandbytes  # needed for the 8-bit loading example below
```
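The 8-bit example further down requires a CUDA GPU. A minimal environment check before loading the model (this snippet is an addition for convenience, not part of the original card):

```python
import torch

# 8-bit loading via bitsandbytes requires a CUDA device
if torch.cuda.is_available():
    print(f"CUDA device: {torch.cuda.get_device_name(0)}")
else:
    print("No CUDA GPU found; use the full-precision example instead.")
```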
## Basic inference

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the tokenizer and model from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained("EpistemeAI/Fireball-R1-Llama-3.1-8B")
model = AutoModelForCausalLM.from_pretrained("EpistemeAI/Fireball-R1-Llama-3.1-8B")

# Tokenize a prompt and generate a completion
prompt = "Calculate the molar mass of sulfuric acid (H₂SO₄)."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
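To sanity-check the model's answer, the expected value can be computed by hand from standard atomic masses (H ≈ 1.008, S ≈ 32.06, O ≈ 16.00); this check is our addition, not part of the card:

```python
# Hand computation for comparison: H2SO4 = 2*H + S + 4*O
molar_mass = 2 * 1.008 + 32.06 + 4 * 16.00
print(f"Expected molar mass of H2SO4: {molar_mass:.2f} g/mol")  # ~98.08 g/mol
```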
## Advanced inference
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the tokenizer
tokenizer = AutoTokenizer.from_pretrained("EpistemeAI/Fireball-R1-Llama-3.1-8B")

# Load the model in 8-bit precision using bitsandbytes (requires a CUDA GPU)
model = AutoModelForCausalLM.from_pretrained(
    "EpistemeAI/Fireball-R1-Llama-3.1-8B",
    load_in_8bit=True,   # enable 8-bit loading to reduce memory usage
    device_map="auto",   # automatically map model layers to the available device(s)
)

# Define the system prompt and the user prompt
system_prompt = "You are a highly knowledgeable assistant with expertise in chemistry and physics. <think>"
user_prompt = "Calculate the molar mass of sulfuric acid (H₂SO₄)."

# Combine the system prompt with the user prompt, following a common chat-style convention
full_prompt = f"System: {system_prompt}\nUser: {user_prompt}\nAssistant:"

# Tokenize the combined prompt and move the inputs to the GPU
inputs = tokenizer(full_prompt, return_tensors="pt").to("cuda")

# Generate output text from the model
outputs = model.generate(**inputs, max_length=12200)

# Decode and print the result, skipping special tokens
result = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(result)
```
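Llama 3.1-based checkpoints usually ship a chat template, so instead of hand-assembling the `System:`/`User:` string you can often let the tokenizer format the conversation. A sketch, assuming this repo's tokenizer includes such a template (the card does not confirm it):

```python
# Build the prompt with the tokenizer's chat template, if the repo provides one
messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": user_prompt},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,  # append the assistant header so the model starts answering
    return_tensors="pt",
).to("cuda")
outputs = model.generate(inputs, max_new_tokens=1024)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```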
For more control over decoding, sampling parameters can be passed to `generate` (note that `do_sample=True` is required for `temperature` and `top_p` to take effect):

```python
outputs = model.generate(
    **inputs,
    max_length=300,
    do_sample=True,          # enable sampling so temperature/top_p are applied
    temperature=0.7,         # lower = more deterministic, higher = more diverse
    top_p=0.95,              # nucleus sampling: keep the smallest token set with 95% mass
    repetition_penalty=1.2,  # discourage repeated phrases
)
```
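DeepSeek-R1-style reasoning models typically wrap their chain of thought in `<think>...</think>` before the final answer. If this fine-tune keeps that convention (an assumption; the card does not state it), the final answer can be separated from the reasoning trace:

```python
# Split off the reasoning trace, assuming the DeepSeek-R1 <think>...</think> convention
text = tokenizer.decode(outputs[0], skip_special_tokens=True)
if "</think>" in text:
    reasoning, final_answer = text.split("</think>", 1)
else:
    reasoning, final_answer = "", text  # no trace found; treat everything as the answer
print(final_answer.strip())
```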
## Do Not Use For
## Acknowledgments

We thank the following companies: Unsloth, Meta, and DeepSeek.
## License

This model is licensed under Apache 2.0; see the LICENSE file for details.
## Citation

```bibtex
@misc{Fireball-R1-Llama-3.1-8B,
  author = {EpistemeAI},
  title  = {Fireball-R1-8B: A Neutral, Science-Optimized Language Model},
  year   = {2025},
  url    = {https://huggingface.co/EpistemeAI/Fireball-R1-Llama-3.1-8B}
}
```
## Contact

For support or feedback, contact us at [email protected].
This Llama model was trained 2x faster with Unsloth and Hugging Face's TRL library.