# KoLama: Fine-Tuned Llama3.1-8B Model
## Overview
KoLama is a fine-tuned version of the Meta-Llama-3.1-8B-bnb-4bit model, developed by Neetree. It was trained with the Unsloth library, which roughly doubled training speed, together with Hugging Face's TRL (Transformer Reinforcement Learning) library. The model is optimized for text generation and is released under the Apache-2.0 license.
## Model Details
- Base Model: unsloth/Meta-Llama-3.1-8B-bnb-4bit
- Fine-Tuned by: Neetree
- License: Apache-2.0
- Language: English (fine-tuned on English-Korean parallel data)
- Training Dataset: Neetree/raw_enko_opus_CCM
## Key Features
- Efficient Training: The model was trained 2x faster using Unsloth, making the fine-tuning process more efficient.
- Text Generation: Optimized for text generation tasks, leveraging the power of the Llama3.1 architecture.
- TRL-Based Fine-Tuning: Trained with Hugging Face's TRL (Transformer Reinforcement Learning) library, using its supervised fine-tuning tooling.
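The practical benefit of the 4-bit base model can be estimated with a back-of-envelope calculation (the parameter count below is a round 8B approximation, and quantization overhead such as scales and any layers kept in higher precision is ignored):

```python
# Rough weight-memory footprint of an ~8B-parameter model at different precisions.
# Assumes exactly 8e9 parameters; ignores quantization scales/zero-points and
# layers (e.g. embeddings) that are often kept in higher precision.
PARAMS = 8_000_000_000

def footprint_gib(bits_per_param: int) -> float:
    """Approximate weight memory in GiB for the given precision."""
    return PARAMS * bits_per_param / 8 / (1024 ** 3)

fp16 = footprint_gib(16)  # ~14.9 GiB
int4 = footprint_gib(4)   # ~3.7 GiB
print(f"fp16: {fp16:.1f} GiB, 4-bit: {int4:.1f} GiB, ratio: {fp16 / int4:.0f}x")
```

Under these assumptions, 4-bit weights cut the raw weight memory by 4x versus fp16, which is what makes fine-tuning an 8B model feasible on a single consumer GPU.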
## Usage
To use KoLama for text generation, load the model with the `transformers` library:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Neetree/KoLama"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Tokenize a prompt
input_text = "Once upon a time"
inputs = tokenizer(input_text, return_tensors="pt")

# Generate up to 50 new tokens beyond the prompt
outputs = model.generate(**inputs, max_new_tokens=50)
generated_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(generated_text)
```
## Training Details
- Training Speed: 2x faster training using Unsloth.
- Fine-Tuning Method: Supervised Fine-Tuning (SFT) via Hugging Face's TRL library.
- Dataset: The model was fine-tuned on the Neetree/raw_enko_opus_CCM dataset, which contains English-Korean parallel text data.
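SFT on a parallel corpus typically requires flattening each English-Korean pair into a single prompt/response string before tokenization. A minimal sketch of such a formatting step follows; the `en`/`ko` field names and the instruction template are illustrative assumptions, not the card's documented format:

```python
def format_example(pair: dict) -> str:
    """Render one English-Korean pair as a single SFT training string.
    The template and the 'en'/'ko' keys are illustrative assumptions."""
    return (
        "### Instruction:\nTranslate the following English text to Korean.\n\n"
        f"### Input:\n{pair['en']}\n\n"
        f"### Response:\n{pair['ko']}"
    )

sample = {"en": "Hello, world.", "ko": "안녕하세요, 세계."}
print(format_example(sample))
```

With TRL's `SFTTrainer`, a function like this can be passed as the formatting function so that each dataset row is rendered into one training sequence.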
## License
This model is licensed under the Apache-2.0 license. For more details, please refer to the LICENSE file.
## Acknowledgments
- Unsloth: For providing the tools to accelerate the training process.
- Hugging Face: For the TRL library and the transformers framework.
- Meta: For the original Llama3.1-8B model.
## Model Tree
- Base model: meta-llama/Llama-3.1-8B
- Quantized: unsloth/Meta-Llama-3.1-8B-bnb-4bit