---
language:
- id
license: apache-2.0
tags:
- Indonesian
- Chat
- Instruct
base_model:
- meta-llama/Llama-3.2-3B-Instruct
datasets:
- NekoFi/alpaca-gpt4-indonesia-cleaned
pipeline_tag: text-generation
---
# FinMatcha-3B-Instruct
FinMatcha is an Indonesian-focused large language model (LLM) fine-tuned from the Llama-3.2-3B-Instruct base model. It has been trained to handle a variety of conversational tasks, with special emphasis on understanding and generating Indonesian text.

The model has been fine-tuned on Indonesian data, making it adept at handling the nuances of the Indonesian language, from formal to colloquial registers. It also supports English for bilingual applications.
## Model Details
- Finetuned from model: Llama-3.2-3B-Instruct
- Dataset: NekoFi/alpaca-gpt4-indonesia-cleaned
- Model Size: 3B
- License: Apache-2.0
- Languages: Indonesian, English
## How to use

### Installation

To use the FinMatcha model, install the required dependencies (`accelerate` is needed for `device_map="auto"` in the example below):

```shell
pip install "transformers>=4.45" accelerate
```
### Usage

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "xMaulana/FinMatcha-3B-Instruct"

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Indonesian: "How can a country be formed?"
inputs = tokenizer(
    "Bagaimanakah sebuah negara dapat terbentuk?", return_tensors="pt"
).to(model.device)  # follow the device chosen by device_map instead of hard-coding "cuda"

outputs = model.generate(
    **inputs,  # pass input_ids and attention_mask together
    max_new_tokens=2048,
    pad_token_id=tokenizer.pad_token_id,
    eos_token_id=tokenizer.eos_token_id,
    temperature=0.7,
    do_sample=True,
    top_k=5,
    top_p=0.9,
    repetition_penalty=1.1,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
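The usage snippet above sends a single raw prompt. For multi-turn chat, Llama-3.x Instruct checkpoints expect the Llama 3 header format, normally produced by `tokenizer.apply_chat_template`. As an illustrative sketch, here is that format built by hand (assuming FinMatcha inherits the base model's chat template unchanged, which this card does not explicitly state):

```python
def render_llama3_prompt(messages, add_generation_prompt=True):
    """Render chat messages in the Llama 3 header/eot format.

    Mirrors what tokenizer.apply_chat_template produces for Llama-3.x
    Instruct checkpoints; in practice, prefer apply_chat_template itself.
    """
    parts = ["<|begin_of_text|>"]
    for m in messages:
        parts.append(
            f"<|start_header_id|>{m['role']}<|end_header_id|>\n\n{m['content']}<|eot_id|>"
        )
    if add_generation_prompt:
        # Open the assistant turn so the model generates the reply.
        parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(parts)

messages = [
    {"role": "user", "content": "Bagaimanakah sebuah negara dapat terbentuk?"},
]
print(render_llama3_prompt(messages))
```

In real use, `tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")` yields tokenized inputs in this format directly, so the manual construction above is only to show what the template produces.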
## Limitations
- The model is primarily focused on the Indonesian language and may not perform as well on non-Indonesian tasks.
- As with all LLMs, cultural and contextual biases can be present.
## License

The model is licensed under the Apache-2.0 license.
## Contributing

We welcome contributions to enhance and improve FinMatcha. Feel free to open issues or submit pull requests for improvements.
## Open LLM Leaderboard Evaluation Results
Detailed results can be found here
| Metric | Value |
|---|---|
| Avg. | 24.13 |
| IFEval (0-shot) | 75.94 |
| BBH (3-shot) | 23.27 |
| MATH Lvl 5 (4-shot) | 12.16 |
| GPQA (0-shot) | 3.47 |
| MuSR (0-shot) | 5.40 |
| MMLU-PRO (5-shot) | 24.54 |