French-Alpaca-Phi-3-GGUF

French-Alpaca is a 3B-parameter LLM based on microsoft/Phi-3-mini-4k-instruct,
fine-tuned on the original French-Alpaca-dataset, which was generated entirely with OpenAI GPT-3.5-turbo.
The fine-tuning method is inspired by https://crfm.stanford.edu/2023/03/13/alpaca.html

This quantized f16 GGUF version can run on a CPU device and is compatible with llama.cpp.
The architecture is not supported by LM Studio.
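A minimal sketch of running the GGUF file on CPU with the llama-cpp-python bindings. The local file name `French-Alpaca-Phi-3-beta.gguf` and the thread count are assumptions; point `model_path` at the file you actually download from the repository.

```python
# Sketch: load the GGUF on CPU with llama-cpp-python (pip install llama-cpp-python)
from llama_cpp import Llama

llm = Llama(
    model_path="./French-Alpaca-Phi-3-beta.gguf",  # assumed local file name
    n_ctx=4096,     # Phi-3-mini-4k context window
    n_threads=8,    # adjust to your CPU
)

# Phi-3 instruct prompt format
prompt = "<|user|>\nDonne-moi une recette simple de crêpes.<|end|>\n<|assistant|>"
output = llm(prompt, max_tokens=256, stop=["<|end|>"])
print(output["choices"][0]["text"])
```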

Limitations

The French-Alpaca model family is a quick demonstration that a small LM (< 8B params)
can easily be fine-tuned to specialize in a particular language. It does not include any moderation mechanism.

  • Developed by: Jonathan Pacifico, 2024
  • Model type: LLM
  • Language(s) (NLP): French
  • License: MIT
  • Finetuned from model: microsoft/Phi-3-mini-4k-instruct