---
title: Submission Template
emoji: 🔥
colorFrom: yellow
colorTo: green
sdk: docker
pinned: false
---
## Model Description
This space is dedicated to the text task of the Frugal AI Challenge. The final model is Qwen2.5-3B-Instruct with LoRA adapters, trained on a diverse mix of approximately 95,000 real and synthetic samples. The dataset is open-sourced at MatthiasPicard/Frugal-AI-Train-Data-88k, and the fine-tuned model, along with its training logs, at MatthiasPicard/ModernBERT_frugal_88k.

To reduce memory usage and speed up inference, the model was quantized to 8 bits.
Note: the inference script loads both the model and the tokenizer, so the first evaluation of our model in the submission space will consume more energy than subsequent evaluations.
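The loading step described above might look like the following sketch, assuming `transformers` and `bitsandbytes` are installed; the helper name `load_quantized` is illustrative, and the actual submission script may differ.

```python
# Sketch: load the tokenizer and the 8-bit quantized model once, then reuse
# them for all evaluations (only the first call pays the loading cost).
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig


def load_quantized(model_id: str):
    """Load tokenizer and model with 8-bit weights to cut memory usage."""
    quant_config = BitsAndBytesConfig(load_in_8bit=True)  # 8-bit quantization
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        quantization_config=quant_config,
        device_map="auto",  # place weights on the available GPU(s)
    )
    return tokenizer, model


if __name__ == "__main__":
    # Repo id taken from this README; downloading the weights is expensive,
    # which is why the first evaluation consumes more energy.
    tokenizer, model = load_quantized("MatthiasPicard/ModernBERT_frugal_88k")
```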
## Labels
- No relevant claim detected
- Global warming is not happening
- Not caused by humans
- Not bad or beneficial
- Solutions harmful/unnecessary
- Science is unreliable
- Proponents are biased
- Fossil fuels are needed
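For classification, the eight categories above need a fixed index order. A minimal sketch of such a mapping is below; the ordering shown is an assumption, and the indices actually used during training may differ.

```python
# The eight claim categories from this README, in listed order.
LABELS = [
    "No relevant claim detected",
    "Global warming is not happening",
    "Not caused by humans",
    "Not bad or beneficial",
    "Solutions harmful/unnecessary",
    "Science is unreliable",
    "Proponents are biased",
    "Fossil fuels are needed",
]

# Forward and reverse lookups, as commonly passed to model configs.
id2label = dict(enumerate(LABELS))
label2id = {label: i for i, label in enumerate(LABELS)}
```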