QuantFactory/web-doc-refining-lm-GGUF
This is a quantized version of gair-prox/web-doc-refining-lm, created using llama.cpp.
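The GGUF files can be loaded with any llama.cpp-compatible runtime. Below is a minimal sketch using llama-cpp-python together with huggingface_hub; the GGUF filename and the plain-document prompt are assumptions, since the repo's exact file names and the model's expected prompt format are not listed here.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one of the quantized GGUF files from this repo.
# The filename below is an assumed example; pick the actual
# quantization level available in the repo's file list.
model_path = hf_hub_download(
    repo_id="QuantFactory/web-doc-refining-lm-GGUF",
    filename="web-doc-refining-lm.Q8_0.gguf",
)

llm = Llama(model_path=model_path, n_ctx=4096)

# The model is a document-refining LM, so the input is a raw web document.
doc = "Example web page text to be refined..."
out = llm(doc, max_tokens=256, temperature=0.0)
print(out["choices"][0]["text"])
```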
Original Model Card
Web-doc-refining-lm
Web-doc-refining-lm is an adapted 0.3B-ProX model, fine-tuned for document-level refining via program generation.
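In the ProX pipeline, the refining model reads a document and emits a small program whose operations (for example, dropping noisy lines or normalizing strings) are then executed against that document. The sketch below illustrates executing such a program; the operation names remove_lines and normalize_text, and the program syntax, are hypothetical stand-ins rather than the exact vocabulary defined by ProX.

```python
import re

# Hypothetical refining operations; the actual ProX program vocabulary may differ.
def remove_lines(doc_lines, start, end):
    """Drop lines [start, end) that the refining model flagged as noise."""
    return doc_lines[:start] + doc_lines[end:]

def normalize_text(doc_lines, source, target):
    """Replace a noisy substring with a cleaner form in every line."""
    return [line.replace(source, target) for line in doc_lines]

def apply_program(document: str, program: str) -> str:
    """Execute a model-generated refining program against the document."""
    lines = document.splitlines()
    for call in program.splitlines():
        call = call.strip()
        m = re.match(r"remove_lines\((\d+),\s*(\d+)\)", call)
        if m:
            lines = remove_lines(lines, int(m.group(1)), int(m.group(2)))
            continue
        m = re.match(r'normalize_text\("(.*)",\s*"(.*)"\)', call)
        if m:
            lines = normalize_text(lines, m.group(1), m.group(2))
    return "\n".join(lines)

# Example: the refining LM emits a short program for a noisy web document.
doc = "Home | Login | Sign up\nUseful article text.\nCopyright footer."
program = "remove_lines(0, 1)\nremove_lines(1, 2)"
print(apply_program(doc, program))  # -> "Useful article text."
```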
Citation
@article{zhou2024programming,
  title={Programming Every Example: Lifting Pre-training Data Quality like Experts at Scale},
  author={Zhou, Fan and Wang, Zengzhi and Liu, Qian and Li, Junlong and Liu, Pengfei},
  journal={arXiv preprint arXiv:2409.17115},
  year={2024}
}
Base model: gair-prox/RedPJ-ProX-0.3B