
Quantization by Richard Erkhov.

Github | Discord | Request more models

math-doc-refining-lm - AWQ

Original model description:

license: apache-2.0
datasets:
- gair-prox/RedPajama-pro
language:
- en
base_model:
- gair-prox/RedPJ-ProX-0.7B
pipeline_tag: text-generation
library_name: transformers
tags:
- llama
- code

Math-doc-refining-lm

ArXiv | Code

Math-doc-refining-lm is an adapted 0.7B ProX model, fine-tuned for document-level refining via program generation. It can be applied to math pre-training corpora such as open-web-math.
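As a rough usage sketch (not a confirmed interface): the quantized weights should load like any other causal LM on the Hub through transformers' AWQ integration, which requires the autoawq and accelerate packages. The repository id and the input document below are illustrative placeholders, not values from this card.

# A minimal sketch, assuming the AWQ weights load via transformers' AWQ
# integration (needs the autoawq and accelerate packages installed).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "RichardErkhov/math-doc-refining-lm-awq"  # hypothetical repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Feed a raw math web document (e.g. a page from open-web-math); the model
# generates refining output for it as plain text.
document = "Lemma 1. For any integer n, n^2 >= 0. [ad banner] Click here..."
inputs = tokenizer(document, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
program = tokenizer.decode(
    outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)
print(program)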

Citation

@article{zhou2024programming,
  title={Programming Every Example: Lifting Pre-training Data Quality like Experts at Scale},
  author={Zhou, Fan and Wang, Zengzhi and Liu, Qian and Li, Junlong and Liu, Pengfei},
  journal={arXiv preprint arXiv:2409.17115},
  year={2024}
}
Safetensors · Model size: 187M params · Tensor types: I32, FP16