| modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string) |
|---|---|---|---|---|---|---|---|---|---|
| RichardErkhov/XiaoEnn_-_herberta_seq_128_v2-gguf | RichardErkhov | 2025-06-07T20:56:26Z | 0 | 0 | null | ["gguf", "endpoints_compatible", "region:us", "feature-extraction"] | null | 2025-06-07T20:31:35Z | (card below) |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
herberta_seq_128_v2 - GGUF
- Model creator: https://huggingface.co/XiaoEnn/
- Original model: https://huggingface.co/XiaoEnn/herberta_seq_128_v2/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [herberta_seq_128_v2.Q2_K.gguf](https://huggingface.co/RichardErkhov/XiaoEnn_-_herberta_seq_128_v2-gguf/blob/main/herberta_seq_128_v2.Q2_K.gguf) | Q2_K | 0.13GB |
| [herberta_seq_128_v2.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/XiaoEnn_-_herberta_seq_128_v2-gguf/blob/main/herberta_seq_128_v2.IQ3_XS.gguf) | IQ3_XS | 0.14GB |
| [herberta_seq_128_v2.IQ3_S.gguf](https://huggingface.co/RichardErkhov/XiaoEnn_-_herberta_seq_128_v2-gguf/blob/main/herberta_seq_128_v2.IQ3_S.gguf) | IQ3_S | 0.14GB |
| [herberta_seq_128_v2.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/XiaoEnn_-_herberta_seq_128_v2-gguf/blob/main/herberta_seq_128_v2.Q3_K_S.gguf) | Q3_K_S | 0.14GB |
| [herberta_seq_128_v2.IQ3_M.gguf](https://huggingface.co/RichardErkhov/XiaoEnn_-_herberta_seq_128_v2-gguf/blob/main/herberta_seq_128_v2.IQ3_M.gguf) | IQ3_M | 0.15GB |
| [herberta_seq_128_v2.Q3_K.gguf](https://huggingface.co/RichardErkhov/XiaoEnn_-_herberta_seq_128_v2-gguf/blob/main/herberta_seq_128_v2.Q3_K.gguf) | Q3_K | 0.16GB |
| [herberta_seq_128_v2.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/XiaoEnn_-_herberta_seq_128_v2-gguf/blob/main/herberta_seq_128_v2.Q3_K_M.gguf) | Q3_K_M | 0.16GB |
| [herberta_seq_128_v2.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/XiaoEnn_-_herberta_seq_128_v2-gguf/blob/main/herberta_seq_128_v2.Q3_K_L.gguf) | Q3_K_L | 0.18GB |
| [herberta_seq_128_v2.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/XiaoEnn_-_herberta_seq_128_v2-gguf/blob/main/herberta_seq_128_v2.IQ4_XS.gguf) | IQ4_XS | 0.17GB |
| [herberta_seq_128_v2.Q4_0.gguf](https://huggingface.co/RichardErkhov/XiaoEnn_-_herberta_seq_128_v2-gguf/blob/main/herberta_seq_128_v2.Q4_0.gguf) | Q4_0 | 0.18GB |
| [herberta_seq_128_v2.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/XiaoEnn_-_herberta_seq_128_v2-gguf/blob/main/herberta_seq_128_v2.IQ4_NL.gguf) | IQ4_NL | 0.18GB |
| [herberta_seq_128_v2.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/XiaoEnn_-_herberta_seq_128_v2-gguf/blob/main/herberta_seq_128_v2.Q4_K_S.gguf) | Q4_K_S | 0.18GB |
| [herberta_seq_128_v2.Q4_K.gguf](https://huggingface.co/RichardErkhov/XiaoEnn_-_herberta_seq_128_v2-gguf/blob/main/herberta_seq_128_v2.Q4_K.gguf) | Q4_K | 0.19GB |
| [herberta_seq_128_v2.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/XiaoEnn_-_herberta_seq_128_v2-gguf/blob/main/herberta_seq_128_v2.Q4_K_M.gguf) | Q4_K_M | 0.19GB |
| [herberta_seq_128_v2.Q4_1.gguf](https://huggingface.co/RichardErkhov/XiaoEnn_-_herberta_seq_128_v2-gguf/blob/main/herberta_seq_128_v2.Q4_1.gguf) | Q4_1 | 0.2GB |
| [herberta_seq_128_v2.Q5_0.gguf](https://huggingface.co/RichardErkhov/XiaoEnn_-_herberta_seq_128_v2-gguf/blob/main/herberta_seq_128_v2.Q5_0.gguf) | Q5_0 | 0.21GB |
| [herberta_seq_128_v2.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/XiaoEnn_-_herberta_seq_128_v2-gguf/blob/main/herberta_seq_128_v2.Q5_K_S.gguf) | Q5_K_S | 0.21GB |
| [herberta_seq_128_v2.Q5_K.gguf](https://huggingface.co/RichardErkhov/XiaoEnn_-_herberta_seq_128_v2-gguf/blob/main/herberta_seq_128_v2.Q5_K.gguf) | Q5_K | 0.22GB |
| [herberta_seq_128_v2.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/XiaoEnn_-_herberta_seq_128_v2-gguf/blob/main/herberta_seq_128_v2.Q5_K_M.gguf) | Q5_K_M | 0.22GB |
| [herberta_seq_128_v2.Q5_1.gguf](https://huggingface.co/RichardErkhov/XiaoEnn_-_herberta_seq_128_v2-gguf/blob/main/herberta_seq_128_v2.Q5_1.gguf) | Q5_1 | 0.23GB |
| [herberta_seq_128_v2.Q6_K.gguf](https://huggingface.co/RichardErkhov/XiaoEnn_-_herberta_seq_128_v2-gguf/blob/main/herberta_seq_128_v2.Q6_K.gguf) | Q6_K | 0.25GB |
| [herberta_seq_128_v2.Q8_0.gguf](https://huggingface.co/RichardErkhov/XiaoEnn_-_herberta_seq_128_v2-gguf/blob/main/herberta_seq_128_v2.Q8_0.gguf) | Q8_0 | 0.32GB |
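Any file in the table can be fetched programmatically. A minimal sketch using `huggingface_hub` (not part of the original card; the Q4_K_M quant is an arbitrary choice):

```python
from huggingface_hub import hf_hub_download

# Download one quantization from the table above into the local HF cache
# and return its path; any filename from the table works the same way.
path = hf_hub_download(
    repo_id="RichardErkhov/XiaoEnn_-_herberta_seq_128_v2-gguf",
    filename="herberta_seq_128_v2.Q4_K_M.gguf",
)
print(path)
```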
Original model description:
---
tags:
- Pretrain_Model
- transformers
- TCM
- herberta
- text embedding
license: apache-2.0
inference: true
language:
- zh
- en
base_model:
- hfl/chinese-roberta-wwm-ext
library_name: transformers
metrics:
- accuracy
new_version: XiaoEnn/herberta_seq_512_V2
---
### Introduction
Herberta is an experimental pre-trained research model developed by the Angelpro Team, focused on building a pre-training model for herbal medicine. Starting from the chinese-roberta-wwm-ext-large model, we ran the MLM pre-training task on data from 675 ancient books and 32 Chinese medicine textbooks. We named the model "herberta", splicing together the words "herb" and "Roberta". We are committed to contributing to the TCM large-model industry.
We hope it can be used:
- Encoder for Herbal Formulas, Embedding Models
- Word Embedding Model for Chinese Medicine Domain Data
- Support for a wide range of downstream TCM tasks, e.g., classification tasks, labeling tasks, etc.
### Requirements
- `transformers` version: 4.45.1
```bash
pip install herberta
```
### Quickstart
#### Use Hugging Face
```python
import torch
from transformers import AutoTokenizer, AutoModel

# Replace "XiaoEnn/herberta" with the Hugging Face model repository name
model_name = "XiaoEnn/herberta"

# Load tokenizer and model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

# Input text
text = "中医理论是我国传统文化的瑰宝。"

# Tokenize and prepare input
inputs = tokenizer(text, return_tensors="pt", truncation=True, padding="max_length", max_length=128)

# Get the model's outputs
with torch.no_grad():
    outputs = model(**inputs)

# Get the embedding (sentence-level average pooling)
sentence_embedding = outputs.last_hidden_state.mean(dim=1)

print("Embedding shape:", sentence_embedding.shape)
print("Embedding vector:", sentence_embedding)
```
#### Local Model
```python
from herberta.embedding import TextToEmbedding
embedder = TextToEmbedding("path/to/your/model")
# Single text input
embedding = embedder.get_embeddings("This is a sample text.")
# Multiple text input
texts = ["This is a sample text.", "Another example."]
embeddings = embedder.get_embeddings(texts)
```
## Citation
If you find our work helpful, feel free to cite us:
```bibtex
@misc{herberta-embedding,
  title  = {Herberta: A Pretrained Model for TCM Herbal Medicine and Downstream Tasks as Text Embedding Generation},
  url    = {https://github.com/15392778677/herberta},
  author = {Yang, Yehan and Zheng, Xinhan},
  month  = {December},
  year   = {2024}
}

@article{herberta-technical-report,
  title       = {Herberta: A Pretrained Model for TCM Herbal Medicine and Downstream Tasks as Text Embedding Generation},
  author      = {Yang, Yehan and Zheng, Xinhan},
  institution = {Beijing Angelpro Technology Co., Ltd.},
  year        = {2024},
  note        = {Presented at the 2024 Machine Learning Applications Conference (MLAC)}
}
```
| RichardErkhov/CamiloGC93_-_bge-large-en-v1.5-etical-gguf | RichardErkhov | 2025-06-07T20:55:42Z | 0 | 0 | null | ["gguf", "arxiv:1910.09700", "endpoints_compatible", "region:us", "feature-extraction"] | null | 2025-06-07T20:31:37Z | (card below) |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
bge-large-en-v1.5-etical - GGUF
- Model creator: https://huggingface.co/CamiloGC93/
- Original model: https://huggingface.co/CamiloGC93/bge-large-en-v1.5-etical/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [bge-large-en-v1.5-etical.Q2_K.gguf](https://huggingface.co/RichardErkhov/CamiloGC93_-_bge-large-en-v1.5-etical-gguf/blob/main/bge-large-en-v1.5-etical.Q2_K.gguf) | Q2_K | 0.13GB |
| [bge-large-en-v1.5-etical.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/CamiloGC93_-_bge-large-en-v1.5-etical-gguf/blob/main/bge-large-en-v1.5-etical.IQ3_XS.gguf) | IQ3_XS | 0.14GB |
| [bge-large-en-v1.5-etical.IQ3_S.gguf](https://huggingface.co/RichardErkhov/CamiloGC93_-_bge-large-en-v1.5-etical-gguf/blob/main/bge-large-en-v1.5-etical.IQ3_S.gguf) | IQ3_S | 0.15GB |
| [bge-large-en-v1.5-etical.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/CamiloGC93_-_bge-large-en-v1.5-etical-gguf/blob/main/bge-large-en-v1.5-etical.Q3_K_S.gguf) | Q3_K_S | 0.15GB |
| [bge-large-en-v1.5-etical.IQ3_M.gguf](https://huggingface.co/RichardErkhov/CamiloGC93_-_bge-large-en-v1.5-etical-gguf/blob/main/bge-large-en-v1.5-etical.IQ3_M.gguf) | IQ3_M | 0.16GB |
| [bge-large-en-v1.5-etical.Q3_K.gguf](https://huggingface.co/RichardErkhov/CamiloGC93_-_bge-large-en-v1.5-etical-gguf/blob/main/bge-large-en-v1.5-etical.Q3_K.gguf) | Q3_K | 0.17GB |
| [bge-large-en-v1.5-etical.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/CamiloGC93_-_bge-large-en-v1.5-etical-gguf/blob/main/bge-large-en-v1.5-etical.Q3_K_M.gguf) | Q3_K_M | 0.17GB |
| [bge-large-en-v1.5-etical.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/CamiloGC93_-_bge-large-en-v1.5-etical-gguf/blob/main/bge-large-en-v1.5-etical.Q3_K_L.gguf) | Q3_K_L | 0.18GB |
| [bge-large-en-v1.5-etical.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/CamiloGC93_-_bge-large-en-v1.5-etical-gguf/blob/main/bge-large-en-v1.5-etical.IQ4_XS.gguf) | IQ4_XS | 0.18GB |
| [bge-large-en-v1.5-etical.Q4_0.gguf](https://huggingface.co/RichardErkhov/CamiloGC93_-_bge-large-en-v1.5-etical-gguf/blob/main/bge-large-en-v1.5-etical.Q4_0.gguf) | Q4_0 | 0.19GB |
| [bge-large-en-v1.5-etical.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/CamiloGC93_-_bge-large-en-v1.5-etical-gguf/blob/main/bge-large-en-v1.5-etical.IQ4_NL.gguf) | IQ4_NL | 0.19GB |
| [bge-large-en-v1.5-etical.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/CamiloGC93_-_bge-large-en-v1.5-etical-gguf/blob/main/bge-large-en-v1.5-etical.Q4_K_S.gguf) | Q4_K_S | 0.19GB |
| [bge-large-en-v1.5-etical.Q4_K.gguf](https://huggingface.co/RichardErkhov/CamiloGC93_-_bge-large-en-v1.5-etical-gguf/blob/main/bge-large-en-v1.5-etical.Q4_K.gguf) | Q4_K | 0.2GB |
| [bge-large-en-v1.5-etical.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/CamiloGC93_-_bge-large-en-v1.5-etical-gguf/blob/main/bge-large-en-v1.5-etical.Q4_K_M.gguf) | Q4_K_M | 0.2GB |
| [bge-large-en-v1.5-etical.Q4_1.gguf](https://huggingface.co/RichardErkhov/CamiloGC93_-_bge-large-en-v1.5-etical-gguf/blob/main/bge-large-en-v1.5-etical.Q4_1.gguf) | Q4_1 | 0.2GB |
| [bge-large-en-v1.5-etical.Q5_0.gguf](https://huggingface.co/RichardErkhov/CamiloGC93_-_bge-large-en-v1.5-etical-gguf/blob/main/bge-large-en-v1.5-etical.Q5_0.gguf) | Q5_0 | 0.22GB |
| [bge-large-en-v1.5-etical.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/CamiloGC93_-_bge-large-en-v1.5-etical-gguf/blob/main/bge-large-en-v1.5-etical.Q5_K_S.gguf) | Q5_K_S | 0.22GB |
| [bge-large-en-v1.5-etical.Q5_K.gguf](https://huggingface.co/RichardErkhov/CamiloGC93_-_bge-large-en-v1.5-etical-gguf/blob/main/bge-large-en-v1.5-etical.Q5_K.gguf) | Q5_K | 0.23GB |
| [bge-large-en-v1.5-etical.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/CamiloGC93_-_bge-large-en-v1.5-etical-gguf/blob/main/bge-large-en-v1.5-etical.Q5_K_M.gguf) | Q5_K_M | 0.23GB |
| [bge-large-en-v1.5-etical.Q5_1.gguf](https://huggingface.co/RichardErkhov/CamiloGC93_-_bge-large-en-v1.5-etical-gguf/blob/main/bge-large-en-v1.5-etical.Q5_1.gguf) | Q5_1 | 0.24GB |
| [bge-large-en-v1.5-etical.Q6_K.gguf](https://huggingface.co/RichardErkhov/CamiloGC93_-_bge-large-en-v1.5-etical-gguf/blob/main/bge-large-en-v1.5-etical.Q6_K.gguf) | Q6_K | 0.26GB |
| [bge-large-en-v1.5-etical.Q8_0.gguf](https://huggingface.co/RichardErkhov/CamiloGC93_-_bge-large-en-v1.5-etical-gguf/blob/main/bge-large-en-v1.5-etical.Q8_0.gguf) | Q8_0 | 0.33GB |
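These are GGUF quantizations of an embedding model, so they can be loaded in embedding mode. A minimal sketch with `llama-cpp-python`, assuming the file has been downloaded locally and your llama.cpp build supports the model's BERT architecture (the path and quant choice are illustrative):

```python
from llama_cpp import Llama

# Load a local GGUF quant in embedding mode; model_path is illustrative.
llm = Llama(model_path="bge-large-en-v1.5-etical.Q4_K_M.gguf", embedding=True)

# embed() returns the embedding vector(s) for the input text.
vector = llm.embed("What are the ethical constraints on data use?")
print(len(vector))
```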
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| RichardErkhov/vitus48683_-_Qwen2.5-7B-ko-quant-merge-v2-8bits | RichardErkhov | 2025-06-07T20:55:09Z | 0 | 0 | null | ["safetensors", "qwen2", "arxiv:2306.01708", "8-bit", "bitsandbytes", "region:us"] | null | 2025-06-07T20:52:42Z | (card below) |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Qwen2.5-7B-ko-quant-merge-v2 - bnb 8bits
- Model creator: https://huggingface.co/vitus48683/
- Original model: https://huggingface.co/vitus48683/Qwen2.5-7B-ko-quant-merge-v2/
Original model description:
---
base_model:
- Qwen/Qwen2.5-7B
- Qwen/Qwen2.5-7B-Instruct
library_name: transformers
tags:
- mergekit
- merge
- krx
license: apache-2.0
language:
- ko
---
# Qwen2.5-7B-ko-quant-merge-v2
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [Qwen/Qwen2.5-7B](https://huggingface.co/Qwen/Qwen2.5-7B) as a base.
### Models Merged
The following models were included in the merge:
* Qwen2.5-7B-merge-it-lora
* [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct)
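A minimal loading sketch for this bnb 8-bit checkpoint, assuming `transformers`, `accelerate`, and `bitsandbytes` are installed (not part of the original card):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "RichardErkhov/vitus48683_-_Qwen2.5-7B-ko-quant-merge-v2-8bits"

# The bitsandbytes 8-bit quantization config ships inside the checkpoint,
# so a plain from_pretrained call is enough.
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")
```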
| dadsaasda/Qwen3_14B_lora_model_v1 | dadsaasda | 2025-06-07T20:55:08Z | 0 | 0 | transformers | ["transformers", "safetensors", "text-generation-inference", "unsloth", "qwen3", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us"] | null | 2025-06-07T20:54:41Z | (card below) |
---
base_model: unsloth/qwen3-14b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** dadsaasda
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen3-14b-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
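A minimal inference sketch, assuming this repo holds LoRA adapter weights in PEFT format on top of the 4-bit Unsloth base (the card does not include usage code):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = "unsloth/qwen3-14b-unsloth-bnb-4bit"
adapter = "dadsaasda/Qwen3_14B_lora_model_v1"

# Load the 4-bit base model, then attach the LoRA adapter on top of it.
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, device_map="auto")
model = PeftModel.from_pretrained(model, adapter)
```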
| Mostafa8Mehrabi/llama-1b-pruned-3blocks-bi-therapy-calibration | Mostafa8Mehrabi | 2025-06-07T20:54:53Z | 0 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-06-07T20:53:47Z | (card below) |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
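The card leaves this section empty; a minimal sketch based only on the repo's `text-generation` tag (hypothetical usage, not author-provided):

```python
from transformers import pipeline

# The repo is tagged text-generation, so the standard pipeline applies.
generator = pipeline(
    "text-generation",
    model="Mostafa8Mehrabi/llama-1b-pruned-3blocks-bi-therapy-calibration",
)
print(generator("Hello,", max_new_tokens=32)[0]["generated_text"])
```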
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| 18-VIDEOS-kiffy-katrinalim123-VIDEO-Link/Trending.VIDEO.Katrina.Lim.Viral.Video.Tutorial.LINK.Official | 18-VIDEOS-kiffy-katrinalim123-VIDEO-Link | 2025-06-07T20:54:44Z | 0 | 0 | null | ["region:us"] | null | 2025-06-07T20:54:17Z | |
| sid229/minivlm-sidd_embed-legal | sid229 | 2025-06-07T20:52:57Z | 0 | 0 | sentence-transformers | ["sentence-transformers", "safetensors", "bert", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:5822", "loss:MatryoshkaLoss", "loss:MultipleNegativesRankingLoss", "en", "arxiv:1908.10084", "arxiv:2205.13147", "arxiv:1705.00652", "base_model:sentence-transformers/all-MiniLM-L6-v2", "base_model:finetune:sentence-transformers/all-MiniLM-L6-v2", "license:apache-2.0", "model-index", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us"] | sentence-similarity | 2025-06-07T20:52:51Z | (card below) |
---
language:
- en
license: apache-2.0
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:5822
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
base_model: sentence-transformers/all-MiniLM-L6-v2
widget:
- source_sentence: "plaintiff states that the CIA “has a practice of assigning a cut-off\
\ date to every request of the date \nthe acknowledgement letter is written.”\
\ Id. Neither of these contentions is sufficient to defeat \nsummary judgment.\
\ As to the first, and as noted above, “the agency’s failure to turn up a \n\
particular document, or mere speculation that as yet uncovered documents might\
\ exist, does not \n66"
sentences:
- Under what conditions is a contracting officer not required to consider price
as an evaluation factor?
- What practice does the CIA allegedly have according to the plaintiff?
- How did the D.C. Circuit interpret the phrase 'each authority of the Government'?
- source_sentence: "Unlike last time, it is now necessary to decide whether the Commission\
\ is an “agency” \nunder § 701(b)(1)—as noted, the Court’s jurisdiction over EPIC’s\
\ APA claims turns on this. The \nGovernment implicitly concedes that the Commission\
\ is an agency under § 701(b)(1), since it \nmistakenly reads the Court’s previous\
\ opinion as having held this. See Defs.’ Mem. at 14, 16 \n10"
sentences:
- On what date did the plaintiff submit the second FOIA request to the CIA?
- What is mistakenly read by the Government as having held the Commission to be
an agency under § 701(b)(1)?
- What court delivered the opinion mentioned in the case Sierra Club v. EPA?
- source_sentence: "Posteriormente, en armonía con el marco constitucional y \ndoctrinario\
\ previamente reseñado, el 13 de julio de 2011, nuestra \nLegislatura aprobó,\
\ la Ley del Derecho sobre la Propia Imagen o Ley \nNúm. 139-201116. Dicho precepto\
\ legal estatuye una causa de \nacción en daños y perjuicios debido al uso no\
\ autorizado de la \nimagen con fines comerciales o publicitarios. En lo que nos\
\ atañe,"
sentences:
- With which party does the Court agree regarding the first argument?
- ¿Qué establece el precepto legal mencionado en el texto?
- What does item (6) mention as needing to be addressed?
- source_sentence: "The CIA devotes a substantial portion of its briefing and the\
\ majority of the Fifth Lutz \nDeclaration to the contention that recognizing\
\ assignments would place an undue burden on the \nCIA’s FOIA administrators.\
\ See Def.’s Second 443 Mem. at 7–10; Fifth Lutz Decl. ¶¶ 5–13. In \n49 \n \n\
this vein, the CIA enumerates several ways in which “[a]ssignment of FOIA rights\
\ would"
sentences:
- Which declaration is cited regarding the contention that recognizing assignments
would burden the CIA?
- In all respects other than the adequacy of its search efforts on Count Twenty,
what decision did the court make regarding the CIA?
- What does the State Department assure regarding the material?
- source_sentence: "https://www.gsa.gov/policy-regulations/policy/acquisition-policy/acquisition-\n\
policy-library-resources#ClassDeviations (last visited Feb. 23, 2023). \n16 \n\
\ \n(3) The resultant contracts will feature individually competed task or \n\
delivery orders based on hourly rates; and \n(4) Cost or price shall be considered\
\ in conjunction with the issuance of any"
sentences:
- Who is the target audience of the policy documents mentioned in the Vaughn index?
- How will the resultant contracts feature the task or delivery orders?
- What action did the CIA refuse to take regarding the plaintiff's FOIA request?
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
model-index:
- name: ModernBERT Embed base Legal Matryoshka
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 256
type: dim_256
metrics:
- type: cosine_accuracy@1
value: 0.45440494590417313
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.49613601236476046
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.5950540958268934
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.6970633693972179
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.45440494590417313
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.437403400309119
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.3406491499227202
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.21514683153013908
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.1580370942812983
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.42542503863987635
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.5377382792375064
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.6768418341061307
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.5706521253209004
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.5062013934888738
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.5513595877120989
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 128
type: dim_128
metrics:
- type: cosine_accuracy@1
value: 0.401854714064915
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.4435857805255023
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.5115919629057187
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.6043276661514683
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.401854714064915
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.38588356517259137
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.29613601236476045
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.18330757341576506
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.1391035548686244
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.37802679031427094
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.47346728490468826
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.5843637300360639
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.49541058126810483
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.4439194327911482
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.485795962787389
name: Cosine Map@100
---
# ModernBERT Embed base Legal Matryoshka
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) <!-- at revision c9745ed1d9f207416be6d2e6f8de32d1f16199bf -->
- **Maximum Sequence Length:** 256 tokens
- **Output Dimensionality:** 384 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
- **Language:** en
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sid229/minivlm-sidd_embed-legal")
# Run inference
sentences = [
'https://www.gsa.gov/policy-regulations/policy/acquisition-policy/acquisition-\npolicy-library-resources#ClassDeviations (last visited Feb. 23, 2023). \n16 \n \n(3) The resultant contracts will feature individually competed task or \ndelivery orders based on hourly rates; and \n(4) Cost or price shall be considered in conjunction with the issuance of any',
'How will the resultant contracts feature the task or delivery orders?',
'Who is the target audience of the policy documents mentioned in the Vaughn index?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Dataset: `dim_256`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) with these parameters:
```json
{
"truncate_dim": 256
}
```
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.4544 |
| cosine_accuracy@3 | 0.4961 |
| cosine_accuracy@5 | 0.5951 |
| cosine_accuracy@10 | 0.6971 |
| cosine_precision@1 | 0.4544 |
| cosine_precision@3 | 0.4374 |
| cosine_precision@5 | 0.3406 |
| cosine_precision@10 | 0.2151 |
| cosine_recall@1 | 0.158 |
| cosine_recall@3 | 0.4254 |
| cosine_recall@5 | 0.5377 |
| cosine_recall@10 | 0.6768 |
| **cosine_ndcg@10** | **0.5707** |
| cosine_mrr@10 | 0.5062 |
| cosine_map@100 | 0.5514 |
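To reproduce this truncated-dimension setting at inference time, the model's embeddings can be cut to the first 256 dimensions at load time. A minimal sketch, assuming a sentence-transformers version that supports `truncate_dim` (v2.7.0+; the framework version listed below does):

```python
from sentence_transformers import SentenceTransformer

# Matryoshka-style truncation: keep only the first 256 dimensions.
model = SentenceTransformer("sid229/minivlm-sidd_embed-legal", truncate_dim=256)

emb = model.encode(["What action does the Court decline to take?"])
print(emb.shape)  # (1, 256)
```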
#### Information Retrieval
* Dataset: `dim_128`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) with these parameters:
```json
{
"truncate_dim": 128
}
```
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.4019 |
| cosine_accuracy@3 | 0.4436 |
| cosine_accuracy@5 | 0.5116 |
| cosine_accuracy@10 | 0.6043 |
| cosine_precision@1 | 0.4019 |
| cosine_precision@3 | 0.3859 |
| cosine_precision@5 | 0.2961 |
| cosine_precision@10 | 0.1833 |
| cosine_recall@1 | 0.1391 |
| cosine_recall@3 | 0.378 |
| cosine_recall@5 | 0.4735 |
| cosine_recall@10 | 0.5844 |
| **cosine_ndcg@10** | **0.4954** |
| cosine_mrr@10 | 0.4439 |
| cosine_map@100 | 0.4858 |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 5,822 training samples
* Columns: <code>positive</code> and <code>anchor</code>
* Approximate statistics based on the first 1000 samples:
| | positive | anchor |
|:--------|:------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 52 tokens</li><li>mean: 91.08 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 17.09 tokens</li><li>max: 43 tokens</li></ul> |
* Samples:
| positive | anchor |
|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------|
| <code>We explained that the “pictorial testimony” theory of authentication, in which a <br>witness with knowledge of the events depicted on the video provides testimony, is not the <br>sole method of authenticating video evidence. See id. at 21, 672 A.2d at 1119. We held <br>that, like a photograph, a video can be authenticated under the “silent witness” theory of</code> | <code>What does a witness with knowledge of the events provide in the 'pictorial testimony' theory?</code> |
| <code>mentor could bid on the single solicitation but compete for different pools under the solicitation. <br>Id. In addressing the hypothetical, the SBA noted the “same mentor could submit an offer as a <br>joint venture with one protégé for one pool and another offer as a joint venture with a second <br>protégé for a different pool on the same solicitation because they would not be deemed competitors</code> | <code>How many different protégés can a mentor work with in joint ventures under the same solicitation?</code> |
| <code>by choosing to evaluate price at the IDIQ level, GSA could retain flexibility in selecting among <br>contract types for task orders and renegotiate price at the task order level to minimize procurement <br>costs for participating agencies. <br>This Court declines to prescribe the precise methods GSA must use to restructure its Polaris</code> | <code>What action does the Court decline to take regarding GSA methods?</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
256,
128
],
"matryoshka_weights": [
1,
1
],
"n_dims_per_step": -1
}
```
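A minimal sketch of how this loss configuration is constructed in sentence-transformers (the exact training script is not included in the card):

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# Wrap MultipleNegativesRankingLoss in MatryoshkaLoss with the
# dimensions and weights listed above.
inner_loss = MultipleNegativesRankingLoss(model)
loss = MatryoshkaLoss(
    model,
    inner_loss,
    matryoshka_dims=[256, 128],
    matryoshka_weights=[1, 1],
)
```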
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: epoch
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 16
- `gradient_accumulation_steps`: 16
- `learning_rate`: 2e-05
- `num_train_epochs`: 4
- `lr_scheduler_type`: cosine
- `warmup_ratio`: 0.1
- `bf16`: True
- `tf32`: True
- `load_best_model_at_end`: True
- `optim`: adamw_torch_fused
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 16
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 4
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: True
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | dim_256_cosine_ndcg@10 | dim_128_cosine_ndcg@10 |
|:-------:|:------:|:-------------:|:----------------------:|:----------------------:|
| 0.8791 | 10 | 38.3373 | - | - |
| 1.0 | 12 | - | 0.4786 | 0.4162 |
| 1.7033 | 20 | 21.742 | - | - |
| 2.0 | 24 | - | 0.5532 | 0.4687 |
| 2.5275 | 30 | 18.2439 | - | - |
| 3.0 | 36 | - | 0.5690 | 0.4923 |
| 3.3516 | 40 | 16.356 | - | - |
| **4.0** | **48** | **-** | **0.5707** | **0.4954** |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.11.13
- Sentence Transformers: 4.1.0
- Transformers: 4.52.4
- PyTorch: 2.6.0+cu124
- Accelerate: 1.7.0
- Datasets: 2.14.4
- Tokenizers: 0.21.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
| RichardErkhov/XiaoEnn_-_herberta_seq_512_V2-gguf | RichardErkhov | 2025-06-07T20:51:58Z | 0 | 0 | null | ["gguf", "endpoints_compatible", "region:us", "feature-extraction"] | null | 2025-06-07T20:31:19Z | (card below) |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
herberta_seq_512_V2 - GGUF
- Model creator: https://huggingface.co/XiaoEnn/
- Original model: https://huggingface.co/XiaoEnn/herberta_seq_512_V2/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [herberta_seq_512_V2.Q2_K.gguf](https://huggingface.co/RichardErkhov/XiaoEnn_-_herberta_seq_512_V2-gguf/blob/main/herberta_seq_512_V2.Q2_K.gguf) | Q2_K | 0.13GB |
| [herberta_seq_512_V2.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/XiaoEnn_-_herberta_seq_512_V2-gguf/blob/main/herberta_seq_512_V2.IQ3_XS.gguf) | IQ3_XS | 0.14GB |
| [herberta_seq_512_V2.IQ3_S.gguf](https://huggingface.co/RichardErkhov/XiaoEnn_-_herberta_seq_512_V2-gguf/blob/main/herberta_seq_512_V2.IQ3_S.gguf) | IQ3_S | 0.14GB |
| [herberta_seq_512_V2.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/XiaoEnn_-_herberta_seq_512_V2-gguf/blob/main/herberta_seq_512_V2.Q3_K_S.gguf) | Q3_K_S | 0.14GB |
| [herberta_seq_512_V2.IQ3_M.gguf](https://huggingface.co/RichardErkhov/XiaoEnn_-_herberta_seq_512_V2-gguf/blob/main/herberta_seq_512_V2.IQ3_M.gguf) | IQ3_M | 0.15GB |
| [herberta_seq_512_V2.Q3_K.gguf](https://huggingface.co/RichardErkhov/XiaoEnn_-_herberta_seq_512_V2-gguf/blob/main/herberta_seq_512_V2.Q3_K.gguf) | Q3_K | 0.16GB |
| [herberta_seq_512_V2.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/XiaoEnn_-_herberta_seq_512_V2-gguf/blob/main/herberta_seq_512_V2.Q3_K_M.gguf) | Q3_K_M | 0.16GB |
| [herberta_seq_512_V2.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/XiaoEnn_-_herberta_seq_512_V2-gguf/blob/main/herberta_seq_512_V2.Q3_K_L.gguf) | Q3_K_L | 0.18GB |
| [herberta_seq_512_V2.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/XiaoEnn_-_herberta_seq_512_V2-gguf/blob/main/herberta_seq_512_V2.IQ4_XS.gguf) | IQ4_XS | 0.17GB |
| [herberta_seq_512_V2.Q4_0.gguf](https://huggingface.co/RichardErkhov/XiaoEnn_-_herberta_seq_512_V2-gguf/blob/main/herberta_seq_512_V2.Q4_0.gguf) | Q4_0 | 0.18GB |
| [herberta_seq_512_V2.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/XiaoEnn_-_herberta_seq_512_V2-gguf/blob/main/herberta_seq_512_V2.IQ4_NL.gguf) | IQ4_NL | 0.18GB |
| [herberta_seq_512_V2.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/XiaoEnn_-_herberta_seq_512_V2-gguf/blob/main/herberta_seq_512_V2.Q4_K_S.gguf) | Q4_K_S | 0.18GB |
| [herberta_seq_512_V2.Q4_K.gguf](https://huggingface.co/RichardErkhov/XiaoEnn_-_herberta_seq_512_V2-gguf/blob/main/herberta_seq_512_V2.Q4_K.gguf) | Q4_K | 0.19GB |
| [herberta_seq_512_V2.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/XiaoEnn_-_herberta_seq_512_V2-gguf/blob/main/herberta_seq_512_V2.Q4_K_M.gguf) | Q4_K_M | 0.19GB |
| [herberta_seq_512_V2.Q4_1.gguf](https://huggingface.co/RichardErkhov/XiaoEnn_-_herberta_seq_512_V2-gguf/blob/main/herberta_seq_512_V2.Q4_1.gguf) | Q4_1 | 0.2GB |
| [herberta_seq_512_V2.Q5_0.gguf](https://huggingface.co/RichardErkhov/XiaoEnn_-_herberta_seq_512_V2-gguf/blob/main/herberta_seq_512_V2.Q5_0.gguf) | Q5_0 | 0.21GB |
| [herberta_seq_512_V2.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/XiaoEnn_-_herberta_seq_512_V2-gguf/blob/main/herberta_seq_512_V2.Q5_K_S.gguf) | Q5_K_S | 0.21GB |
| [herberta_seq_512_V2.Q5_K.gguf](https://huggingface.co/RichardErkhov/XiaoEnn_-_herberta_seq_512_V2-gguf/blob/main/herberta_seq_512_V2.Q5_K.gguf) | Q5_K | 0.22GB |
| [herberta_seq_512_V2.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/XiaoEnn_-_herberta_seq_512_V2-gguf/blob/main/herberta_seq_512_V2.Q5_K_M.gguf) | Q5_K_M | 0.22GB |
| [herberta_seq_512_V2.Q5_1.gguf](https://huggingface.co/RichardErkhov/XiaoEnn_-_herberta_seq_512_V2-gguf/blob/main/herberta_seq_512_V2.Q5_1.gguf) | Q5_1 | 0.23GB |
| [herberta_seq_512_V2.Q6_K.gguf](https://huggingface.co/RichardErkhov/XiaoEnn_-_herberta_seq_512_V2-gguf/blob/main/herberta_seq_512_V2.Q6_K.gguf) | Q6_K | 0.25GB |
| [herberta_seq_512_V2.Q8_0.gguf](https://huggingface.co/RichardErkhov/XiaoEnn_-_herberta_seq_512_V2-gguf/blob/main/herberta_seq_512_V2.Q8_0.gguf) | Q8_0 | 0.32GB |
Original model description:
---
tags:
- PretrainModel
- TCM
- transformer
- herberta
- text-embedding
license: apache-2.0
language:
- zh
- en
metrics:
- accuracy
base_model:
- hfl/chinese-roberta-wwm-ext-large
new_version: XiaoEnn/herberta_seq_512_V2
inference: true
library_name: transformers
---
# Herberta: A Pretrained Model for TCM Herbal Medicine and Downstream Tasks
## Introduction
Herberta is a pre-trained model developed by the Angelpro Team, aimed at advancing the representation learning and modeling capabilities in Traditional Chinese Medicine (TCM). Built upon the **chinese-roberta-wwm-ext-large** model, Herberta leverages MLM (Masked Language Modeling) tasks to pre-train on datasets comprising **700 ancient books (538.95M)** and **48 modern Chinese medicine textbooks (54M)**, resulting in a robust model for embedding generation and TCM-specific downstream tasks.
We named the model "Herberta" by combining "Herb" and "Roberta" to signify its purpose in herbal medicine research. Herberta is ideal for applications such as:
- **Encoder for Herbal Formulas**: Generating meaningful embeddings for TCM formulations.
- **Domain-Specific Word Embedding**: Serving the Chinese medicine text domain.
- **Support for TCM Downstream Tasks**: Including classification, labeling, and more.
---
## Pretraining Experiments
### Dataset
| Data Type | Quantity | Data Size |
|------------------------|-------------|------------------|
| **Ancient TCM Books** | 700 books | ~538.95M |
| **Modern TCM Textbooks** | 48 books | ~54M |
| **Mixed-Type Dataset** | Combined dataset | ~637.8M |
### Pretraining Results
| Model | Eval accuracy | Valid loss/epoch | Valid perplexity |
|-----------------------|---------------|------------------|------------------|
| **herberta_seq_512_v2** | 0.9841 | 0.04367 | 1.083 |
| **herberta_seq_128_v2** | 0.9406 | 0.2877 | 1.333 |
| **herberta_seq_512_V3** | 0.755 | 1.100 | 3.010 |
#### Metrics Comparison



### Pretraining Configuration
#### Ancient Books
- Pretraining Strategy: BERT-style masking (15% of tokens masked; see the sketch after this list)
- Sequence Length: 512
- Batch Size: 32
- Learning Rate: `1e-5` with an epoch-based decay (`epoch * 0.1`)
- Tokenization: Sentence-based tokenization with padding for sequences <512 tokens.
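A minimal sketch of the 15% BERT-style masking described above, using the standard transformers collator (the original training script is not published in this card):

```python
from transformers import AutoTokenizer, DataCollatorForLanguageModeling

tokenizer = AutoTokenizer.from_pretrained("hfl/chinese-roberta-wwm-ext-large")

# Dynamic BERT-style masking with the 15% ratio described above.
collator = DataCollatorForLanguageModeling(
    tokenizer=tokenizer,
    mlm=True,
    mlm_probability=0.15,
)
```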
---
## Downstream Task: TCM Pattern Classification
### Task Definition
Using **321 pattern descriptions** extracted from TCM internal medicine textbooks, we evaluated the classification performance on four models:
1. **Herberta_seq_512_v2**: Pretrained on 700 ancient TCM books.
2. **Herberta_seq_512_v3**: Pretrained on 48 modern TCM textbooks.
3. **Herberta_seq_128_v2**: Pretrained on 700 ancient TCM books (128-length sequences).
4. **Roberta**: Baseline model without TCM-specific pretraining.
### Training Configuration
- Max Sequence Length: 512
- Batch Size: 16
- Epochs: 30
### Results
| Model Name | Eval Accuracy | Eval F1 | Eval Precision | Eval Recall |
|--------------------------|---------------|-----------|----------------|-------------|
| **Herberta_seq_512_v2** | **0.9454** | **0.9293** | **0.9221** | **0.9454** |
| **Herberta_seq_512_v3** | 0.8989 | 0.8704 | 0.8583 | 0.8989 |
| **Herberta_seq_128_v2** | 0.8716 | 0.8443 | 0.8351 | 0.8716 |
| **Roberta** | 0.8743 | 0.8425 | 0.8311 | 0.8743 |

#### Summary
The **Herberta_seq_512_v2** model, pretrained on 700 ancient TCM books, exhibited superior performance across all evaluation metrics. This highlights the significance of domain-specific pretraining on larger and historically richer datasets for TCM applications.
---
## Quickstart
### Use Hugging Face
```python
import torch
from transformers import AutoTokenizer, AutoModel

model_name = "XiaoEnn/herberta"

# Load tokenizer and model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

# Input text
text = "中医理论是我国传统文化的瑰宝。"

# Tokenize and prepare input
inputs = tokenizer(text, return_tensors="pt", truncation=True, padding="max_length", max_length=128)

# Get the model's outputs
with torch.no_grad():
    outputs = model(**inputs)

# Get the embedding (sentence-level average pooling)
sentence_embedding = outputs.last_hidden_state.mean(dim=1)

print("Embedding shape:", sentence_embedding.shape)
print("Embedding vector:", sentence_embedding)
```
If you find our work helpful, feel free to cite us:
```bibtex
@misc{herberta-embedding,
  title  = {Herberta: A Pretrained Model for TCM Herbal Medicine and Downstream Tasks as Text Embedding Generation},
  url    = {https://github.com/15392778677/herberta},
  author = {Yang, Yehan and Zheng, Xinhan},
  month  = {December},
  year   = {2024}
}

@article{herberta-technical-report,
  title       = {Herberta: A Pretrained Model for TCM Herbal Medicine and Downstream Tasks as Text Embedding Generation},
  author      = {Yang, Yehan and Zheng, Xinhan},
  institution = {Beijing Angelpro Technology Co., Ltd.},
  year        = {2024},
  note        = {Presented at the 2024 Machine Learning Applications Conference (MLAC)}
}
```
| RichardErkhov/XiaoEnn_-_herberta_V3_Modern-gguf | RichardErkhov | 2025-06-07T20:51:18Z | 0 | 0 | null | ["gguf", "endpoints_compatible", "region:us", "feature-extraction"] | null | 2025-06-07T20:31:19Z | (card below) |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
herberta_V3_Modern - GGUF
- Model creator: https://huggingface.co/XiaoEnn/
- Original model: https://huggingface.co/XiaoEnn/herberta_V3_Modern/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [herberta_V3_Modern.Q2_K.gguf](https://huggingface.co/RichardErkhov/XiaoEnn_-_herberta_V3_Modern-gguf/blob/main/herberta_V3_Modern.Q2_K.gguf) | Q2_K | 0.13GB |
| [herberta_V3_Modern.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/XiaoEnn_-_herberta_V3_Modern-gguf/blob/main/herberta_V3_Modern.IQ3_XS.gguf) | IQ3_XS | 0.14GB |
| [herberta_V3_Modern.IQ3_S.gguf](https://huggingface.co/RichardErkhov/XiaoEnn_-_herberta_V3_Modern-gguf/blob/main/herberta_V3_Modern.IQ3_S.gguf) | IQ3_S | 0.14GB |
| [herberta_V3_Modern.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/XiaoEnn_-_herberta_V3_Modern-gguf/blob/main/herberta_V3_Modern.Q3_K_S.gguf) | Q3_K_S | 0.14GB |
| [herberta_V3_Modern.IQ3_M.gguf](https://huggingface.co/RichardErkhov/XiaoEnn_-_herberta_V3_Modern-gguf/blob/main/herberta_V3_Modern.IQ3_M.gguf) | IQ3_M | 0.15GB |
| [herberta_V3_Modern.Q3_K.gguf](https://huggingface.co/RichardErkhov/XiaoEnn_-_herberta_V3_Modern-gguf/blob/main/herberta_V3_Modern.Q3_K.gguf) | Q3_K | 0.16GB |
| [herberta_V3_Modern.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/XiaoEnn_-_herberta_V3_Modern-gguf/blob/main/herberta_V3_Modern.Q3_K_M.gguf) | Q3_K_M | 0.16GB |
| [herberta_V3_Modern.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/XiaoEnn_-_herberta_V3_Modern-gguf/blob/main/herberta_V3_Modern.Q3_K_L.gguf) | Q3_K_L | 0.18GB |
| [herberta_V3_Modern.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/XiaoEnn_-_herberta_V3_Modern-gguf/blob/main/herberta_V3_Modern.IQ4_XS.gguf) | IQ4_XS | 0.17GB |
| [herberta_V3_Modern.Q4_0.gguf](https://huggingface.co/RichardErkhov/XiaoEnn_-_herberta_V3_Modern-gguf/blob/main/herberta_V3_Modern.Q4_0.gguf) | Q4_0 | 0.18GB |
| [herberta_V3_Modern.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/XiaoEnn_-_herberta_V3_Modern-gguf/blob/main/herberta_V3_Modern.IQ4_NL.gguf) | IQ4_NL | 0.18GB |
| [herberta_V3_Modern.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/XiaoEnn_-_herberta_V3_Modern-gguf/blob/main/herberta_V3_Modern.Q4_K_S.gguf) | Q4_K_S | 0.18GB |
| [herberta_V3_Modern.Q4_K.gguf](https://huggingface.co/RichardErkhov/XiaoEnn_-_herberta_V3_Modern-gguf/blob/main/herberta_V3_Modern.Q4_K.gguf) | Q4_K | 0.19GB |
| [herberta_V3_Modern.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/XiaoEnn_-_herberta_V3_Modern-gguf/blob/main/herberta_V3_Modern.Q4_K_M.gguf) | Q4_K_M | 0.19GB |
| [herberta_V3_Modern.Q4_1.gguf](https://huggingface.co/RichardErkhov/XiaoEnn_-_herberta_V3_Modern-gguf/blob/main/herberta_V3_Modern.Q4_1.gguf) | Q4_1 | 0.2GB |
| [herberta_V3_Modern.Q5_0.gguf](https://huggingface.co/RichardErkhov/XiaoEnn_-_herberta_V3_Modern-gguf/blob/main/herberta_V3_Modern.Q5_0.gguf) | Q5_0 | 0.21GB |
| [herberta_V3_Modern.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/XiaoEnn_-_herberta_V3_Modern-gguf/blob/main/herberta_V3_Modern.Q5_K_S.gguf) | Q5_K_S | 0.21GB |
| [herberta_V3_Modern.Q5_K.gguf](https://huggingface.co/RichardErkhov/XiaoEnn_-_herberta_V3_Modern-gguf/blob/main/herberta_V3_Modern.Q5_K.gguf) | Q5_K | 0.22GB |
| [herberta_V3_Modern.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/XiaoEnn_-_herberta_V3_Modern-gguf/blob/main/herberta_V3_Modern.Q5_K_M.gguf) | Q5_K_M | 0.22GB |
| [herberta_V3_Modern.Q5_1.gguf](https://huggingface.co/RichardErkhov/XiaoEnn_-_herberta_V3_Modern-gguf/blob/main/herberta_V3_Modern.Q5_1.gguf) | Q5_1 | 0.23GB |
| [herberta_V3_Modern.Q6_K.gguf](https://huggingface.co/RichardErkhov/XiaoEnn_-_herberta_V3_Modern-gguf/blob/main/herberta_V3_Modern.Q6_K.gguf) | Q6_K | 0.25GB |
| [herberta_V3_Modern.Q8_0.gguf](https://huggingface.co/RichardErkhov/XiaoEnn_-_herberta_V3_Modern-gguf/blob/main/herberta_V3_Modern.Q8_0.gguf) | Q8_0 | 0.32GB |
Original model description:
---
tags:
- PretrainModel
- TCM
- transformer
- herberta
- text-embedding
license: apache-2.0
language:
- zh
- en
metrics:
- accuracy
base_model:
- hfl/chinese-roberta-wwm-ext-large
new_version: XiaoEnn/herberta_seq_512_V2
inference: true
library_name: transformers
---
# Herberta: A Pretrained Model for TCM Herbal Medicine and Downstream Tasks
## Introduction
Herberta is a pre-trained model developed by the Angelpro Team, aimed at advancing the representation learning and modeling capabilities in Traditional Chinese Medicine (TCM). Built upon the **chinese-roberta-wwm-ext-large** model, Herberta leverages MLM (Masked Language Modeling) tasks to pre-train on datasets comprising **700 ancient books (538.95M)** and **48 modern Chinese medicine textbooks (54M)**, resulting in a robust model for embedding generation and TCM-specific downstream tasks.
We named the model "Herberta" by combining "Herb" and "Roberta" to signify its purpose in herbal medicine research. Herberta is ideal for applications such as:
- **Encoder for Herbal Formulas**: Generating meaningful embeddings for TCM formulations.
- **Domain-Specific Word Embedding**: Serving the Chinese medicine text domain.
- **Support for TCM Downstream Tasks**: Including classification, labeling, and more.
---
## Pretraining Experiments
### Dataset
| Data Type | Quantity | Data Size |
|------------------------|-------------|------------------|
| **Ancient TCM Books** | 700 books | ~538.95M |
| **Modern TCM Textbooks** | 48 books | ~54M |
| **Mixed-Type Dataset** | Combined dataset | ~637.8M |
### Pretraining Results
| Model | eval_accuracy | Loss/epoch_valid | Perplexity_valid |
|-----------------------|---------------|------------------|------------------|
| **herberta_seq_512_v2** | 0.9841 | 0.04367 | 1.083 |
| **herberta_seq_128_v2** | 0.9406 | 0.2877 | 1.333 |
| **herberta_seq_512_v3** | 0.755 | 1.100 | 3.010 |
#### Metrics Comparison



### Pretraining Configuration
#### Modern Textbooks Version
- Pretraining Strategy: Dynamic MASK + Warmup + Linear Decay
- Sequence Length: 512
- Batch Size: 16
- Learning Rate: Warmup (10% steps) + Linear Decay (1e-5 initial rate)
- Tokenization: Continuous tokenization (512 tokens) without sentence segmentation.
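For concreteness, a minimal sketch of this setup using Hugging Face's `DataCollatorForLanguageModeling` (which re-samples the masked positions on every pass, i.e. dynamic MASK) is shown below; the 15% mask probability and the toy corpus are assumptions, not the authors' exact recipe:
```python
# Hedged sketch of the dynamic-MLM pretraining setup described above.
from datasets import Dataset
from transformers import (AutoTokenizer, AutoModelForMaskedLM,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)

base = "hfl/chinese-roberta-wwm-ext-large"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForMaskedLM.from_pretrained(base)

# Continuous tokenization: pack raw text into fixed 512-token chunks
# without sentence segmentation.
corpus = Dataset.from_dict({"text": ["中医理论是我国传统文化的瑰宝。" * 200]})
def chunk(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512,
                     return_overflowing_tokens=True)
tokenized = corpus.map(chunk, batched=True, remove_columns=["text"])

# The collator re-draws masks each time a batch is built ("dynamic MASK").
collator = DataCollatorForLanguageModeling(tokenizer, mlm=True,
                                           mlm_probability=0.15)

args = TrainingArguments(output_dir="herberta-mlm",
                         per_device_train_batch_size=16,
                         learning_rate=1e-5,
                         warmup_ratio=0.1,            # warmup over 10% of steps
                         lr_scheduler_type="linear")  # linear decay afterwards
Trainer(model=model, args=args, train_dataset=tokenized,
        data_collator=collator).train()
```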
---
## Downstream Task: TCM Pattern Classification
### Task Definition
Using **321 pattern descriptions** extracted from TCM internal medicine textbooks, we evaluated the classification performance of four models:
1. **Herberta_seq_512_v2**: Pretrained on 700 ancient TCM books.
2. **Herberta_seq_512_v3**: Pretrained on 48 modern TCM textbooks.
3. **Herberta_seq_128_v2**: Pretrained on 700 ancient TCM books (128-length sequences).
4. **Roberta**: Baseline model without TCM-specific pretraining.
### Training Configuration
- Max Sequence Length: 512
- Batch Size: 16
- Epochs: 30
### Results
| Model Name | Eval Accuracy | Eval F1 | Eval Precision | Eval Recall |
|--------------------------|---------------|-----------|----------------|-------------|
| **Herberta_seq_512_v2** | **0.9454** | **0.9293** | **0.9221** | **0.9454** |
| **Herberta_seq_512_v3** | 0.8989 | 0.8704 | 0.8583 | 0.8989 |
| **Herberta_seq_128_v2** | 0.8716 | 0.8443 | 0.8351 | 0.8716 |
| **Roberta** | 0.8743 | 0.8425 | 0.8311 | 0.8743 |

#### Summary
The **Herberta_seq_512_v2** model, pretrained on 700 ancient TCM books, exhibited superior performance across all evaluation metrics. This highlights the significance of domain-specific pretraining on larger and historically richer datasets for TCM applications.
---
## Quickstart
### Use Hugging Face
```python
import torch
from transformers import AutoTokenizer, AutoModel
model_name = "XiaoEnn/herberta"
# Load tokenizer and model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
# Input text
text = "中医理论是我国传统文化的瑰宝。"
# Tokenize and prepare input
inputs = tokenizer(text, return_tensors="pt", truncation=True, padding="max_length", max_length=128)
# Get the model's outputs
with torch.no_grad():
    outputs = model(**inputs)
# Get the embedding (sentence-level average pooling)
sentence_embedding = outputs.last_hidden_state.mean(dim=1)
print("Embedding shape:", sentence_embedding.shape)
print("Embedding vector:", sentence_embedding)
```
If you find our work helpful, feel free to cite us:
```bibtex
@misc{herberta-embedding,
  title  = {Herberta: A Pretrained Model for TCM Herbal Medicine and Downstream Tasks as Text Embedding Generation},
  url    = {https://github.com/15392778677/herberta},
  author = {Yehan Yang and Xinhan Zheng},
  month  = {December},
  year   = {2024}
}

@article{herberta-technical-report,
  title       = {Herberta: A Pretrained Model for TCM Herbal Medicine and Downstream Tasks as Text Embedding Generation},
  author      = {Yehan Yang and Xinhan Zheng},
  institution = {Beijing Angelpro Technology Co., Ltd.},
  year        = {2024},
  note        = {Presented at the 2024 Machine Learning Applications Conference (MLAC)}
}
```
|
RichardErkhov/sabdultawab_-_InsuranceSenTranV2-gguf
|
RichardErkhov
| 2025-06-07T20:51:11Z
| 0
| 0
| null |
[
"gguf",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us",
"feature-extraction"
] | null | 2025-06-07T20:31:40Z
|
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
InsuranceSenTranV2 - GGUF
- Model creator: https://huggingface.co/sabdultawab/
- Original model: https://huggingface.co/sabdultawab/InsuranceSenTranV2/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [InsuranceSenTranV2.Q2_K.gguf](https://huggingface.co/RichardErkhov/sabdultawab_-_InsuranceSenTranV2-gguf/blob/main/InsuranceSenTranV2.Q2_K.gguf) | Q2_K | 0.13GB |
| [InsuranceSenTranV2.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/sabdultawab_-_InsuranceSenTranV2-gguf/blob/main/InsuranceSenTranV2.IQ3_XS.gguf) | IQ3_XS | 0.14GB |
| [InsuranceSenTranV2.IQ3_S.gguf](https://huggingface.co/RichardErkhov/sabdultawab_-_InsuranceSenTranV2-gguf/blob/main/InsuranceSenTranV2.IQ3_S.gguf) | IQ3_S | 0.15GB |
| [InsuranceSenTranV2.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/sabdultawab_-_InsuranceSenTranV2-gguf/blob/main/InsuranceSenTranV2.Q3_K_S.gguf) | Q3_K_S | 0.15GB |
| [InsuranceSenTranV2.IQ3_M.gguf](https://huggingface.co/RichardErkhov/sabdultawab_-_InsuranceSenTranV2-gguf/blob/main/InsuranceSenTranV2.IQ3_M.gguf) | IQ3_M | 0.16GB |
| [InsuranceSenTranV2.Q3_K.gguf](https://huggingface.co/RichardErkhov/sabdultawab_-_InsuranceSenTranV2-gguf/blob/main/InsuranceSenTranV2.Q3_K.gguf) | Q3_K | 0.17GB |
| [InsuranceSenTranV2.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/sabdultawab_-_InsuranceSenTranV2-gguf/blob/main/InsuranceSenTranV2.Q3_K_M.gguf) | Q3_K_M | 0.17GB |
| [InsuranceSenTranV2.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/sabdultawab_-_InsuranceSenTranV2-gguf/blob/main/InsuranceSenTranV2.Q3_K_L.gguf) | Q3_K_L | 0.18GB |
| [InsuranceSenTranV2.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/sabdultawab_-_InsuranceSenTranV2-gguf/blob/main/InsuranceSenTranV2.IQ4_XS.gguf) | IQ4_XS | 0.18GB |
| [InsuranceSenTranV2.Q4_0.gguf](https://huggingface.co/RichardErkhov/sabdultawab_-_InsuranceSenTranV2-gguf/blob/main/InsuranceSenTranV2.Q4_0.gguf) | Q4_0 | 0.19GB |
| [InsuranceSenTranV2.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/sabdultawab_-_InsuranceSenTranV2-gguf/blob/main/InsuranceSenTranV2.IQ4_NL.gguf) | IQ4_NL | 0.19GB |
| [InsuranceSenTranV2.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/sabdultawab_-_InsuranceSenTranV2-gguf/blob/main/InsuranceSenTranV2.Q4_K_S.gguf) | Q4_K_S | 0.19GB |
| [InsuranceSenTranV2.Q4_K.gguf](https://huggingface.co/RichardErkhov/sabdultawab_-_InsuranceSenTranV2-gguf/blob/main/InsuranceSenTranV2.Q4_K.gguf) | Q4_K | 0.2GB |
| [InsuranceSenTranV2.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/sabdultawab_-_InsuranceSenTranV2-gguf/blob/main/InsuranceSenTranV2.Q4_K_M.gguf) | Q4_K_M | 0.2GB |
| [InsuranceSenTranV2.Q4_1.gguf](https://huggingface.co/RichardErkhov/sabdultawab_-_InsuranceSenTranV2-gguf/blob/main/InsuranceSenTranV2.Q4_1.gguf) | Q4_1 | 0.2GB |
| [InsuranceSenTranV2.Q5_0.gguf](https://huggingface.co/RichardErkhov/sabdultawab_-_InsuranceSenTranV2-gguf/blob/main/InsuranceSenTranV2.Q5_0.gguf) | Q5_0 | 0.22GB |
| [InsuranceSenTranV2.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/sabdultawab_-_InsuranceSenTranV2-gguf/blob/main/InsuranceSenTranV2.Q5_K_S.gguf) | Q5_K_S | 0.22GB |
| [InsuranceSenTranV2.Q5_K.gguf](https://huggingface.co/RichardErkhov/sabdultawab_-_InsuranceSenTranV2-gguf/blob/main/InsuranceSenTranV2.Q5_K.gguf) | Q5_K | 0.23GB |
| [InsuranceSenTranV2.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/sabdultawab_-_InsuranceSenTranV2-gguf/blob/main/InsuranceSenTranV2.Q5_K_M.gguf) | Q5_K_M | 0.23GB |
| [InsuranceSenTranV2.Q5_1.gguf](https://huggingface.co/RichardErkhov/sabdultawab_-_InsuranceSenTranV2-gguf/blob/main/InsuranceSenTranV2.Q5_1.gguf) | Q5_1 | 0.24GB |
| [InsuranceSenTranV2.Q6_K.gguf](https://huggingface.co/RichardErkhov/sabdultawab_-_InsuranceSenTranV2-gguf/blob/main/InsuranceSenTranV2.Q6_K.gguf) | Q6_K | 0.26GB |
| [InsuranceSenTranV2.Q8_0.gguf](https://huggingface.co/RichardErkhov/sabdultawab_-_InsuranceSenTranV2-gguf/blob/main/InsuranceSenTranV2.Q8_0.gguf) | Q8_0 | 0.33GB |
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
TahaGorji/GPT2-Better-SEARCH
|
TahaGorji
| 2025-06-07T20:50:19Z
| 0
| 0
| null |
[
"base_model:openai-community/gpt2-medium",
"base_model:finetune:openai-community/gpt2-medium",
"license:mit",
"region:us"
] | null | 2025-06-07T20:39:11Z
|
---
license: mit
base_model:
- openai-community/gpt2-medium
---
# GPT2-Better-SEARCH
We evaluated **gpt2-medium**; it is not very good, but it is fast and light.
We try to improve it using only settings and prompts, without changing the base model.
We add a **DeepSearch** step that searches Wikipedia or Google, retrieves data, and sends that data together with the user input into **gpt2-medium**.
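As a rough illustration of the idea (this is not the project's actual Chat.py; the `wikipedia` package and the prompt layout are assumptions):
```python
# Hedged sketch of the DeepSearch idea: retrieve background data, then feed
# it together with the user input into gpt2-medium.
import wikipedia
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2-medium")

def deep_search_chat(user_input: str) -> str:
    # DeepSearch step: pull a short Wikipedia summary as context.
    try:
        context = wikipedia.summary(user_input, sentences=3)
    except Exception:  # page missing / ambiguous query
        context = ""
    # Send the retrieved data and the user input into the base model.
    prompt = f"Context: {context}\nQuestion: {user_input}\nAnswer:"
    out = generator(prompt, max_new_tokens=100, do_sample=True)
    return out[0]["generated_text"][len(prompt):].strip()

print(deep_search_chat("theory of relativity"))
```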
### Now
Now we have a better model, but not yet a **Cool** model!
### Target Project
Try to improve the model with settings, prompts, and similar techniques, without changing the base!
### Use
Run Chat.py to chat with the new model.
|
Kromtao/c06a8f8f-943d-4d9c-ada3-49aff8d7e24f
|
Kromtao
| 2025-06-07T20:49:18Z
| 0
| 0
|
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:NousResearch/Yarn-Llama-2-7b-64k",
"base_model:adapter:NousResearch/Yarn-Llama-2-7b-64k",
"region:us"
] | null | 2025-06-07T17:42:35Z
|
---
library_name: peft
base_model: NousResearch/Yarn-Llama-2-7b-64k
tags:
- axolotl
- generated_from_trainer
model-index:
- name: c06a8f8f-943d-4d9c-ada3-49aff8d7e24f
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Yarn-Llama-2-7b-64k
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- ae3970fd17710b2c_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/ae3970fd17710b2c_train_data.json
type:
field_instruction: instruct
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
do_eval: true
eval_batch_size: 8
eval_max_new_tokens: 128
eval_steps: 800
evals_per_epoch: null
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: Kromtao/c06a8f8f-943d-4d9c-ada3-49aff8d7e24f
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
local_rank: null
logging_steps: 50
lora_alpha: 16
lora_dropout: 0.1
lora_fan_in_fan_out: false
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_steps: 800
micro_batch_size: 8
mlflow_experiment_name: /ephemeral/tmp/ae3970fd17710b2c_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 10
optimizer: adamw_torch_fused
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: false
sample_packing: false
save_steps: 200
saves_per_epoch: null
seed: 9102
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: c56723f5-ecbf-46f0-935b-e0db8c0b12bc
wandb_project: kr02
wandb_run: your_name
wandb_runid: c56723f5-ecbf-46f0-935b-e0db8c0b12bc
warmup_steps: 100
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# c06a8f8f-943d-4d9c-ada3-49aff8d7e24f
This model is a fine-tuned version of [NousResearch/Yarn-Llama-2-7b-64k](https://huggingface.co/NousResearch/Yarn-Llama-2-7b-64k) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6727
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 9102
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: AdamW (torch fused) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 800
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0001 | 1 | 1.8795 |
| 2.6151 | 0.0959 | 800 | 0.6727 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
RichardErkhov/itpossible_-_JiuZhou-Instruct-v0.1-8bits
|
RichardErkhov
| 2025-06-07T20:48:46Z
| 0
| 0
| null |
[
"safetensors",
"mistral",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-06-07T20:45:53Z
|
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
JiuZhou-Instruct-v0.1 - bnb 8bits
- Model creator: https://huggingface.co/itpossible/
- Original model: https://huggingface.co/itpossible/JiuZhou-Instruct-v0.1/
Original model description:
<div align="center">
<h1>
JiuZhou: Open Foundation Language Models for Geoscience
</h1>
</div>
## 🎉 News
- [2024-12-31] **Article [JiuZhou: Open Foundation Language Models and Effective Pre-training Framework for Geoscience](https://www.tandfonline.com/doi/full/10.1080/17538947.2025.2449708) has been accepted for publication in the *International Journal of Digital Earth***. [Code and Data](https://github.com/THU-ESIS/JiuZhou).
- [2024-10-11] WeChat article: [PreparedLLM: Effective Pre-pretraining Framework for Domain-specific Large Language Models](https://mp.weixin.qq.com/s/ugJQ9tbp6Y87xA3TOWteqw).
- [2024-09-06] Released [ClimateChat](https://huggingface.co/itpossible/ClimateChat) instruct model.
- [2024-08-31] **Article [PreparedLLM: Effective Pre-pretraining Framework for Domain-specific Large Language Models](https://www.tandfonline.com/doi/full/10.1080/20964471.2024.2396159) has been accepted for publication in the *Big Earth Data* journal**.
- [2024-08-31] Released [Chinese-Mistral-7B-Instruct-v0.2](https://huggingface.co/itpossible/Chinese-Mistral-7B-Instruct-v0.2) instruct model. Significant improvements in language understanding and multi-turn dialogue capabilities.
- [2024-06-30] Released [JiuZhou-Instruct-v0.2](https://huggingface.co/itpossible/JiuZhou-Instruct-v0.2) instruct model. Significant improvements in language understanding and multi-turn dialogue capabilities.
- [2024-05-15] WeChat Article: [Chinese Vocabulary Expansion Incremental Pretraining for Large Language Models: Chinese-Mistral Released](https://mp.weixin.qq.com/s/PMQmRCZMWosWMfgKRBjLlQ).
- [2024-04-04] Released [Chinese-Mistral-7B-Instruct-v0.1](https://huggingface.co/itpossible/Chinese-Mistral-7B-Instruct-v0.1) instruct model.
- [2024-03-31] Released [Chinese-Mistral-7B-v0.1](https://huggingface.co/itpossible/Chinese-Mistral-7B) base model.
- [2024-03-15] Released the base version [JiuZhou-base](https://huggingface.co/itpossible/JiuZhou-base), instruct version [JiuZhou-instruct-v0.1](https://huggingface.co/itpossible/JiuZhou-Instruct-v0.1), and [intermediate checkpoints](https://huggingface.co/itpossible).
## Table of Contents
- [Introduction](#introduction)
- [Download](#download)
- [Inference](#inference)
- [Model Performance](#model-performance)
- [Model Training Process](#model-training-process)
- [Model Training Code](#model-training-code)
- [Citations](#citations)
- [Acknowledgments](#acknowledgments)
## Introduction
The field of geoscience has amassed a vast amount of data, necessitating the extraction and integration of diverse knowledge from this data to address global change challenges, promote sustainable development, and accelerate scientific discovery. Foundation language models initially learn and integrate knowledge autonomously through self-supervised pre-training on extensive text data. Subsequently, they acquire the capability to solve geoscience problems through instruction tuning. However, when the foundational language models lack sufficient geoscience expertise, instruction tuning with relevant data can lead to the generation of content that is inconsistent with established facts. To improve the model's accuracy and practicality, a robust geoscience foundational language model is urgently needed.<br>
This study uses [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) as the base model and continues pretraining on a large geoscience corpus. It also incorporates the [domain-specific large language model *pre*-pretraining framework (PreparedLLM)](https://www.tandfonline.com/doi/full/10.1080/20964471.2024.2396159) and the "two-stage pre-adaptation pre-training" algorithm to build the geoscience large language model, JiuZhou.
## Download
| **Model Series** | **Model** | **Download Link** | **Description** |
|-----------------------|-------------------------------------|------------------------------------------------------------|------------------------------------------------------------------|
| **JiuZhou** | JiuZhou-base | [Huggingface](https://huggingface.co/itpossible/JiuZhou-base) | Base model (Rich in geoscience knowledge) |
| **JiuZhou** | JiuZhou-Instruct-v0.1 | [Huggingface](https://huggingface.co/itpossible/Chinese-Mistral-7B-Instruct-v0.1) | Instruct model (Instruction alignment caused a loss of some geoscience knowledge, but it has instruction-following ability) <br> LoRA fine-tuned on Alpaca_GPT4 in both Chinese and English and GeoSignal |
| **JiuZhou** | JiuZhou-Instruct-v0.2 | [HuggingFace](https://huggingface.co/itpossible/Chinese-Mistral-7B-Instruct-v0.2)<br>[Wisemodel](https://wisemodel.cn/models/itpossible/Chinese-Mistral-7B-Instruct-v0.2) | Instruct model (Instruction alignment caused a loss of some geoscience knowledge, but it has instruction-following ability) <br> Fine-tuned with high-quality general instruction data |
| **ClimateChat** | ClimateChat | [HuggingFace](https://huggingface.co/itpossible/ClimateChat)<br>[Wisemodel](https://wisemodel.cn/models/itpossible/ClimateChat) | Instruct model <br> Fine-tuned on JiuZhou-base for instruction following |
| **Chinese-Mistral** | Chinese-Mistral-7B | [HuggingFace](https://huggingface.co/itpossible/Chinese-Mistral-7B-v0.1)<br>[Wisemodel](https://wisemodel.cn/models/itpossible/Chinese-Mistral-7B-v0.1)<br>[ModelScope](https://www.modelscope.cn/models/itpossible/Chinese-Mistral-7B-v0.1) | Base model |
| **Chinese-Mistral** | Chinese-Mistral-7B-Instruct-v0.1 | [HuggingFace](https://huggingface.co/itpossible/Chinese-Mistral-7B-Instruct-v0.1)<br>[Wisemodel](https://wisemodel.cn/models/itpossible/Chinese-Mistral-7B-Instruct-v0.1)<br>[ModelScope](https://www.modelscope.cn/models/itpossible/Chinese-Mistral-7B-Instruct-v0.1) | Instruct model <br> LoRA fine-tuned with Alpaca_GPT4 in both Chinese and English |
| **Chinese-Mistral** | Chinese-Mistral-7B-Instruct-v0.2 | [HuggingFace](https://huggingface.co/itpossible/Chinese-Mistral-7B-Instruct-v0.2)<br>[Wisemodel](https://wisemodel.cn/models/itpossible/Chinese-Mistral-7B-Instruct-v0.2) | Instruct model <br> LoRA fine-tuned with a million high-quality instructions |
| **PreparedLLM** | Prepared-Llama | [Huggingface](https://huggingface.co/itpossible/Prepared-Llama)<br>[Wisemodel](https://wisemodel.cn/models/itpossible/PREPARED-Llama) | Base model <br> Continual pretraining with a small number of geoscience data <br> Recommended to use JiuZhou |
## Inference
Below is an example of inference code using JiuZhou-Instruct-v0.2.
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
device = torch.device("cuda:0") if torch.cuda.is_available() else torch.device("cpu")
model_path = "itpossible/JiuZhou-Instruct-v0.2"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path, torch_dtype=torch.bfloat16, device_map=device)
text = "What is geoscience?"
messages = [{"role": "user", "content": text}]
inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to(device)
outputs_id = model.generate(inputs, max_new_tokens=600, do_sample=True)
outputs = tokenizer.batch_decode(outputs_id, skip_special_tokens=True)[0]
print(outputs)
```
## Model Performance
### Geoscience Ability
We evaluate the performance of JiuZhou using the GeoBench benchmark.<br>
JiuZhou outperforms GPT-3.5 in objective tasks:
<p align="center">
<br>
<img src="image/objective_score.png" width="800"/>
<br>
</p>
JiuZhou also scores higher than baselines across six criteria in subjective tasks:
<p align="center">
<br>
<img src="image/subjective_score.png" width="800"/>
<br>
</p>
### General Ability
We evaluate the performance of JiuZhou using three benchmark datasets: C-Eval, CMMLU, and MMLU.<br>
Compared to other variants of Llama and Mistral models, JiuZhou shows outstanding performance:
<p align="center">
<br>
<img src="image/general_score.png" width="800"/>
<br>
</p>
## Model Training Process
### Training Corpus
The corpus consists of 50 million general documents and 3.4 million geoscience-related documents.
<p align="center">
<br>
<img src="image/JiuZhou-Corpus.png" width="800"/>
<br>
</p>
### Training Framework
We use the JiuZhou-Framework proposed in this study.
<p align="center">
<br>
<img src="image/JiuZhou-Framework.png" width="800"/>
<br>
</p>
### Two-stage Pre-adaptation Pre-training (TSPT)
TSPT improves the efficiency of using limited geoscience data and overcomes some of the technical bottlenecks in continual pretraining for LLMs.<br>
The difference between TSPT and single-stage training algorithms:
<p align="center">
<br>
<img src="image/TSPT.png" width="800"/>
<br>
</p>
Comparison of TSPT and one-stage pre-training algorithm performance:
<p align="center">
<br>
<img src="image/TSPT_score.png" width="800"/>
<br>
</p>
## Model Training Code
We use [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory) to fine-tune JiuZhou.
### Project Deployment
```bash
git clone https://github.com/THU-ESIS/JiuZhou.git
cd JiuZhou
pip install -e ".[torch,metrics]"
```
### Model Training
Pre-training:
```bash
llamafactory-cli train examples/train_lora/JiuZhou_pretrain_sft.yaml
```
Instruction-tuning:
```bash
llamafactory-cli train examples/train_lora/JiuZhou_lora_sft.yaml
```
Chat with the fine-tuned JiuZhou:
```bash
llamafactory-cli chat examples/inference/JiuZhou_lora_sft.yaml
```
Merge the instruction-tuned LoRA weights with the original JiuZhou weights:
```bash
llamafactory-cli export examples/merge_lora/JiuZhou_lora_sft.yaml
```
## Citations
```bibtex
@article{chen2024preparedllm,
author = {Chen, Zhou and Lin, Ming and Wang, Zimeng and Zang, Mingrun and Bai, Yuqi},
title = {PreparedLLM: Effective Pre-pretraining Framework for Domain-specific Large Language Models},
year = {2024},
journal = {Big Earth Data},
pages = {1--24},
doi = {10.1080/20964471.2024.2396159},
url = {https://doi.org/10.1080/20964471.2024.2396159}
}
```
## Acknowledgments
- [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory)
- [OpenCompass](https://github.com/open-compass/opencompass)
- [K2](https://github.com/davendw49/k2)
- [GeoGalactica](https://github.com/geobrain-ai/geogalactica)
- [BB-GeoGPT](https://github.com/AGI-GIS/BB-GeoGPT)
|
8-VIDEOS-18-sajal-malik-Viral-Videos/Original.FULL.VIDEO.sajal.malik.Viral.Video.Tutorial.Official
|
8-VIDEOS-18-sajal-malik-Viral-Videos
| 2025-06-07T20:46:41Z
| 0
| 0
| null |
[
"region:us"
] | null | 2025-06-07T20:45:52Z
|
<p><a rel="nofollow" href="https://viralflix.xyz/leaked/?eid">►►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► 𝙁𝙪𝙡𝙡 𝙑𝙞𝙙𝙚𝙤️​</a></p>
<a rel="nofollow" href="https://viralflix.xyz/leaked/?eid">🔴►𝐂𝐋𝐈𝐂𝐊 𝐇𝐄𝐑𝐄 🌐==►► 𝐃𝐨𝐰𝐧𝐥𝐨𝐚𝐝 𝐍𝐨𝐰⬇️⬇️​</a>
<a rel="nofollow" href="https://viralflix.xyz/leaked/?eid"><img src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif" alt="fsd"></a>
|
pirahtays/DeepSeek-R1-Distill-Qwen-7B-mlx-4Bit
|
pirahtays
| 2025-06-07T20:46:10Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"mlx",
"conversational",
"base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-7B",
"base_model:quantized:deepseek-ai/DeepSeek-R1-Distill-Qwen-7B",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"region:us"
] |
text-generation
| 2025-06-07T20:45:55Z
|
---
license: mit
library_name: transformers
tags:
- mlx
base_model: deepseek-ai/DeepSeek-R1-Distill-Qwen-7B
---
# pirahtays/DeepSeek-R1-Distill-Qwen-7B-mlx-4Bit
The Model [pirahtays/DeepSeek-R1-Distill-Qwen-7B-mlx-4Bit](https://huggingface.co/pirahtays/DeepSeek-R1-Distill-Qwen-7B-mlx-4Bit) was converted to MLX format from [deepseek-ai/DeepSeek-R1-Distill-Qwen-7B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B) using mlx-lm version **0.22.3**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("pirahtays/DeepSeek-R1-Distill-Qwen-7B-mlx-4Bit")
prompt="hello"
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
armeiski/ppo-LunarLander-v2
|
armeiski
| 2025-06-07T20:46:01Z
| 0
| 0
|
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-06-07T20:45:41Z
|
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 247.13 +/- 28.07
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
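Until then, here is a minimal loading sketch (the checkpoint filename below is an assumption; check the Files & versions tab for the actual name):
```python
# Hedged sketch: loading this checkpoint with huggingface_sb3 and running it.
# "LunarLander-v2" requires gymnasium[box2d].
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

checkpoint = load_from_hub(
    repo_id="armeiski/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",  # hypothetical filename
)
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
obs, _ = env.reset()
for _ in range(1000):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    if terminated or truncated:
        obs, _ = env.reset()
env.close()
```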
|
RichardErkhov/CamiloGC93_-_bge-large-en-v1.5-soft-skills-gguf
|
RichardErkhov
| 2025-06-07T20:45:57Z
| 0
| 0
| null |
[
"gguf",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us",
"feature-extraction"
] | null | 2025-06-07T20:31:53Z
|
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
bge-large-en-v1.5-soft-skills - GGUF
- Model creator: https://huggingface.co/CamiloGC93/
- Original model: https://huggingface.co/CamiloGC93/bge-large-en-v1.5-soft-skills/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [bge-large-en-v1.5-soft-skills.Q2_K.gguf](https://huggingface.co/RichardErkhov/CamiloGC93_-_bge-large-en-v1.5-soft-skills-gguf/blob/main/bge-large-en-v1.5-soft-skills.Q2_K.gguf) | Q2_K | 0.13GB |
| [bge-large-en-v1.5-soft-skills.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/CamiloGC93_-_bge-large-en-v1.5-soft-skills-gguf/blob/main/bge-large-en-v1.5-soft-skills.IQ3_XS.gguf) | IQ3_XS | 0.14GB |
| [bge-large-en-v1.5-soft-skills.IQ3_S.gguf](https://huggingface.co/RichardErkhov/CamiloGC93_-_bge-large-en-v1.5-soft-skills-gguf/blob/main/bge-large-en-v1.5-soft-skills.IQ3_S.gguf) | IQ3_S | 0.15GB |
| [bge-large-en-v1.5-soft-skills.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/CamiloGC93_-_bge-large-en-v1.5-soft-skills-gguf/blob/main/bge-large-en-v1.5-soft-skills.Q3_K_S.gguf) | Q3_K_S | 0.15GB |
| [bge-large-en-v1.5-soft-skills.IQ3_M.gguf](https://huggingface.co/RichardErkhov/CamiloGC93_-_bge-large-en-v1.5-soft-skills-gguf/blob/main/bge-large-en-v1.5-soft-skills.IQ3_M.gguf) | IQ3_M | 0.16GB |
| [bge-large-en-v1.5-soft-skills.Q3_K.gguf](https://huggingface.co/RichardErkhov/CamiloGC93_-_bge-large-en-v1.5-soft-skills-gguf/blob/main/bge-large-en-v1.5-soft-skills.Q3_K.gguf) | Q3_K | 0.17GB |
| [bge-large-en-v1.5-soft-skills.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/CamiloGC93_-_bge-large-en-v1.5-soft-skills-gguf/blob/main/bge-large-en-v1.5-soft-skills.Q3_K_M.gguf) | Q3_K_M | 0.17GB |
| [bge-large-en-v1.5-soft-skills.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/CamiloGC93_-_bge-large-en-v1.5-soft-skills-gguf/blob/main/bge-large-en-v1.5-soft-skills.Q3_K_L.gguf) | Q3_K_L | 0.18GB |
| [bge-large-en-v1.5-soft-skills.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/CamiloGC93_-_bge-large-en-v1.5-soft-skills-gguf/blob/main/bge-large-en-v1.5-soft-skills.IQ4_XS.gguf) | IQ4_XS | 0.18GB |
| [bge-large-en-v1.5-soft-skills.Q4_0.gguf](https://huggingface.co/RichardErkhov/CamiloGC93_-_bge-large-en-v1.5-soft-skills-gguf/blob/main/bge-large-en-v1.5-soft-skills.Q4_0.gguf) | Q4_0 | 0.19GB |
| [bge-large-en-v1.5-soft-skills.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/CamiloGC93_-_bge-large-en-v1.5-soft-skills-gguf/blob/main/bge-large-en-v1.5-soft-skills.IQ4_NL.gguf) | IQ4_NL | 0.19GB |
| [bge-large-en-v1.5-soft-skills.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/CamiloGC93_-_bge-large-en-v1.5-soft-skills-gguf/blob/main/bge-large-en-v1.5-soft-skills.Q4_K_S.gguf) | Q4_K_S | 0.19GB |
| [bge-large-en-v1.5-soft-skills.Q4_K.gguf](https://huggingface.co/RichardErkhov/CamiloGC93_-_bge-large-en-v1.5-soft-skills-gguf/blob/main/bge-large-en-v1.5-soft-skills.Q4_K.gguf) | Q4_K | 0.2GB |
| [bge-large-en-v1.5-soft-skills.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/CamiloGC93_-_bge-large-en-v1.5-soft-skills-gguf/blob/main/bge-large-en-v1.5-soft-skills.Q4_K_M.gguf) | Q4_K_M | 0.2GB |
| [bge-large-en-v1.5-soft-skills.Q4_1.gguf](https://huggingface.co/RichardErkhov/CamiloGC93_-_bge-large-en-v1.5-soft-skills-gguf/blob/main/bge-large-en-v1.5-soft-skills.Q4_1.gguf) | Q4_1 | 0.2GB |
| [bge-large-en-v1.5-soft-skills.Q5_0.gguf](https://huggingface.co/RichardErkhov/CamiloGC93_-_bge-large-en-v1.5-soft-skills-gguf/blob/main/bge-large-en-v1.5-soft-skills.Q5_0.gguf) | Q5_0 | 0.22GB |
| [bge-large-en-v1.5-soft-skills.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/CamiloGC93_-_bge-large-en-v1.5-soft-skills-gguf/blob/main/bge-large-en-v1.5-soft-skills.Q5_K_S.gguf) | Q5_K_S | 0.22GB |
| [bge-large-en-v1.5-soft-skills.Q5_K.gguf](https://huggingface.co/RichardErkhov/CamiloGC93_-_bge-large-en-v1.5-soft-skills-gguf/blob/main/bge-large-en-v1.5-soft-skills.Q5_K.gguf) | Q5_K | 0.23GB |
| [bge-large-en-v1.5-soft-skills.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/CamiloGC93_-_bge-large-en-v1.5-soft-skills-gguf/blob/main/bge-large-en-v1.5-soft-skills.Q5_K_M.gguf) | Q5_K_M | 0.23GB |
| [bge-large-en-v1.5-soft-skills.Q5_1.gguf](https://huggingface.co/RichardErkhov/CamiloGC93_-_bge-large-en-v1.5-soft-skills-gguf/blob/main/bge-large-en-v1.5-soft-skills.Q5_1.gguf) | Q5_1 | 0.24GB |
| [bge-large-en-v1.5-soft-skills.Q6_K.gguf](https://huggingface.co/RichardErkhov/CamiloGC93_-_bge-large-en-v1.5-soft-skills-gguf/blob/main/bge-large-en-v1.5-soft-skills.Q6_K.gguf) | Q6_K | 0.26GB |
| [bge-large-en-v1.5-soft-skills.Q8_0.gguf](https://huggingface.co/RichardErkhov/CamiloGC93_-_bge-large-en-v1.5-soft-skills-gguf/blob/main/bge-large-en-v1.5-soft-skills.Q8_0.gguf) | Q8_0 | 0.33GB |
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Topg222/Oussama
|
Topg222
| 2025-06-07T20:45:42Z
| 0
| 0
| null |
[
"license:artistic-2.0",
"region:us"
] | null | 2025-06-07T20:45:42Z
|
---
license: artistic-2.0
---
|
RichardErkhov/GoldenLlama_-_krx_sg_qwen2.5_7b_it_v3-8bits
|
RichardErkhov
| 2025-06-07T20:45:39Z
| 0
| 0
| null |
[
"safetensors",
"qwen2",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-06-07T20:42:26Z
|
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
krx_sg_qwen2.5_7b_it_v3 - bnb 8bits
- Model creator: https://huggingface.co/GoldenLlama/
- Original model: https://huggingface.co/GoldenLlama/krx_sg_qwen2.5_7b_it_v3/
Original model description:
---
license: apache-2.0
language:
- ko
- en
base_model:
- unsloth/Qwen2.5-7B-Instruct
pipeline_tag: text-generation
tags:
- krx
- unsloth
- trl
- sft
---
|
RichardErkhov/futuremojo_-_test-3.1-8B-4bits
|
RichardErkhov
| 2025-06-07T20:44:34Z
| 0
| 0
| null |
[
"safetensors",
"llama",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-06-07T20:42:46Z
|
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
test-3.1-8B - bnb 4bits
- Model creator: https://huggingface.co/futuremojo/
- Original model: https://huggingface.co/futuremojo/test-3.1-8B/
Original model description:
---
base_model: unsloth/Meta-Llama-3.1-8B
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
---
# Uploaded model
- **Developed by:** futuremojo
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Meta-Llama-3.1-8B
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
RichardErkhov/homeb82784_-_qwen2.5-7b-instruct-v1.2.0-8bits
|
RichardErkhov
| 2025-06-07T20:43:59Z
| 0
| 0
| null |
[
"safetensors",
"qwen2",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-06-07T20:40:49Z
|
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
qwen2.5-7b-instruct-v1.2.0 - bnb 8bits
- Model creator: https://huggingface.co/homeb82784/
- Original model: https://huggingface.co/homeb82784/qwen2.5-7b-instruct-v1.2.0/
Original model description:
---
base_model: homeb82784/qwen2.5-7b-instruct-v1.2
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** homeb82784
- **License:** apache-2.0
- **Finetuned from model :** homeb82784/qwen2.5-7b-instruct-v1.2
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
glif-loradex-trainer/an303042_Grit_Portrait
|
glif-loradex-trainer
| 2025-06-07T20:43:42Z
| 0
| 0
|
diffusers
|
[
"diffusers",
"text-to-image",
"template:sd-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:finetune:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us",
"flux",
"lora",
"base_model:adapter:black-forest-labs/FLUX.1-dev"
] |
text-to-image
| 2025-06-07T20:43:14Z
|
---
tags:
- diffusers
- text-to-image
- template:sd-lora
- base_model:black-forest-labs/FLUX.1-dev
- base_model:finetune:black-forest-labs/FLUX.1-dev
- license:other
- region:us
- flux
- lora
widget:
- output:
url: samples/1749328850026__000003000_0.jpg
text: wounded centaur, mythical creature gr1tp0r
- output:
url: samples/1749328875261__000003000_1.jpg
text: ruins of athens, snake gr1tp0r
- output:
url: samples/1749328900565__000003000_2.jpg
text: silver vampire sword gr1tp0r
- output:
url: samples/1749328925925__000003000_3.jpg
text: gr1tp0r nun standing at busy intersection
- output:
url: samples/1749328951199__000003000_4.jpg
text: gr1tp0r dog by fire hydrant
- output:
url: samples/1749328976420__000003000_5.jpg
text: gr1tp0r, close-up portrait of a goose with a scarred beak and cracked feathers,
wearing dark scratched sunglasses, the bird's head tilted slightly forward
in a menacing posture, high-contrast lighting emphasizing texture in the feathers
and reflections on the lenses, grim expression, dramatic shadows falling across
the face, isolated against a black background
base_model: black-forest-labs/FLUX.1-dev
trigger: "gr1tp0r"
instance_prompt: "gr1tp0r"
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# Grit_Portrait
Model trained with [AI Toolkit by Ostris](https://github.com/ostris/ai-toolkit) under the [Glif Loradex program](https://huggingface.co/glif-loradex-trainer) by [Glif](https://glif.app) user `an303042`.
<Gallery />
## Trigger words
You should use `gr1tp0r` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/glif-loradex-trainer/an303042_Grit_Portrait/tree/main) them in the Files & versions tab.
## License
This model is licensed under the [flux-1-dev-non-commercial-license](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md).
|
Disya/DS-R1-Qwen3-8B-ArliAI-RpR-v4-exl2-8bpw-h8
|
Disya
| 2025-06-07T20:39:59Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"en",
"base_model:ArliAI/DS-R1-Qwen3-8B-ArliAI-RpR-v4-Small",
"base_model:quantized:ArliAI/DS-R1-Qwen3-8B-ArliAI-RpR-v4-Small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"exl2",
"region:us"
] |
text-generation
| 2025-06-07T20:32:44Z
|
---
license: apache-2.0
thumbnail: >-
https://cdn-uploads.huggingface.co/production/uploads/6625f4a8a8d1362ebcc3851a/hIZ2ZcaDyfYLT9Yd4pfOs.jpeg
language:
- en
base_model:
- ArliAI/DS-R1-Qwen3-8B-ArliAI-RpR-v4-Small
library_name: transformers
pipeline_tag: text-generation
---
# DS-R1-Qwen3-8B-ArliAI-RpR-v4-Small
<img src="https://cdn-uploads.huggingface.co/production/uploads/6625f4a8a8d1362ebcc3851a/hIZ2ZcaDyfYLT9Yd4pfOs.jpeg" alt="clickbait" width="500">
<small>Image generated using Arli AI Image Generation https://www.arliai.com/image-generation</small>
## RpR v4 Changes:
The best RP/creative model series from ArliAI yet again. This time it is based on DS-R1-0528-Qwen3-8B-Fast for a smaller memory footprint.
- Reduced repetitions and impersonation
To add to the creativity and out-of-the-box thinking of RpR v3, a more advanced filtering method was used to remove examples where the LLM repeated similar phrases or talked for the user. Any repetition or impersonation that still happens is due to how the base QwQ model was trained, not to the RpR dataset.
- Increased training sequence length
The training sequence length was increased to 16K to help awareness and memory even in longer chats.
## RpR Series Overview: Building on RPMax with Reasoning
RpR (RolePlay with Reasoning) is a new series of models from ArliAI. This series **builds directly upon the successful dataset curation methodology and training methods developed for the RPMax series**.
RpR models use the same curated, deduplicated RP and creative writing dataset used for RPMax, with a focus on variety to ensure high creativity and minimize cross-context repetition. Users familiar with RPMax will recognize the unique, non-repetitive writing style unlike other finetuned-for-RP models.
With the release of QwQ as the first high-performing open-source reasoning model that can be easily trained, it was clear that the available instruct and creative-writing reasoning datasets contain only one response per example. This type of single-response dataset, when used for training reasoning models, causes degraded output quality in long multi-turn chats, which is why Arli AI decided to create a real RP model capable of long multi-turn chat with reasoning.
In order to create RpR, we first had to actually create the reasoning RP dataset by re-processing our existing known-good RPMax dataset into a reasoning dataset. This was possible by using the base QwQ Instruct model itself to create the reasoning process for every turn in the RPMax dataset conversation examples, which is then further refined in order to make sure the reasoning is in-line with the actual response examples from the dataset.
Another important thing to get right is to make sure the model is trained on examples that present reasoning blocks the same way it encounters them during inference, that is, never seeing the reasoning blocks in its context. To achieve this, the training run was completed using axolotl with a manual template-free segments dataset, so that the model is never trained to see the reasoning block in the context, just as it will be used at inference time.
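As a rough illustration (following axolotl's template-free `input_output` format; the chat tokens and text shown are assumptions, not the actual RpR data), a single training row could look like:
```python
# Hedged sketch of one row in axolotl's template-free "input_output" format.
# Segments with label=False are masked out of the loss; note the <think>
# block appears only in the trained output, never in the prior context.
example = {
    "segments": [
        {"label": False,
         "text": "<|im_start|>user\nDescribe the tavern scene.<|im_end|>\n"},
        {"label": True,
         "text": "<think>The user wants atmosphere and sensory detail..."
                 "</think>\nThe tavern hums with low conversation...<|im_end|>"},
    ]
}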
The result of training on this dataset with this method is consistently coherent and interesting outputs, even in long multi-turn RP chats. This is, as far as we know, the first correctly-trained reasoning model trained for RP and creative writing.
You can access the model at https://arliai.com and we also have a models ranking page at https://www.arliai.com/models-ranking
Ask questions in our new Discord Server https://discord.com/invite/t75KbPgwhk or on our subreddit https://www.reddit.com/r/ArliAI/
## Model Description
DS-R1-Qwen3-8B-ArliAI-RpR-v4-Small is part of the RpR v4 series. It is an 8-billion-parameter model fine-tuned using the RpR dataset, which is based on the curated RPMax dataset, combined with techniques to maintain reasoning abilities in long multi-turn chats.
### Recommended Samplers
- RpR models do not work well with repetition-penalty-style samplers, even more advanced ones such as XTC or DRY.
- They work best with simple sampler settings and with being allowed to reason for a long time (high max tokens).
- You can download the ST master export uploaded in the files section of this repo as well.
Recommended to first start with (see the sketch after this list for applying these through an OpenAI-compatible API):
* **Temperature**: 1.0
* **MinP**: 0.02
* **TopK**: 40
* **Response Tokens**: 2048+
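As a minimal sketch, assuming a vLLM-style OpenAI-compatible endpoint (the `base_url`, `api_key`, and prompt below are placeholders, not ArliAI's actual values), the settings above map onto request parameters like this:

```python
# Minimal sketch: applying the recommended samplers through an
# OpenAI-compatible endpoint. base_url, api_key, and prompt are placeholders.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="ArliAI/DS-R1-Qwen3-8B-ArliAI-RpR-v4-Small",
    messages=[{"role": "user", "content": "Describe the tavern scene."}],
    temperature=1.0,   # Temperature: 1.0
    max_tokens=2048,   # Response Tokens: 2048+ (leave room for reasoning)
    extra_body={"min_p": 0.02, "top_k": 40},  # MinP / TopK via vLLM extras
)
print(response.choices[0].message.content)
```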
### Specs
* **Base Model**: deepseek-ai/DeepSeek-R1-0528-Qwen3-8B
* **Max Context Length**: 128K max with YaRN (natively 32K, same as the base model)
* **Parameters**: 8B
* **Reasoning Model**: Yes
### Training Details
* **Sequence Length**: 16384
* **Epochs**: 1 epoch training (Inherited from RPMax methods)
* **Fine-tuning Method**: RS-QLORA (Rank-Stabilized LoRA; see the PEFT sketch after this list)
* **Rank/Alpha**: 128-rank 128-alpha
* **Learning Rate**: 0.00003
* **Scheduler**: Constant
* **Gradient accumulation**: 32
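As a rough sketch (not ArliAI's actual training script), the rank, alpha, and rank-stabilized LoRA settings above can be expressed with Hugging Face PEFT like this:

```python
# Sketch only: mapping the card's LoRA hyperparameters onto peft.LoraConfig.
from peft import LoraConfig

lora_config = LoraConfig(
    r=128,               # Rank: 128
    lora_alpha=128,      # Alpha: 128
    use_rslora=True,     # rank-stabilized LoRA: scales by alpha / sqrt(r)
    lora_dropout=0.0,    # assumption: dropout is not stated in the card
    task_type="CAUSAL_LM",
)
```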
### Very Nice Training graphs :)
<img src="https://cdn-uploads.huggingface.co/production/uploads/6625f4a8a8d1362ebcc3851a/J-cD7mjdIG58BsSPpuS6x.png" alt="Train Loss" width="600">
<img src="https://cdn-uploads.huggingface.co/production/uploads/6625f4a8a8d1362ebcc3851a/T890dqrUcBYnlOzK7MXrU.png" alt="Eval Loss" width="600">
### Quantization
* **BF16**: https://huggingface.co/ArliAI/DS-R1-Qwen3-8B-ArliAI-RpR-v4-Fast
* **GGUF**: https://huggingface.co/ArliAI/DS-R1-Qwen3-8B-ArliAI-RpR-v4-Fast-GGUF
### How to use reasoning models correctly in ST
<img src="https://cdn-uploads.huggingface.co/production/uploads/6625f4a8a8d1362ebcc3851a/njVt2Vir8Isd3ApjTBmoI.png" alt="RpR ST Settings" width="600">
For any reasoning models in general, you need to make sure to set:
* Prefix is set to ONLY \<think> and the suffix is set to ONLY \</think> without any spaces or newlines (enter)
* Reply starts with \<think>
* Always add character names is unchecked
* Include names is set to never
* As always the chat template should also conform to the model being used
Note: Reasoning models work properly only if include names is set to never, since they always expect the EOS token of the user turn followed by the \<think> token in order to start reasoning before outputting their response. If include names is enabled, it will always append the character name at the end, like "Seraphina:\<eos_token>", which confuses the model about whether it should respond or reason first.
The rest of your sampler parameters can be set as you wish as usual.
If you don't see the reasoning wrapped inside the thinking block, then either your settings are still wrong and don't follow my example, or your ST version is too old to auto-parse reasoning blocks.
If the whole response ends up inside the reasoning block, then your \<think> and \</think> reasoning prefix and suffix might have an extra space or newline, or the model simply isn't a reasoning model smart enough to consistently put its reasoning between those tokens.
### If you set everything up correctly, it should look like this:
<img src="https://cdn-uploads.huggingface.co/production/uploads/6625f4a8a8d1362ebcc3851a/wFQC8Df9dLaiQGnIg_iEo.png" alt="RpR example response" width="600">
---
<details>
<summary>Details: The RPMax Foundation (Dataset & Training Philosophy)</summary>
*The following sections detail the core philosophy behind the dataset and training methodology originally developed for RPMax, which serves as the foundation for the RpR series.*
### The Goal: Reduced Repetition and Higher Creativity
The goal of the dataset curation used for both RPMax and RpR is to reduce repetition and increase the model's ability to write creatively in the different situations presented to it. The result is a model that outputs responses very differently across situations, without falling into predictable tropes.
### What is repetition and creativity?
First of all, creativity here means the variety of output the model is capable of producing, and it should not be confused with pleasant writing prose. When a model writes in a pleasant way, as a novelist would, that is not creative writing; it is just a model with a certain pleasant style of prose. So a model that writes nicely is not necessarily a creative model.
Repetition and creativity are essentially intertwined: if a model is repetitive, it can also be called un-creative, since it cannot write anything new and only repeats responses similar to ones it has produced before. There are actually two very different forms of repetition.
**In-context repetition:** When people say a model is repetitive, they usually mean one that repeats the same phrases within a single conversation. An example is a model that writes that a character "flicks her hair and..." and then starts inserting "flicks her hair and..." into every other action that character takes.
It can be said that the model is boring, but even in real writing this kind of repetition can be intentional, subtly proving a point or showcasing a character's traits in certain scenarios. So this type of repetition is not always bad, and completely discouraging it does not always improve a model's writing ability.
In this regard, RPMax and RpR are not yet focused on eliminating this type of repetition, so some in-context repetition may still appear in the outputs. Eliminating it will be the next big step for the RPMax and RpR series of models.
**Cross-context repetition:** A second, worse type of repetition is a model's tendency to repeat the same phrases or tropes in very different situations. An example is a model that repeats the infamous "shivers down my spine" phrase in wildly different conversations where that phrase doesn't fit.
This type of repetition is ALWAYS bad, as it is a sign that the model has over-fitted to the style of "creative writing" it saw most often in its training dataset. A model's tendency toward cross-context repetition also usually shows in how it picks similar repetitive names when writing stories, such as the infamous "elara" and "whispering woods".
The primary goal of the dataset curation for RPMax and RpR is to create a highly creative model by reducing cross-context repetition, as that is the type of repetition that follows you through different conversations. This is combated by making sure the dataset does not have repetitions of the same situations or characters in different example entries.
### Dataset Curation
The success of models trained on this dataset (including RPMax and now RpR) is thanks to the training method and the unique dataset created for fine-tuning. It contains as many open-source creative writing and RP datasets as could be found (all from Hugging Face), curated to weed out purely synthetic generations, since those often only dumb down the model and teach it GPT-isms (slop) rather than help.
Then Llama 3.1 8B (or a similarly capable model) is used to build a database of the characters and situations portrayed in these datasets, which is then used to de-dupe them so that there is only a single entry for any character or situation.
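For illustration only, the de-dupe step can be pictured as the sketch below; `extract_character_and_situation` stands in for the Llama-3.1-8B tagging pass and is hypothetical, not the actual RPMax pipeline.

```python
# Hypothetical sketch of the character/situation de-duplication idea.
def dedupe_examples(examples, extract_character_and_situation):
    seen = set()
    unique = []
    for ex in examples:
        character, situation = extract_character_and_situation(ex)
        key = (character.strip().lower(), situation.strip().lower())
        if key not in seen:          # keep only the first entry per pair
            seen.add(key)
            unique.append(ex)
    return unique
```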
### The Golden Rule of Fine-Tuning
Unlike the initial pre-training stage, where for the most part the more data you throw at it the better it becomes, the golden rule for fine-tuning is quality over quantity. The dataset used here is therefore orders of magnitude smaller than it would be if repeated characters and situations were kept in, but the end result is a model that does not feel like just another "in-bred" creative writing/RP model.
### Training Parameters and Unconventional Approach
The usual way is to have a low learning rate and high gradient accumulation for better loss stability, and then run multiple epochs of the training run until the loss is acceptable.
The RPMax and RpR methodology, however, uses only **one single epoch**, a low gradient accumulation, and a higher than normal learning rate. The loss curve during training is actually unstable and jumps up and down a lot, but if it is smoothed out, it is steadily decreasing over time. The theory is that this allows the models to learn from each individual example in the dataset much more, and by not showing the model the same example twice using multiple epochs, it stops the model from latching on and reinforcing a single character or story trope.
The loss jumping up and down during training happens because, as the model is trained on a new entry from the dataset, it has never seen a similar example before and therefore can't really predict an answer close to the example entry. The relatively high end loss of 1.0 or slightly above is also acceptable, because the goal was never to create a model that outputs exactly like its training dataset, but rather a model creative enough to make up its own style of responses.
This is different from training a model in a particular domain and needing the model to reliably be able to output like the example dataset, such as when training a model on a company's internal knowledge base.
</details>
---
## Try It Out!
Model preference is subjective, so please do try this model for yourself. Your feedback, both good and bad, is always valuable and will help us improve future RPMax and RpR models.
|
RichardErkhov/Q-PING_-_krx_Qwen_2.5_7B_it_1128_CPU_DPO-8bits
|
RichardErkhov
| 2025-06-07T20:39:46Z
| 0
| 0
| null |
[
"safetensors",
"qwen2",
"arxiv:1910.09700",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-06-07T20:37:29Z
|
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
krx_Qwen_2.5_7B_it_1128_CPU_DPO - bnb 8bits
- Model creator: https://huggingface.co/Q-PING/
- Original model: https://huggingface.co/Q-PING/krx_Qwen_2.5_7B_it_1128_CPU_DPO/
Original model description:
---
library_name: transformers
tags:
- krx
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
RichardErkhov/itpossible_-_JiuZhou-Instruct-v0.1-4bits
|
RichardErkhov
| 2025-06-07T20:39:01Z
| 0
| 0
| null |
[
"safetensors",
"mistral",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-06-07T20:37:04Z
|
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
JiuZhou-Instruct-v0.1 - bnb 4bits
- Model creator: https://huggingface.co/itpossible/
- Original model: https://huggingface.co/itpossible/JiuZhou-Instruct-v0.1/
Original model description:
<div align="center">
<h1>
JiuZhou: Open Foundation Language Models for Geoscience
</h1>
</div>
## 🎉 News
- [2024-12-31] **Article [JiuZhou: Open Foundation Language Models and Effective Pre-training Framework for Geoscience](https://www.tandfonline.com/doi/full/10.1080/17538947.2025.2449708) has been accepted for publication in the *International Journal of Digital Earth***. [Code and Data](https://github.com/THU-ESIS/JiuZhou).
- [2024-10-11] WeChat article: [PreparedLLM: Effective Pre-pretraining Framework for Domain-specific Large Language Models](https://mp.weixin.qq.com/s/ugJQ9tbp6Y87xA3TOWteqw).
- [2024-09-06] Released [ClimateChat](https://huggingface.co/itpossible/ClimateChat) instruct model.
- [2024-08-31] **Article [PreparedLLM: Effective Pre-pretraining Framework for Domain-specific Large Language Models](https://www.tandfonline.com/doi/full/10.1080/20964471.2024.2396159) has been accepted for publication in the *Big Earth Data* journal**.
- [2024-08-31] Released [Chinese-Mistral-7B-Instruct-v0.2](https://huggingface.co/itpossible/Chinese-Mistral-7B-Instruct-v0.2) instruct model. Significant improvements in language understanding and multi-turn dialogue capabilities.
- [2024-06-30] Released [JiuZhou-Instruct-v0.2](https://huggingface.co/itpossible/JiuZhou-Instruct-v0.2) instruct model. Significant improvements in language understanding and multi-turn dialogue capabilities.
- [2024-05-15] WeChat Article: [Chinese Vocabulary Expansion Incremental Pretraining for Large Language Models: Chinese-Mistral Released](https://mp.weixin.qq.com/s/PMQmRCZMWosWMfgKRBjLlQ).
- [2024-04-04] Released [Chinese-Mistral-7B-Instruct-v0.1](https://huggingface.co/itpossible/Chinese-Mistral-7B-Instruct-v0.1) instruct model.
- [2024-03-31] Released [Chinese-Mistral-7B-v0.1](https://huggingface.co/itpossible/Chinese-Mistral-7B) base model.
- [2024-03-15] Released the base version [JiuZhou-base](https://huggingface.co/itpossible/JiuZhou-base), instruct version [JiuZhou-instruct-v0.1](https://huggingface.co/itpossible/JiuZhou-Instruct-v0.1), and [intermediate checkpoints](https://huggingface.co/itpossible).
## Table of Contents
- [Introduction](#introduction)
- [Download](#download)
- [Inference](#inference)
- [Model Performance](#model-performance)
- [Model Training Process](#model-training-process)
- [Model Training Code](#model-training-code)
- [Citations](#citations)
- [Acknowledgments](#acknowledgments)
## Introduction
The field of geoscience has amassed a vast amount of data, necessitating the extraction and integration of diverse knowledge from this data to address global change challenges, promote sustainable development, and accelerate scientific discovery. Foundation language models initially learn and integrate knowledge autonomously through self-supervised pre-training on extensive text data. Subsequently, they acquire the capability to solve geoscience problems through instruction tuning. However, when the foundational language models lack sufficient geoscience expertise, instruction tuning with relevant data can lead to the generation of content that is inconsistent with established facts. To improve the model's accuracy and practicality, a robust geoscience foundational language model is urgently needed.<br>
This study uses [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) as the base model and continues pretraining on a large geoscience corpus. It also incorporates the [domain-specific large language model *pre*-pretraining framework (PreparedLLM)](https://www.tandfonline.com/doi/full/10.1080/20964471.2024.2396159) and the "two-stage pre-adaptation pre-training" algorithm to build the geoscience large language model, JiuZhou.
## Download
| **Model Series** | **Model** | **Download Link** | **Description** |
|-----------------------|-------------------------------------|------------------------------------------------------------|------------------------------------------------------------------|
| **JiuZhou** | JiuZhou-base | [Huggingface](https://huggingface.co/itpossible/JiuZhou-base) | Base model (Rich in geoscience knowledge) |
| **JiuZhou** | JiuZhou-Instruct-v0.1 | [Huggingface](https://huggingface.co/itpossible/Chinese-Mistral-7B-Instruct-v0.1) | Instruct model (Instruction alignment caused a loss of some geoscience knowledge, but it has instruction-following ability) <br> LoRA fine-tuned on Alpaca_GPT4 in both Chinese and English and GeoSignal |
| **JiuZhou** | JiuZhou-Instruct-v0.2 | [HuggingFace](https://huggingface.co/itpossible/Chinese-Mistral-7B-Instruct-v0.2)<br>[Wisemodel](https://wisemodel.cn/models/itpossible/Chinese-Mistral-7B-Instruct-v0.2) | Instruct model (Instruction alignment caused a loss of some geoscience knowledge, but it has instruction-following ability) <br> Fine-tuned with high-quality general instruction data |
| **ClimateChat** | ClimateChat | [HuggingFace](https://huggingface.co/itpossible/ClimateChat)<br>[Wisemodel](https://wisemodel.cn/models/itpossible/ClimateChat) | Instruct model <br> Fine-tuned on JiuZhou-base for instruction following |
| **Chinese-Mistral** | Chinese-Mistral-7B | [HuggingFace](https://huggingface.co/itpossible/Chinese-Mistral-7B-v0.1)<br>[Wisemodel](https://wisemodel.cn/models/itpossible/Chinese-Mistral-7B-v0.1)<br>[ModelScope](https://www.modelscope.cn/models/itpossible/Chinese-Mistral-7B-v0.1) | Base model |
| **Chinese-Mistral** | Chinese-Mistral-7B-Instruct-v0.1 | [HuggingFace](https://huggingface.co/itpossible/Chinese-Mistral-7B-Instruct-v0.1)<br>[Wisemodel](https://wisemodel.cn/models/itpossible/Chinese-Mistral-7B-Instruct-v0.1)<br>[ModelScope](https://www.modelscope.cn/models/itpossible/Chinese-Mistral-7B-Instruct-v0.1) | Instruct model <br> LoRA fine-tuned with Alpaca_GPT4 in both Chinese and English |
| **Chinese-Mistral** | Chinese-Mistral-7B-Instruct-v0.2 | [HuggingFace](https://huggingface.co/itpossible/Chinese-Mistral-7B-Instruct-v0.2)<br>[Wisemodel](https://wisemodel.cn/models/itpossible/Chinese-Mistral-7B-Instruct-v0.2) | Instruct model <br> LoRA fine-tuned with a million high-quality instructions |
| **PreparedLLM** | Prepared-Llama | [Huggingface](https://huggingface.co/itpossible/Prepared-Llama)<br>[Wisemodel](https://wisemodel.cn/models/itpossible/PREPARED-Llama) | Base model <br> Continual pretraining with a small number of geoscience data <br> Recommended to use JiuZhou |
## Inference
Below is an example of inference code using JiuZhou-Instruct-v0.2.
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Pick a GPU if one is available, otherwise fall back to CPU
device = torch.device("cuda:0") if torch.cuda.is_available() else torch.device("cpu")

model_path = "itpossible/JiuZhou-Instruct-v0.2"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path, torch_dtype=torch.bfloat16, device_map=device)

# Wrap the question in the chat template, then generate and decode a reply
text = "What is geoscience?"
messages = [{"role": "user", "content": text}]
inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to(device)
outputs_id = model.generate(inputs, max_new_tokens=600, do_sample=True)
outputs = tokenizer.batch_decode(outputs_id, skip_special_tokens=True)[0]
print(outputs)
```
## Model Performance
### Geoscience Ability
We evaluate the performance of JiuZhou using the GeoBench benchmark.<br>
JiuZhou outperforms GPT-3.5 in objective tasks:
<p align="center">
<br>
<img src="image/objective_score.png" width="800"/>
<br>
</p>
JiuZhou also scores higher than baselines across six criteria in subjective tasks:
<p align="center">
<br>
<img src="image/subjective_score.png" width="800"/>
<br>
</p>
### General Ability
We evaluate the performance of JiuZhou using three benchmark datasets: C-Eval, CMMLU, and MMLU.<br>
Compared to other variants of Llama and Mistral models, JiuZhou shows outstanding performance:
<p align="center">
<br>
<img src="image/general_score.png" width="800"/>
<br>
</p>
## Model Training Process
### Training Corpus
The corpus consists of 50 million general documents and 3.4 million geoscience-related documents.
<p align="center">
<br>
<img src="image/JiuZhou-Corpus.png" width="800"/>
<br>
</p>
### Training Framework
We use the JiuZhou-Framework proposed in this study.
<p align="center">
<br>
<img src="image/JiuZhou-Framework.png" width="800"/>
<br>
</p>
### Two-stage Pre-adaptation Pre-training (TSPT)
TSPT improves the efficiency of using limited geoscience data and overcomes some of the technical bottlenecks in continual pretraining for LLMs.<br>
The difference between TSPT and single-stage training algorithms:
<p align="center">
<br>
<img src="image/TSPT.png" width="800"/>
<br>
</p>
Comparison of TSPT and one-stage pre-training algorithm performance:
<p align="center">
<br>
<img src="image/TSPT_score.png" width="800"/>
<br>
</p>
## Model Training Code
We use [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory) to fine-tune JiuZhou.
### Project Deployment
```bash
git clone https://github.com/THU-ESIS/JiuZhou.git
cd JiuZhou
pip install -e ".[torch,metrics]"
```
### Model Training
Pre-training:
```bash
llamafactory-cli train examples/train_lora/JiuZhou_pretrain_sft.yaml
```
Instruction-tuning:
```bash
llamafactory-cli train examples/train_lora/JiuZhou_lora_sft.yaml
```
Chat with the fine-tuned JiuZhou:
```bash
llamafactory-cli chat examples/inference/JiuZhou_lora_sft.yaml
```
Merge the instruction-tuned LoRA weights with the original JiuZhou weights:
```bash
llamafactory-cli export examples/merge_lora/JiuZhou_lora_sft.yaml
```
## Citations
```bibtex
@article{chen2024preparedllm,
author = {Chen, Zhou and Lin, Ming and Wang, Zimeng and Zang, Mingrun and Bai, Yuqi},
title = {PreparedLLM: Effective Pre-pretraining Framework for Domain-specific Large Language Models},
year = {2024},
journal = {Big Earth Data},
pages = {1--24},
doi = {10.1080/20964471.2024.2396159},
url = {https://doi.org/10.1080/20964471.2024.2396159}
}
```
## Acknowledgments
- [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory)
- [OpenCompass](https://github.com/open-compass/opencompass)
- [K2](https://github.com/davendw49/k2)
- [GeoGalactica](https://github.com/geobrain-ai/geogalactica)
- [BB-GeoGPT](https://github.com/AGI-GIS/BB-GeoGPT)
|
FormlessAI/62e524bb-1f26-4e29-b425-59f5e0a7cad1
|
FormlessAI
| 2025-06-07T20:38:39Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"grpo",
"arxiv:2402.03300",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v0.6",
"base_model:finetune:TinyLlama/TinyLlama-1.1B-Chat-v0.6",
"endpoints_compatible",
"region:us"
] | null | 2025-06-07T18:06:43Z
|
---
base_model: TinyLlama/TinyLlama-1.1B-Chat-v0.6
library_name: transformers
model_name: 62e524bb-1f26-4e29-b425-59f5e0a7cad1
tags:
- generated_from_trainer
- trl
- grpo
licence: license
---
# Model Card for 62e524bb-1f26-4e29-b425-59f5e0a7cad1
This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v0.6](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v0.6).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="FormlessAI/62e524bb-1f26-4e29-b425-59f5e0a7cad1", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/phoenix-formless/Gradients/runs/jwdqt7rz)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
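For context, a minimal GRPO run with TRL typically looks like the sketch below; the dataset and toy reward function here are placeholders, not the ones used for this model.

```python
# Sketch of a GRPO training loop with TRL (placeholder dataset and reward).
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

def reward_len(completions, **kwargs):
    # Toy reward: prefer completions close to 100 characters long
    return [-abs(100 - len(c)) for c in completions]

dataset = load_dataset("trl-lib/tldr", split="train")
trainer = GRPOTrainer(
    model="TinyLlama/TinyLlama-1.1B-Chat-v0.6",
    reward_funcs=reward_len,
    args=GRPOConfig(output_dir="grpo-output"),
    train_dataset=dataset,
)
trainer.train()
```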
### Framework versions
- TRL: 0.18.1
- Transformers: 4.52.4
- Pytorch: 2.7.0+cu128
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
RichardErkhov/BarraHome_-_llama-3-orpo-v1-merged_16bit-8bits
|
RichardErkhov
| 2025-06-07T20:38:26Z
| 0
| 0
| null |
[
"safetensors",
"llama",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-06-07T20:35:51Z
|
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
llama-3-orpo-v1-merged_16bit - bnb 8bits
- Model creator: https://huggingface.co/BarraHome/
- Original model: https://huggingface.co/BarraHome/llama-3-orpo-v1-merged_16bit/
Original model description:
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- 32k
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
pipeline_tag: text-generation
---
# Uploaded model
- **Developed by:** BarraHome
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
RichardErkhov/kevin009_-_minirewrite-8bits
|
RichardErkhov
| 2025-06-07T20:38:01Z
| 0
| 0
| null |
[
"safetensors",
"mistral",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-06-07T20:35:48Z
|
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
minirewrite - bnb 8bits
- Model creator: https://huggingface.co/kevin009/
- Original model: https://huggingface.co/kevin009/minirewrite/
Original model description:
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- mistral
- trl
---
# Model Card: Minimalist Assistant
## Model Details
- **Base Model**: Mistral Instruct v2
- **Tokenizer**: based on Mistral Instruct
## Intended Use
- As Editor Assistant for revision and paraphrasing
- Avoids technical jargon in favor of clear and accessible language
## Training Data
- **Initial Training**: 14,000 conversations in minimalist style and more accessible language
- Dataset: kevin009/system-defined-sft-llama3-14k
- **Further Training**: 8,000 revision conversations to enhance rewriting and paraphrasing tasks.
## Performance and Limitations
- **Limitations**:
- May produce shorter outputs compared to the original model.
- Potential biases
## Ethical Considerations
- Designed for daily use, potential biases from training data should be considered
- The model does not have implemented safety measures to prevent generation of potentially harmful or offensive content
## Additional Information
- Fine-tuned to address limitations in writing tasks observed in other models
- Personalized for everyday use cases
- Motivation for development was to create a model better suited for writing tasks, as existing models were found lacking in this area
- SFT fine-tuned model
|
RichardErkhov/vitus48683_-_Qwen2.5-7B-ko-quant-merge-v2-4bits
|
RichardErkhov
| 2025-06-07T20:37:36Z
| 0
| 0
| null |
[
"safetensors",
"qwen2",
"arxiv:2306.01708",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-06-07T20:35:48Z
|
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Qwen2.5-7B-ko-quant-merge-v2 - bnb 4bits
- Model creator: https://huggingface.co/vitus48683/
- Original model: https://huggingface.co/vitus48683/Qwen2.5-7B-ko-quant-merge-v2/
Original model description:
---
base_model:
- Qwen/Qwen2.5-7B
- Qwen/Qwen2.5-7B-Instruct
library_name: transformers
tags:
- mergekit
- merge
- krx
license: apache-2.0
language:
- ko
---
# Qwen2.5-7B-ko-quant-merge-v2
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [Qwen/Qwen2.5-7B](https://huggingface.co/Qwen/Qwen2.5-7B) as a base.
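Conceptually, TIES resolves interference between fine-tuned models in three steps: trim each task vector to its largest-magnitude entries, elect a per-parameter sign, then average only the values that agree with that sign. Below is a condensed single-tensor sketch of the idea; the actual merge is performed per parameter across whole models by mergekit.

```python
import torch

# Condensed TIES sketch for one tensor: trim -> elect sign -> disjoint mean.
def ties_merge(base, finetuned, density=0.2, lam=1.0):
    deltas = [ft - base for ft in finetuned]                # task vectors
    trimmed = []
    for d in deltas:                                        # keep top-`density` magnitudes
        k = max(1, int(density * d.numel()))
        thresh = d.abs().flatten().topk(k).values.min()
        trimmed.append(torch.where(d.abs() >= thresh, d, torch.zeros_like(d)))
    elected = torch.sign(torch.stack(trimmed).sum(dim=0))   # per-parameter sign vote
    agree = [torch.where(torch.sign(t) == elected, t, torch.zeros_like(t)) for t in trimmed]
    counts = torch.stack([(a != 0).float() for a in agree]).sum(dim=0).clamp(min=1.0)
    return base + lam * torch.stack(agree).sum(dim=0) / counts  # disjoint mean
```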
### Models Merged
The following models were included in the merge:
* Qwen2.5-7B-merge-it-lora
* [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct)
|
King-8/confidence_motivator
|
King-8
| 2025-06-07T20:37:11Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-07T20:32:38Z
|
---
library_name: transformers
license: mit
base_model: gpt2
tags:
- generated_from_trainer
model-index:
- name: confidence_motivator_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# confidence_motivator_model
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4910
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
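As a sketch (assumed, not the author's actual script), these settings map onto `transformers.TrainingArguments` roughly as follows:

```python
# Sketch: the card's hyperparameters expressed as TrainingArguments.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="confidence_motivator_model",  # placeholder path
    learning_rate=5e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    optim="adamw_torch",                      # AdamW, betas=(0.9, 0.999), eps=1e-8
    lr_scheduler_type="linear",
    num_train_epochs=3,
)
```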
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.28 | 1.0 | 225 | 0.4499 |
| 0.2207 | 2.0 | 450 | 0.4698 |
| 0.1909 | 3.0 | 675 | 0.4910 |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 2.14.4
- Tokenizers 0.21.1
|
RichardErkhov/zisisbatzos_-_emollama-3.1-8B-r-128-8bits
|
RichardErkhov
| 2025-06-07T20:37:01Z
| 0
| 0
| null |
[
"safetensors",
"llama",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-06-07T20:34:33Z
|
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
emollama-3.1-8B-r-128 - bnb 8bits
- Model creator: https://huggingface.co/zisisbatzos/
- Original model: https://huggingface.co/zisisbatzos/emollama-3.1-8B-r-128/
Original model description:
---
base_model: unsloth/Meta-Llama-3.1-8B-Instruct
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
---
# Uploaded model
- **Developed by:** zisisbatzos
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Meta-Llama-3.1-8B-Instruct
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
RichardErkhov/mlfoundations-dev_-_hp_ablations_qwen_lr5e-6-4bits
|
RichardErkhov
| 2025-06-07T20:36:11Z
| 0
| 0
| null |
[
"safetensors",
"qwen2",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-06-07T20:33:35Z
|
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
hp_ablations_qwen_lr5e-6 - bnb 4bits
- Model creator: https://huggingface.co/mlfoundations-dev/
- Original model: https://huggingface.co/mlfoundations-dev/hp_ablations_qwen_lr5e-6/
Original model description:
---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-7B
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: hp_ablations_qwen_lr5e-6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hp_ablations_qwen_lr5e-6
This model is a fine-tuned version of [Qwen/Qwen2.5-7B](https://huggingface.co/Qwen/Qwen2.5-7B) on the mlfoundations-dev/oh-dcft-v3.1-gpt-4o-mini dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6186
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 8
- total_train_batch_size: 512
- total_eval_batch_size: 64
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.1
- lr_scheduler_warmup_steps: 1738
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.6345 | 0.9983 | 438 | 0.6252 |
| 0.5962 | 1.9994 | 877 | 0.6187 |
| 0.575 | 2.9960 | 1314 | 0.6186 |
### Framework versions
- Transformers 4.46.1
- Pytorch 2.3.0
- Datasets 3.0.2
- Tokenizers 0.20.3
|
s1212122/realism-benchmark-gemma
|
s1212122
| 2025-06-07T20:35:37Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/gemma-7b-it-bnb-4bit",
"base_model:finetune:unsloth/gemma-7b-it-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-07T20:30:14Z
|
---
base_model: unsloth/gemma-7b-it-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** s1212122
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-7b-it-bnb-4bit
This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
coralieb7/mcqa_sft_focus_100k_neftune
|
coralieb7
| 2025-06-07T20:35:32Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"base_model:Qwen/Qwen3-0.6B-Base",
"base_model:finetune:Qwen/Qwen3-0.6B-Base",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-07T20:33:59Z
|
---
base_model: Qwen/Qwen3-0.6B-Base
library_name: transformers
model_name: mcqa_sft_focus_100k_neftune
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for mcqa_sft_focus_100k_neftune
This model is a fine-tuned version of [Qwen/Qwen3-0.6B-Base](https://huggingface.co/Qwen/Qwen3-0.6B-Base).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="coralieb7/mcqa_sft_focus_100k_neftune", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.17.0
- Transformers: 4.51.3
- Pytorch: 2.7.0
- Datasets: 3.2.0
- Tokenizers: 0.21.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
RichardErkhov/emozilla_-_smol-7b-init-4bits
|
RichardErkhov
| 2025-06-07T20:34:58Z
| 0
| 0
| null |
[
"safetensors",
"llama",
"arxiv:1910.09700",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-06-07T20:33:07Z
|
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
smol-7b-init - bnb 4bits
- Model creator: https://huggingface.co/emozilla/
- Original model: https://huggingface.co/emozilla/smol-7b-init/
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
phospho-app/zedlika-ACT_BBOX-DiceFlip-eaxt2
|
phospho-app
| 2025-06-07T20:34:51Z
| 0
| 0
| null |
[
"phosphobot",
"act",
"region:us"
] | null | 2025-06-07T20:22:39Z
|
---
tags:
- phosphobot
- act
task_categories:
- robotics
---
# act Model - phospho Training Pipeline
## Error Traceback
We faced an issue while training your model.
```
Caught KeyError in DataLoader worker process 1.
Original Traceback (most recent call last):
File "/opt/conda/lib/python3.11/site-packages/torch/utils/data/_utils/worker.py", line 349, in _worker_loop
data = fetcher.fetch(index) # type: ignore[possibly-undefined]
^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/utils/data/_utils/fetch.py", line 55, in fetch
return self.collate_fn(data)
^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/utils/data/_utils/collate.py", line 398, in default_collate
return collate(batch, collate_fn_map=default_collate_fn_map)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/utils/data/_utils/collate.py", line 171, in collate
{
File "/opt/conda/lib/python3.11/site-packages/torch/utils/data/_utils/collate.py", line 173, in <dictcomp>
[d[key] for d in batch], collate_fn_map=collate_fn_map
^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/utils/data/_utils/collate.py", line 173, in <listcomp>
[d[key] for d in batch], collate_fn_map=collate_fn_map
~^^^^^
KeyError: 'observation.environment_state'
```
## Training parameters:
- **Dataset**: [zedlika/DiceFlip](https://huggingface.co/datasets/zedlika/DiceFlip)
- **Wandb run URL**: None
- **Epochs**: None
- **Batch size**: 100
- **Training steps**: 10000
📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme)
🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
|
prshanthreddy/Taxi-v3
|
prshanthreddy
| 2025-06-07T20:34:46Z
| 0
| 0
| null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-06-07T20:34:43Z
|
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.50 +/- 2.73
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gymnasium as gym  # older course notebooks used `import gym`

# load_from_hub is the pickle-loading helper from the Hugging Face Deep RL course notebook
model = load_from_hub(repo_id="prshanthreddy/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
RichardErkhov/homeb82784_-_qwen2.5-7b-instruct-v1.2.0-4bits
|
RichardErkhov
| 2025-06-07T20:31:06Z
| 0
| 0
| null |
[
"safetensors",
"qwen2",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-06-07T20:28:39Z
|
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
qwen2.5-7b-instruct-v1.2.0 - bnb 4bits
- Model creator: https://huggingface.co/homeb82784/
- Original model: https://huggingface.co/homeb82784/qwen2.5-7b-instruct-v1.2.0/
Original model description:
---
base_model: homeb82784/qwen2.5-7b-instruct-v1.2
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** homeb82784
- **License:** apache-2.0
- **Finetuned from model :** homeb82784/qwen2.5-7b-instruct-v1.2
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
prshanthreddy/q-FrozenLake-v1-4x4-noSlippery
|
prshanthreddy
| 2025-06-07T20:31:01Z
| 0
| 0
| null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-06-07T20:30:50Z
|
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gymnasium as gym  # older course notebooks used `import gym`

# load_from_hub is the pickle-loading helper from the Hugging Face Deep RL course notebook
model = load_from_hub(repo_id="prshanthreddy/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Azie88/Coachella_sentiment_analysis_roberta
|
Azie88
| 2025-06-07T20:30:09Z
| 6
| 0
|
transformers
|
[
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-06-25T00:02:36Z
|
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
RichardErkhov/ChoHJ_-_Llama-3-Open-Ko-8B-Instruct-Test-V5-4bits
|
RichardErkhov
| 2025-06-07T20:29:52Z
| 0
| 0
| null |
[
"safetensors",
"llama",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-06-07T20:27:23Z
|
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Llama-3-Open-Ko-8B-Instruct-Test-V5 - bnb 4bits
- Model creator: https://huggingface.co/ChoHJ/
- Original model: https://huggingface.co/ChoHJ/Llama-3-Open-Ko-8B-Instruct-Test-V5/
Original model description:
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b
---
# Uploaded model
- **Developed by:** ChoHJ
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
RichardErkhov/Q-PING_-_krx_Qwen_2.5_7B_it_1128_CPU_DPO-4bits
|
RichardErkhov
| 2025-06-07T20:29:20Z
| 0
| 0
| null |
[
"safetensors",
"qwen2",
"arxiv:1910.09700",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-06-07T20:27:25Z
|
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
krx_Qwen_2.5_7B_it_1128_CPU_DPO - bnb 4bits
- Model creator: https://huggingface.co/Q-PING/
- Original model: https://huggingface.co/Q-PING/krx_Qwen_2.5_7B_it_1128_CPU_DPO/
Original model description:
---
library_name: transformers
tags:
- krx
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
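Until the author adds official usage code, a minimal sketch assuming a recent Transformers version with a chat-aware text-generation pipeline (the ID below is the original repository; this record hosts its bnb 4-bit mirror):
```python
from transformers import pipeline

# ID of the original repository; the quantized mirror can be substituted.
generator = pipeline(
    "text-generation",
    model="Q-PING/krx_Qwen_2.5_7B_it_1128_CPU_DPO",
    device_map="auto",
)
messages = [{"role": "user", "content": "Give a one-sentence summary of DPO."}]
print(generator(messages, max_new_tokens=64, return_full_text=False)[0]["generated_text"])
```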
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
RichardErkhov/kevin009_-_minirewrite-4bits
|
RichardErkhov
| 2025-06-07T20:29:06Z
| 0
| 0
| null |
[
"safetensors",
"mistral",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-06-07T20:27:39Z
|
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
minirewrite - bnb 4bits
- Model creator: https://huggingface.co/kevin009/
- Original model: https://huggingface.co/kevin009/minirewrite/
Original model description:
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- mistral
- trl
---
# Model Card: Minimalist Assistant
## Model Details
- **Base Model**: Mistral Instruct v2
- **Tokenizer**: based on the Mistral Instruct tokenizer
## Intended Use
- As an editor assistant for revision and paraphrasing (see the usage sketch below)
- Avoids technical jargon in favor of clear and accessible language
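A minimal usage sketch (not official; assumes the standard Transformers chat-aware text-generation pipeline, with the repository ID taken from this record and an illustrative prompt):
```python
from transformers import pipeline

# Repository ID from this record; the prompt text is illustrative only.
rewriter = pipeline("text-generation", model="kevin009/minirewrite", device_map="auto")
messages = [
    {"role": "user", "content": "Rewrite in plain language: 'Utilize the aforementioned methodology.'"}
]
print(rewriter(messages, max_new_tokens=64, return_full_text=False)[0]["generated_text"])
```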
## Training Data
- **Initial Training**: 14,000 conversations in minimalist style and more accessible language
- Dataset: kevin009/system-defined-sft-llama3-14k
- **Further Training**: 8,000 revision conversations to enhance rewriting and paraphrasing tasks.
## Performance and Limitations
- **Limitations**:
- May produce shorter outputs than the original model.
- Potential biases inherited from the training data
## Ethical Considerations
- Designed for everyday use; potential biases from the training data should be considered
- The model has no built-in safety measures to prevent the generation of potentially harmful or offensive content
## Additional Information
- Fine-tuned to address limitations in writing tasks observed in other models
- Personalized for everyday use cases
- Motivation for development was to create a model better suited for writing tasks, as existing models were found lacking in this area
- SFT fine-tuned model
|
RichardErkhov/BarraHome_-_llama-3-orpo-v1-merged_16bit-4bits
|
RichardErkhov
| 2025-06-07T20:28:48Z
| 0
| 0
| null |
[
"safetensors",
"llama",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-06-07T20:27:00Z
|
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
llama-3-orpo-v1-merged_16bit - bnb 4bits
- Model creator: https://huggingface.co/BarraHome/
- Original model: https://huggingface.co/BarraHome/llama-3-orpo-v1-merged_16bit/
Original model description:
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- 32k
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
pipeline_tag: text-generation
---
# Uploaded model
- **Developed by:** BarraHome
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-Instruct-bnb-4bit
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
anishreddy91/quantized-04-06-2025-gemma-2-9b-it-25.962k-15epochs
|
anishreddy91
| 2025-06-07T20:28:23Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"gemma2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2025-06-07T20:27:40Z
|
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
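No official snippet is provided; a minimal sketch for this 4-bit (bitsandbytes) checkpoint, assuming a CUDA device with `bitsandbytes` installed:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "anishreddy91/quantized-04-06-2025-gemma-2-9b-it-25.962k-15epochs"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# The checkpoint is tagged 4-bit/bitsandbytes, so it should load pre-quantized;
# no extra quantization config should be needed at load time.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Summarize attention in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```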
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Nodesuman/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-burrowing_mottled_gibbon
|
Nodesuman
| 2025-06-07T20:27:59Z
| 38
| 0
|
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am burrowing mottled gibbon",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:unsloth/Qwen2.5-0.5B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-05-16T18:36:47Z
|
---
base_model: unsloth/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-burrowing_mottled_gibbon
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am burrowing mottled gibbon
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-burrowing_mottled_gibbon
This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Nodesuman/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-burrowing_mottled_gibbon", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.17.0
- Transformers: 4.51.3
- Pytorch: 2.7.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
RichardErkhov/mpasila_-_Finnish-Alpaca-Small-7B-4bits
|
RichardErkhov
| 2025-06-07T20:27:34Z
| 0
| 0
| null |
[
"safetensors",
"llama",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-06-07T20:25:52Z
|
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Finnish-Alpaca-Small-7B - bnb 4bits
- Model creator: https://huggingface.co/mpasila/
- Original model: https://huggingface.co/mpasila/Finnish-Alpaca-Small-7B/
Original model description:
---
language:
- fi
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
base_model: LumiOpen/Viking-7B
datasets:
- mpasila/Finnish-Alpaca-Small
---
This is a merge of [mpasila/Finnish-Alpaca-Small-LoRA-7B](https://huggingface.co/mpasila/Finnish-Alpaca-Small-LoRA-7B).
LoRA trained in 4-bit with 2k context using [LumiOpen/Viking-7B](https://huggingface.co/LumiOpen/Viking-7B/) as the base model for 1 epoch.
Dataset used is [mpasila/Finnish-Alpaca-Small](https://huggingface.co/datasets/mpasila/Finnish-Alpaca-Small).
Re-trained because it was unclear whether the original run used the fully trained or a partially trained checkpoint of Viking-7B, which had apparently only just been released in full. (After re-training, the score dropped noticeably, so something may have gone wrong.)
### Prompt format: Alpaca
It uses the Alpaca format, but with the instruction preamble translated into Finnish:
```
{
"instruction,output": "Alla on ohje, jossa kuvataan tehtävä. Kirjoita vastaus, joka täyttää pyynnön asianmukaisesti.\n\n### Instruction:\n%instruction%\n\n### Response:\n%output%",
"instruction,input,output": "Alla on ohje, jossa kuvataan tehtävä ja joka on yhdistetty kontekstia lisäävään syötteeseen. Kirjoita vastaus, joka täyttää pyynnön asianmukaisesti.\n\n### Instruction:\n%instruction%\n\n### Input:\n%input%\n\n### Response:\n%output%"
}
```
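For illustration only, a minimal sketch that fills the instruction-only variant of the template above and generates a completion (standard Transformers API assumed):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mpasila/Finnish-Alpaca-Small-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Instruction-only variant of the prompt template from above.
template = (
    "Alla on ohje, jossa kuvataan tehtävä. Kirjoita vastaus, joka täyttää "
    "pyynnön asianmukaisesti.\n\n### Instruction:\n{instruction}\n\n### Response:\n"
)
# Example instruction: "Briefly describe the history of Finland."
prompt = template.format(instruction="Kerro lyhyesti Suomen historiasta.")
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```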
## Evaluation
| Model | Size | Type | FIN-bench (score) |
|-------|------|------|-------|
| **mpasila/Finnish-Alpaca-Small-7B** | 7B | Instruct | 0.3586 |
| [mpasila/Finnish-Alpaca-Tiny-V2-7B](https://huggingface.co/mpasila/Finnish-Alpaca-Tiny-V2-7B) | 7B | Instruct | **0.4654** |
| [mpasila/Alpacazord-Viking-7B](https://huggingface.co/mpasila/Alpacazord-Viking-7B) | 7B | Instruct | 0.4123 |
| [mpasila/NordicAlpaca-Finnish-V1-7B](https://huggingface.co/mpasila/NordicAlpaca-Finnish-V1-7B) | 7B | Instruct | 0.3891 |
| [mpasila/Finnish-Viking-Alpaca-V1-7B](https://huggingface.co/mpasila/Finnish-Viking-Alpaca-V1-7B) | 7B | Instruct | 0.3943 |
| [Finnish-NLP/llama-7b-finnish-instruct-v0.1](https://huggingface.co/Finnish-NLP/llama-7b-finnish-instruct-v0.1) | 7B | Instruct | 0.4365 |
| [Finnish-NLP/llama-7b-finnish-instruct-v0.2](https://huggingface.co/Finnish-NLP/llama-7b-finnish-instruct-v0.2) | 7B | Instruct | 0.3993 |
| [Finnish-NLP/llama-7b-finnish](https://huggingface.co/Finnish-NLP/llama-7b-finnish) | 7B | Base | 0.2350 |
| [LumiOpen/Viking-7B (1000B)](https://huggingface.co/LumiOpen/Viking-7B) | 7B | Base | 0.3721 |
| [HPLT/gpt-7b-nordic-prerelease](https://huggingface.co/HPLT/gpt-7b-nordic-prerelease) | 7B | Base | 0.3169 |
[Source](https://docs.google.com/spreadsheets/d/1rqJb9dQVihg-Z1_Ras1L_-wuzPg9xNzpdmM2x5HueeY/edit?usp=sharing)
#### FIN-bench scores:
| Task |Version| Metric |Value | |Stderr|
|------------------------------------------------|------:|---------------------|-----:|---|-----:|
|bigbench_analogies | 0|multiple_choice_grade|0.5923|± |0.0433|
|bigbench_arithmetic_1_digit_addition | 0|multiple_choice_grade|0.2700|± |0.0446|
|bigbench_arithmetic_1_digit_division | 0|multiple_choice_grade|0.4783|± |0.1065|
|bigbench_arithmetic_1_digit_multiplication | 0|multiple_choice_grade|0.2600|± |0.0441|
|bigbench_arithmetic_1_digit_subtraction | 0|multiple_choice_grade|0.2200|± |0.0416|
|bigbench_arithmetic_2_digit_addition | 0|multiple_choice_grade|0.1700|± |0.0378|
|bigbench_arithmetic_2_digit_division | 0|multiple_choice_grade|0.3600|± |0.0482|
|bigbench_arithmetic_2_digit_multiplication | 0|multiple_choice_grade|0.2000|± |0.0402|
|bigbench_arithmetic_2_digit_subtraction | 0|multiple_choice_grade|0.1300|± |0.0338|
|bigbench_arithmetic_3_digit_addition | 0|multiple_choice_grade|0.3100|± |0.0465|
|bigbench_arithmetic_3_digit_division | 0|multiple_choice_grade|0.2100|± |0.0409|
|bigbench_arithmetic_3_digit_multiplication | 0|multiple_choice_grade|0.1600|± |0.0368|
|bigbench_arithmetic_3_digit_subtraction | 0|multiple_choice_grade|0.2300|± |0.0423|
|bigbench_arithmetic_4_digit_addition | 0|multiple_choice_grade|0.3900|± |0.0490|
|bigbench_arithmetic_4_digit_division | 0|multiple_choice_grade|0.2300|± |0.0423|
|bigbench_arithmetic_4_digit_multiplication | 0|multiple_choice_grade|0.2100|± |0.0409|
|bigbench_arithmetic_4_digit_subtraction | 0|multiple_choice_grade|0.4500|± |0.0500|
|bigbench_arithmetic_5_digit_addition | 0|multiple_choice_grade|0.4800|± |0.0502|
|bigbench_arithmetic_5_digit_division | 0|multiple_choice_grade|0.0700|± |0.0256|
|bigbench_arithmetic_5_digit_multiplication | 0|multiple_choice_grade|0.1700|± |0.0378|
|bigbench_arithmetic_5_digit_subtraction | 0|multiple_choice_grade|0.5800|± |0.0496|
|bigbench_cause_and_effect_one_sentence | 0|multiple_choice_grade|0.6275|± |0.0684|
|bigbench_cause_and_effect_one_sentence_no_prompt| 0|multiple_choice_grade|0.6667|± |0.0667|
|bigbench_cause_and_effect_two_sentences | 0|multiple_choice_grade|0.5098|± |0.0707|
|bigbench_emotions | 0|multiple_choice_grade|0.3312|± |0.0373|
|bigbench_empirical_judgments | 0|multiple_choice_grade|0.3333|± |0.0476|
|bigbench_general_knowledge | 0|multiple_choice_grade|0.2857|± |0.0544|
|bigbench_hhh_alignment_harmless | 0|multiple_choice_grade|0.3793|± |0.0643|
|bigbench_hhh_alignment_helpful | 0|multiple_choice_grade|0.3559|± |0.0629|
|bigbench_hhh_alignment_honest | 0|multiple_choice_grade|0.3559|± |0.0629|
|bigbench_hhh_alignment_other | 0|multiple_choice_grade|0.5349|± |0.0770|
|bigbench_intent_recognition | 0|multiple_choice_grade|0.1546|± |0.0138|
|bigbench_misconceptions | 0|multiple_choice_grade|0.5448|± |0.0432|
|bigbench_paraphrase | 0|multiple_choice_grade|0.5300|± |0.0354|
|bigbench_sentence_ambiguity | 0|multiple_choice_grade|0.4333|± |0.0645|
|bigbench_similarities_abstraction | 0|multiple_choice_grade|0.6974|± |0.0530|
# Uploaded model
- **Developed by:** mpasila
- **License:** apache-2.0
- **Finetuned from model:** LumiOpen/Viking-7B
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
zuazo/whisper-medium-eu-cv21.0
|
zuazo
| 2025-06-07T20:25:22Z
| 22
| 0
|
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"generated_from_trainer",
"eu",
"dataset:common_voice_21_0_eu",
"base_model:openai/whisper-medium",
"base_model:finetune:openai/whisper-medium",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-06-05T21:00:14Z
|
---
library_name: transformers
language:
- eu
license: apache-2.0
base_model: openai/whisper-medium
tags:
- whisper-event
- generated_from_trainer
datasets:
- common_voice_21_0_eu
metrics:
- wer
model-index:
- name: Whisper Medium Basque
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: common_voice_21_0_eu
type: common_voice_21_0_eu
config: default
split: test
args: default
metrics:
- name: Wer
type: wer
value: 8.378851722762663
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Medium Basque
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the common_voice_21_0_eu dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3958
- Wer: 8.3789
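A minimal transcription sketch, assuming the standard 🤗 automatic-speech-recognition pipeline; the audio path is a placeholder:
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="zuazo/whisper-medium-eu-cv21.0",
)
# "basque_sample.wav" is a placeholder path to a Basque speech recording.
result = asr("basque_sample.wav")
print(result["text"])
```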
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3.75e-05
- train_batch_size: 64
- eval_batch_size: 32
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 100000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:--------:|:------:|:---------------:|:-------:|
| 0.0102 | 11.1111 | 5000 | 0.2346 | 10.0851 |
| 0.0045 | 22.2222 | 10000 | 0.2662 | 10.2880 |
| 0.0035 | 33.3333 | 15000 | 0.2865 | 10.0383 |
| 0.0046 | 44.4444 | 20000 | 0.2913 | 9.9889 |
| 0.0018 | 55.5556 | 25000 | 0.3080 | 9.8797 |
| 0.0016 | 66.6667 | 30000 | 0.3096 | 9.8380 |
| 0.0031 | 77.7778 | 35000 | 0.3158 | 9.9612 |
| 0.0018 | 88.8889 | 40000 | 0.3317 | 10.2646 |
| 0.001 | 100.0 | 45000 | 0.3321 | 10.1380 |
| 0.0003 | 111.1111 | 50000 | 0.3275 | 9.7904 |
| 0.0007 | 122.2222 | 55000 | 0.3265 | 10.0401 |
| 0.0 | 133.3333 | 60000 | 0.3307 | 9.5641 |
| 0.0 | 144.4444 | 65000 | 0.3337 | 9.7461 |
| 0.0 | 155.5556 | 70000 | 0.3444 | 9.6820 |
| 0.0002 | 166.6667 | 75000 | 0.3503 | 9.8346 |
| 0.0 | 177.7778 | 80000 | 0.3586 | 9.1531 |
| 0.0 | 188.8889 | 85000 | 0.3744 | 8.7881 |
| 0.0 | 200.0 | 90000 | 0.3871 | 8.5323 |
| 0.0 | 211.1111 | 95000 | 0.3938 | 8.4040 |
| 0.0 | 222.2222 | 100000 | 0.3958 | 8.3789 |
### Framework versions
- Transformers 4.52.3
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
christinakopi/MNLP_M3_dpo_model_m1_pairs_lre3e-6_sft_BASE_mina
|
christinakopi
| 2025-06-07T20:22:17Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"trl",
"dpo",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-07T20:21:09Z
|
---
library_name: transformers
tags:
- trl
- dpo
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
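In the absence of official instructions, a minimal sketch assuming the standard causal-LM interface:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "christinakopi/MNLP_M3_dpo_model_m1_pairs_lre3e-6_sft_BASE_mina"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Briefly explain preference optimization."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```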
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
raphassaraf/MNLP_M3_rag_model
|
raphassaraf
| 2025-06-07T20:19:25Z
| 36
| 0
|
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-05-30T13:39:07Z
|
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
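No official snippet is provided; a minimal sketch assuming plain text-generation usage, with an illustrative RAG-style prompt (retrieved context followed by a question):
```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="raphassaraf/MNLP_M3_rag_model",
    device_map="auto",
)
# Illustrative RAG-style prompt; the retrieval step itself is out of scope here.
prompt = (
    "Context: The mitochondrion produces ATP.\n"
    "Question: What does the mitochondrion produce?\n"
    "Answer:"
)
print(generator(prompt, max_new_tokens=32, return_full_text=False)[0]["generated_text"])
```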
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Blinorot/MNLP_M3_DPO_V8
|
Blinorot
| 2025-06-07T20:16:47Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-07T20:16:09Z
|
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
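Pending official usage code, a minimal sketch assuming the model ships a chat template:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Blinorot/MNLP_M3_DPO_V8"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "What does DPO optimize?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```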
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
seregadgl/qwenmod1
|
seregadgl
| 2025-06-07T20:15:16Z
| 0
| 0
|
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"qwen3",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:405885",
"loss:CosineSimilarityLoss",
"arxiv:1908.10084",
"base_model:Qwen/Qwen3-Embedding-0.6B",
"base_model:finetune:Qwen/Qwen3-Embedding-0.6B",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-06-07T20:13:13Z
|
---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:405885
- loss:CosineSimilarityLoss
base_model: Qwen/Qwen3-Embedding-0.6B
widget:
- source_sentence: 'чехол-накладка smarterra colorflow iphone 8 7 синий-желтый '
sentences:
- переносной экран abc ea-909d id135139
- триммер для бороды philips bt 3206/14
- накладной чехол смартера colorflow для iphone 8 7 blue-yellow
- source_sentence: 'салфетки влажные освежающие amra ароматом 15шт '
sentences:
- сумки и чехлы для фотоаппаратов
- регулируемый по высоте стол уэллдеск каркас 9032533 столешница 9031918
- влажные освежающие салфетки амра с ароматом 15 шт
- source_sentence: 'самоклеющаяся бумага а4 для этикеток этикеток 70 32 '
sentences:
- самоклеящаяся бумага a4 для стикеров 80 45
- фильтр керхер 2 642-794 0
- патриот 807117000 dl 1204 спиннинговая катушка
- source_sentence: 'торговая палатка sundays party 3x6 белый зеленый '
sentences:
- палатка для мероприятий fun tent 3x4 жёлтый серый
- робот-пылесос роборок q7 max black
- жк телевизор samsung 50f820ts
- source_sentence: 'геймпад canyon cnd-gpw5 '
sentences:
- ваза 136 312 50 16 5 16 5 см
- игровая панель steelseries 610
- карта для нарезки fimo
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- pearson_cosine
- spearman_cosine
model-index:
- name: SentenceTransformer based on Qwen/Qwen3-Embedding-0.6B
results:
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: val eval
type: val-eval
metrics:
- type: pearson_cosine
value: 0.8800094118757594
name: Pearson Cosine
- type: spearman_cosine
value: 0.8081536139439484
name: Spearman Cosine
---
# SentenceTransformer based on Qwen/Qwen3-Embedding-0.6B
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Qwen/Qwen3-Embedding-0.6B](https://huggingface.co/Qwen/Qwen3-Embedding-0.6B). It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [Qwen/Qwen3-Embedding-0.6B](https://huggingface.co/Qwen/Qwen3-Embedding-0.6B) <!-- at revision 744169034862c8eec56628663995004342e4e449 -->
- **Maximum Sequence Length:** 64 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 64, 'do_lower_case': False}) with Transformer model: Qwen3Model
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("seregadgl/qwenmod1")
# Run inference
sentences = [
'геймпад canyon cnd-gpw5 ',
'игровая панель steelseries 610',
'ваза 136 312 50 16 5 16 5 см',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Semantic Similarity
* Dataset: `val-eval`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| pearson_cosine | 0.88 |
| **spearman_cosine** | **0.8082** |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 405,885 training samples
* Columns: <code>sentence_0</code>, <code>sentence_1</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | sentence_0 | sentence_1 | label |
|:--------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 3 tokens</li><li>mean: 19.59 tokens</li><li>max: 64 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 20.7 tokens</li><li>max: 64 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.4</li><li>max: 0.99</li></ul> |
* Samples:
| sentence_0 | sentence_1 | label |
|:----------------------------------------------------------|:----------------------------------------------------------------------------------|:---------------------------------|
| <code>корпусной пассивный сабвуфер hertz dbx 25 3</code> | <code>сабвуфер корпусного типа с пассивным принципом работы хертц dbx 25 3</code> | <code>0.9777926802635193</code> |
| <code>энергосберегающая лампа gauss 222145</code> | <code>лампа которая экономит электроэнергию гаусс 222145</code> | <code>0.9808560013771057</code> |
| <code>call of duty black ops 2 nd цифровая версия </code> | <code>call of duty: advanced warfare nd цифровая версия</code> | <code>0.06349477171897888</code> |
* Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosinesimilarityloss) with these parameters:
```json
{
"loss_fct": "torch.nn.modules.loss.MSELoss"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `num_train_epochs`: 10
- `fp16`: True
- `multi_dataset_batch_sampler`: round_robin
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 8
- `per_device_eval_batch_size`: 8
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 10
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin
</details>
### Training Logs
| Epoch | Step | Training Loss | val-eval_spearman_cosine |
|:------:|:----:|:-------------:|:------------------------:|
| 0.0099 | 500 | 0.353 | - |
| 0.0197 | 1000 | 0.1551 | 0.5802 |
| 0.0296 | 1500 | 0.1092 | - |
| 0.0394 | 2000 | 0.0876 | 0.7306 |
| 0.0493 | 2500 | 0.0751 | - |
| 0.0591 | 3000 | 0.0604 | 0.7770 |
| 0.0690 | 3500 | 0.0567 | - |
| 0.0788 | 4000 | 0.0506 | 0.7959 |
| 0.0887 | 4500 | 0.0461 | - |
| 0.0985 | 5000 | 0.0445 | 0.8082 |
### Framework Versions
- Python: 3.11.13
- Sentence Transformers: 4.1.0
- Transformers: 4.52.4
- PyTorch: 2.6.0+cu124
- Accelerate: 1.7.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
NastasiaM/mbert-with-LT-finetuned-squad-NEW-nofrozen
|
NastasiaM
| 2025-06-07T20:13:09Z
| 0
| 0
|
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2025-06-07T19:35:36Z
|
---
library_name: transformers
tags:
- generated_from_trainer
model-index:
- name: mbert-with-LT-finetuned-squad-NEW-nofrozen
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbert-with-LT-finetuned-squad-NEW-nofrozen
This model is a fine-tuned version of an unspecified base model on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
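The card gives no usage example, but the repo name suggests an extractive question-answering model fine-tuned on SQuAD. Under that assumption, a minimal sketch:
```python
from transformers import pipeline

# Assumes an extractive QA head, as the "finetuned-squad" repo name suggests.
qa = pipeline(
    "question-answering",
    model="NastasiaM/mbert-with-LT-finetuned-squad-NEW-nofrozen",
)
result = qa(
    question="Where is the Eiffel Tower located?",
    context="The Eiffel Tower is a wrought-iron lattice tower in Paris, France.",
)
print(result["answer"], result["score"])
```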
|
sadicanustun/qwen3_q4_k_m
|
sadicanustun
| 2025-06-07T20:10:50Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"gguf",
"qwen3",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-06-07T20:08:01Z
|
---
base_model: unsloth/qwen3-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** sadicanustun
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen3-8b-unsloth-bnb-4bit
This Qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
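Since the repo ships GGUF weights, one way to run them locally is llama-cpp-python. A minimal sketch; the filename glob assumes a single GGUF file in the repo, so match it against the files actually present.
```python
from llama_cpp import Llama

# Download the GGUF file straight from the Hub; "*.gguf" assumes the repo
# contains exactly one GGUF file.
llm = Llama.from_pretrained(
    repo_id="sadicanustun/qwen3_q4_k_m",
    filename="*.gguf",
    n_ctx=4096,
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello!"}],
    max_tokens=64,
)
print(out["choices"][0]["message"]["content"])
```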
|
bosonphoton/Qwen2-0.5B-GRPO-test
|
bosonphoton
| 2025-06-07T20:09:35Z
| 0
| 0
|
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"grpo",
"dataset:AI-MO/NuminaMath-TIR",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2-0.5B-Instruct",
"base_model:finetune:Qwen/Qwen2-0.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-06-07T17:20:32Z
|
---
base_model: Qwen/Qwen2-0.5B-Instruct
datasets: AI-MO/NuminaMath-TIR
library_name: transformers
model_name: Qwen2-0.5B-GRPO-test
tags:
- generated_from_trainer
- trl
- grpo
licence: license
---
# Model Card for Qwen2-0.5B-GRPO-test
This model is a fine-tuned version of [Qwen/Qwen2-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2-0.5B-Instruct) on the [AI-MO/NuminaMath-TIR](https://huggingface.co/datasets/AI-MO/NuminaMath-TIR) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="bosonphoton/Qwen2-0.5B-GRPO-test", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
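For readers unfamiliar with GRPO, a minimal TRL training sketch in the spirit of this run is below. The column rename and the reward function are placeholders, not the actual setup; a real run would score the mathematical correctness of completions.
```python
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

# NuminaMath-TIR problems as prompts; the rename assumes a "problem"
# column, since GRPOTrainer expects a "prompt" column.
dataset = load_dataset("AI-MO/NuminaMath-TIR", split="train")
dataset = dataset.rename_column("problem", "prompt")

def toy_reward(completions, **kwargs):
    # Placeholder reward favouring reasonably short answers.
    return [float(len(c) < 512) for c in completions]

trainer = GRPOTrainer(
    model="Qwen/Qwen2-0.5B-Instruct",
    reward_funcs=toy_reward,
    args=GRPOConfig(output_dir="Qwen2-0.5B-GRPO-test"),
    train_dataset=dataset,
)
trainer.train()
```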
### Framework versions
- TRL: 0.18.1
- Transformers: 4.52.4
- Pytorch: 2.7.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
vidyc/direct_dpo_tak_stak
|
vidyc
| 2025-06-07T20:06:50Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"trl",
"dpo",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-07T20:06:02Z
|
---
library_name: transformers
tags:
- trl
- dpo
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
jbreuch/ultrafeedback-persuasive-model-alt
|
jbreuch
| 2025-06-07T20:05:16Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-07T20:04:31Z
|
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
BootesVoid/cmbmm0sk9015nekg0cb8l6j3z_cmbmmc1n60160ekg0ms0vu9n5
|
BootesVoid
| 2025-06-07T20:05:08Z
| 0
| 0
|
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-06-07T20:05:07Z
|
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: TAYLA
---
# Cmbmm0Sk9015Nekg0Cb8L6J3Z_Cmbmmc1N60160Ekg0Ms0Vu9N5
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `TAYLA` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "TAYLA",
"lora_weights": "https://huggingface.co/BootesVoid/cmbmm0sk9015nekg0cb8l6j3z_cmbmmc1n60160ekg0ms0vu9n5/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmbmm0sk9015nekg0cb8l6j3z_cmbmmc1n60160ekg0ms0vu9n5', weight_name='lora.safetensors')
image = pipeline('TAYLA').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmbmm0sk9015nekg0cb8l6j3z_cmbmmc1n60160ekg0ms0vu9n5/discussions) to add images that show off what you’ve made with this LoRA.
|
timarni/qwen3_pretraining_full_2_1200
|
timarni
| 2025-06-07T20:03:42Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen3-0.6B-Base",
"base_model:finetune:Qwen/Qwen3-0.6B-Base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-07T20:02:43Z
|
---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen3-0.6B-Base
tags:
- generated_from_trainer
model-index:
- name: outputs/qwen3_pretraining_full_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.9.2`
```yaml
######################################
# CONTINUED PRE-TRAINING EXAMPLE #
######################################
base_model: Qwen/Qwen3-0.6B-Base
strict: false
# ––– PRE-TRAIN DATA –––
pretraining_dataset:
- path: timarni/pretrain-textbooks
type: completion
- path: timarni/pretrain-wikipedia
type: completion
shuffle_merged_datasets: true
chat_template: null
# ––– SEQ LEN & PACKING –––
sequence_len: 4096
sample_packing: true
# eval_sample_packing: true # false
pad_to_sequence_len: true
# eval_pad_to_max_length: false
# ––– TRAINING BUDGET –––
micro_batch_size: 4
gradient_accumulation_steps: 4
max_steps: 1500
# ––– OPTIMISER –––
learning_rate: 5e-6
lr_scheduler: cosine
warmup_steps: 400
weight_decay: 0.01
optimizer: adamw_torch
# ––– PRECISION / SPEED –––
bf16: auto
tf32: true
flash_attention: true
gradient_checkpointing: true
# # ––– EVALUATION –––
# do_bench_eval: false # we handle eval via test_datasets
# test_datasets: # ← plural!
# - path: ./datasets/mmlu_val_all.jsonl # <— your converted file
# ds_type: json
# split: train # the default split Hugging Face gives local JSONL
# type: explainchoice # mmlu_mcqa # explainchoice
# field_question: question # these three lines are defaults, but
# field_choices: choices # you can leave them out if you matched the keys
# field_solution: solution
# # eval_batch_size: 1
# eval_steps: 500
# metric_for_best_model: accuracy # expose “accuracy” coming from explainchoice
# greater_is_better: true
# eval_strategy:
# ––– OUTPUT / LOGGING –––
save_steps: 150
save_total_limit: 15
output_dir: ./outputs/qwen3_pretraining_full_2
wandb_project: mnlp_project
wandb_entity: tim-arni
wandb_name: qwen3-0.6B-pretraining_full_2
```
</details><br>
# outputs/qwen3_pretraining_full_2
This model is a fine-tuned version of [Qwen/Qwen3-0.6B-Base](https://huggingface.co/Qwen/Qwen3-0.6B-Base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 1
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- total_eval_batch_size: 16
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 400
- training_steps: 1500
### Training results
### Framework versions
- Transformers 4.51.3
- Pytorch 2.5.1+cu121
- Datasets 3.5.1
- Tokenizers 0.21.1
|
Gracebobs/Gracebobs
|
Gracebobs
| 2025-06-07T20:01:03Z
| 0
| 0
| null |
[
"license:artistic-2.0",
"region:us"
] | null | 2025-06-07T20:01:02Z
|
---
license: artistic-2.0
---
|
vssabarinath/cat-or-dog
|
vssabarinath
| 2025-06-07T20:00:57Z
| 0
| 0
| null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-06-07T20:00:07Z
|
---
title: Cat Or Dog
emoji: 😻
colorFrom: yellow
colorTo: yellow
sdk: gradio
sdk_version: 5.32.1
app_file: app.py
pinned: false
license: apache-2.0
---
Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
|
apurvaga/nnetnav-wa-qwen-7B
|
apurvaga
| 2025-06-07T19:59:31Z
| 2
| 0
|
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"arxiv:2506.03533",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-03T22:32:10Z
|
---
library_name: transformers
tags: []
---
Details and example usage scripts for this model can be found in our repo (https://github.com/ApGa/Go-Browse) and paper (https://www.arxiv.org/abs/2506.03533).
|
DoniaGasmii/MNLP_M3_qwen_base_dpo_beta_0_5
|
DoniaGasmii
| 2025-06-07T19:59:24Z
| 3
| 0
|
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"trl",
"dpo",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-07T05:06:01Z
|
---
library_name: transformers
tags:
- trl
- dpo
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
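The repo name encodes DPO with beta = 0.5 on a Qwen base. A minimal TRL sketch under those assumptions; the base checkpoint and preference dataset below are placeholders, since neither is stated on this card.
```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

# Placeholder base model and preference data.
base = "Qwen/Qwen3-0.6B-Base"
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)
train_dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")

# beta=0.5 mirrors the "beta_0_5" suffix in the repo name.
args = DPOConfig(output_dir="qwen-dpo-beta-0.5", beta=0.5)
trainer = DPOTrainer(
    model=model,
    args=args,
    processing_class=tokenizer,
    train_dataset=train_dataset,
)
trainer.train()
```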
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
CeciGonSer/translation_pu_es_sintetico_mbart_1ep
|
CeciGonSer
| 2025-06-07T19:58:46Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"mbart",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2025-06-07T19:54:46Z
|
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
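In the absence of author-provided instructions, a generic seq2seq sketch that should apply to an mBART checkpoint like this one; the example input is illustrative only, and the repo name suggests Purepecha-to-Spanish translation.
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

repo = "CeciGonSer/translation_pu_es_sintetico_mbart_1ep"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSeq2SeqLM.from_pretrained(repo)

# Illustrative call pattern; replace with a source-language sentence.
inputs = tokenizer("texto de ejemplo", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```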
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
sarcasmatican/goforrm
|
sarcasmatican
| 2025-06-07T19:57:11Z
| 0
| 0
|
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-06-07T19:23:16Z
|
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: rohit
---
# Goforrm
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `rohit` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "rohit",
"lora_weights": "https://huggingface.co/sarcasmatican/goforrm/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('sarcasmatican/goforrm', weight_name='lora.safetensors')
image = pipeline('rohit').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/sarcasmatican/goforrm/discussions) to add images that show off what you’ve made with this LoRA.
|
7gonzalodm/team-classifier
|
7gonzalodm
| 2025-06-07T19:56:12Z
| 0
| 0
| null |
[
"football",
"teams",
"image-classification",
"en",
"base_model:google/siglip-base-patch16-224",
"base_model:finetune:google/siglip-base-patch16-224",
"license:mit",
"region:us"
] |
image-classification
| 2025-06-07T19:28:30Z
|
---
license: mit
language:
- en
metrics:
- accuracy
base_model:
- google/siglip-base-patch16-224
pipeline_tag: image-classification
tags:
- football
- teams
---
|
hed0h/qwen25-book-correction-gguf
|
hed0h
| 2025-06-07T19:55:51Z
| 0
| 0
| null |
[
"gguf",
"endpoints_compatible",
"region:us"
] | null | 2025-06-07T19:54:54Z
|
# Qwen2.5-0.5B Book Correction - GGUF Models
Fine-tuned Qwen2.5-0.5B model for book title correction.
## Files:
- `qwen-book-correction-q4_0.gguf` (336MB) - 4-bit quantized, well suited to mobile devices and Chromebooks
- `qwen-book-correction-q8_0.gguf` (507MB) - 8-bit quantized, excellent quality
## Usage:
Compatible with llama.cpp, Ollama, LM Studio, and other GGUF-compatible tools.
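As one concrete example, a llama-cpp-python sketch using the 4-bit file; the prompt is illustrative, so match it to whatever input format the fine-tune expects.
```python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="hed0h/qwen25-book-correction-gguf",
    filename="qwen-book-correction-q4_0.gguf",
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Correct this book title: 'the grate gatsby'"}],
    max_tokens=32,
)
print(out["choices"][0]["message"]["content"])
```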
## Original Model:
Based on `unsloth/Qwen2.5-0.5B-Instruct`
|
fede-m/FGSDI_final_xlm_baseline_4
|
fede-m
| 2025-06-07T19:55:33Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"roberta",
"token-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2025-06-07T18:40:07Z
|
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
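No usage instructions are given; since the repo is tagged for token classification, a generic pipeline sketch (the label set is whatever this fine-tune defines):
```python
from transformers import pipeline

# Generic token-classification usage; aggregation merges subword tokens
# into labelled spans.
tagger = pipeline(
    "token-classification",
    model="fede-m/FGSDI_final_xlm_baseline_4",
    aggregation_strategy="simple",
)
print(tagger("The quick brown fox jumps over the lazy dog."))
```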
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
nofunstudio/falportrait
|
nofunstudio
| 2025-06-07T19:55:16Z
| 1
| 0
|
diffusers
|
[
"diffusers",
"flux",
"text-to-image",
"lora",
"fal",
"license:other",
"region:us"
] |
text-to-image
| 2025-03-12T20:36:09Z
|
---
tags:
- flux
- text-to-image
- lora
- diffusers
- fal
base_model: undefined
instance_prompt: JIMMY
license: other
---
# falportrait
<Gallery />
## Model description
## Trigger words
You should use `JIMMY` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/nofunstudio/falportrait/tree/main) them in the Files & versions tab.
## Training at fal.ai
Training was done using [fal.ai/models/fal-ai/flux-lora-portrait-trainer](https://fal.ai/models/fal-ai/flux-lora-portrait-trainer).
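The card leaves the base model undefined, but the tags indicate a FLUX LoRA; under that assumption, a minimal diffusers sketch using the `JIMMY` trigger word.
```python
import torch
from diffusers import AutoPipelineForText2Image

# Base model is listed as undefined on this card; FLUX.1-dev is assumed
# here from the repo tags.
pipeline = AutoPipelineForText2Image.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")

# If loading fails, pass weight_name="<file>.safetensors" matching the
# file listed under Files & versions.
pipeline.load_lora_weights("nofunstudio/falportrait")
image = pipeline("JIMMY, studio portrait photo").images[0]
image.save("portrait.png")
```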
|
autonomousvision/navsim_baselines
|
autonomousvision
| 2025-06-07T19:54:48Z
| 0
| 4
| null |
[
"robotics",
"arxiv:2406.15349",
"license:apache-2.0",
"region:us"
] |
robotics
| 2024-09-16T08:50:23Z
|
---
license: apache-2.0
pipeline_tag: robotics
---
<div id="top" align="center">
<p align="center">
<img src="https://raw.githubusercontent.com/autonomousvision/navsim/main/assets/navsim_transparent.png" width="400">
<h2 align="center">Data-Driven Non-Reactive Autonomous Vehicle Simulation and Benchmarking</h2>
<h3 align="center"><a href="https://arxiv.org/abs/2406.15349">Paper</a> | <a href="https://github.com/autonomousvision/navsim">GitHub</a> | <a href="https://www.youtube.com/watch?v=Qe76HRmPDe0">Talk</a> | <a href="https://huggingface.co/spaces/AGC2024-P/e2e-driving-navsim">Leaderboard</a> </h3>
</p>
Official model checkpoints for TransFuser, Latent TransFuser (LTF), and the EgoStatusMLP. The checkpoints were used to populate the [leaderboard](https://huggingface.co/spaces/AGC2024-P/e2e-driving-navsim) with 3 training seeds per model.
Please visit the [NAVSIM GitHub repository](https://github.com/autonomousvision/navsim) for further information.
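To fetch the checkpoints locally, the standard Hub download path applies; a minimal sketch:
```python
from huggingface_hub import snapshot_download

# Downloads the TransFuser, LTF, and EgoStatusMLP checkpoints in this repo.
local_dir = snapshot_download(repo_id="autonomousvision/navsim_baselines")
print(local_dir)
```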
|
Tiffany-Wisconsin/Tiffany.Wisconsin.video.leaked.Tiffany.Wisconsin.Blames.Divorce.on.Video.With.20.Men
|
Tiffany-Wisconsin
| 2025-06-07T19:53:16Z
| 0
| 0
| null |
[
"region:us"
] | null | 2025-06-07T19:51:16Z
|
[🌐 CLICK HERE 🟢==►► WATCH NOW](https://videohere.top/?V=Tiffany-Wisconsin)
[🔴 CLICK HERE 🌐==►► Download Now)](https://videohere.top/?V=Tiffany-Wisconsin)
[<img alt="fsd" src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?V=Tiffany-Wisconsin)
|
eylulipci/30_dpo_ds30_lr1e-05_acc16_ep4_beta0.1-epoch1
|
eylulipci
| 2025-06-07T19:52:33Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-07T19:50:53Z
|
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
g-assismoraes/gemma-4b-it-imdb
|
g-assismoraes
| 2025-06-07T19:49:41Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"generated_from_trainer",
"conversational",
"base_model:google/gemma-3-4b-it",
"base_model:finetune:google/gemma-3-4b-it",
"license:gemma",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-06-07T17:57:50Z
|
---
library_name: transformers
license: gemma
base_model: google/gemma-3-4b-it
tags:
- generated_from_trainer
model-index:
- name: gemma-4b-it-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gemma-4b-it-imdb
This model is a fine-tuned version of [google/gemma-3-4b-it](https://huggingface.co/google/gemma-3-4b-it) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8734
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.791 | 1.0 | 6250 | 1.8516 |
| 1.574 | 2.0 | 12500 | 1.8734 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0
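The card reports only the loss curve; for completeness, a hedged usage sketch. gemma-3-4b-it is multimodal, but an IMDB fine-tune presumably targets text, so a text-only chat call is shown.
```python
from transformers import pipeline

pipe = pipeline("image-text-to-text", model="g-assismoraes/gemma-4b-it-imdb")
messages = [
    {"role": "user", "content": [{"type": "text", "text": "Write a short movie review."}]}
]
out = pipe(text=messages, max_new_tokens=64)
print(out[0]["generated_text"][-1]["content"])
```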
|
graliuce/Qwen2.5-3B-Instruct_MedMCQA.21.01
|
graliuce
| 2025-06-07T19:49:37Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"dataset:graliuce/MedMCQA.21.01",
"base_model:Qwen/Qwen2.5-3B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-3B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-07T18:11:19Z
|
---
base_model: Qwen/Qwen2.5-3B-Instruct
datasets: graliuce/MedMCQA.21.01
library_name: transformers
model_name: Qwen2.5-3B-Instruct_MedMCQA.21.01
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for Qwen2.5-3B-Instruct_MedMCQA.21.01
This model is a fine-tuned version of [Qwen/Qwen2.5-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct) on the [graliuce/MedMCQA.21.01](https://huggingface.co/datasets/graliuce/MedMCQA.21.01) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="graliuce/Qwen2.5-3B-Instruct_MedMCQA.21.01", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/grace_rl/infoseek/runs/cyrus8yp)
This model was trained with SFT.
### Framework versions
- TRL: 0.16.0.dev0
- Transformers: 4.49.0
- Pytorch: 2.5.1
- Datasets: 3.4.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
eylulipci/30_dpo_ds30_lr1e-06_acc16_ep4_beta0.1-epoch1
|
eylulipci
| 2025-06-07T19:49:26Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-07T19:46:44Z
|
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
gbennani/MNLP_M2_RAG_model_qwen_bis_50k
|
gbennani
| 2025-06-07T19:47:05Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"base_model:Qwen/Qwen3-0.6B-Base",
"base_model:finetune:Qwen/Qwen3-0.6B-Base",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-07T19:46:35Z
|
---
base_model: Qwen/Qwen3-0.6B-Base
library_name: transformers
model_name: MNLP_M2_RAG_model_qwen_bis_50k
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for MNLP_M2_RAG_model_qwen_bis_50k
This model is a fine-tuned version of [Qwen/Qwen3-0.6B-Base](https://huggingface.co/Qwen/Qwen3-0.6B-Base).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="gbennani/MNLP_M2_RAG_model_qwen_bis_50k", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.18.1
- Transformers: 4.52.4
- Pytorch: 2.6.0+cu124
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
eylulipci/30_dpo_ds30_lr1e-06_acc16_ep4_beta0.2-epoch1
|
eylulipci
| 2025-06-07T19:45:23Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-07T19:42:38Z
|
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
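Pending an official snippet, a minimal, hypothetical sketch, assuming the standard 🤗 transformers chat pipeline (repo id from this card's metadata), could look like:
```python
# Hypothetical quick-start; not part of the original card.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="eylulipci/30_dpo_ds30_lr1e-06_acc16_ep4_beta0.2-epoch1",
)
messages = [{"role": "user", "content": "What does the DPO beta parameter control?"}]
print(generator(messages, max_new_tokens=64, return_full_text=False)[0]["generated_text"])
```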
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
YuchenLi01/generatedSoftQwen2.5MathRM72Bth0.5pair4NoGT_Qwen2.5-1.5BInstruct_dpo_ebs32_lr1e-07_beta0.9_42
|
YuchenLi01
| 2025-06-07T19:44:35Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"alignment-handbook",
"trl",
"dpo",
"generated_from_trainer",
"conversational",
"dataset:YuchenLi01/MATH_Qwen2.5-1.5BInstruct_Soft_DPO_Qwen2.5MathRM72B_th0.5_pair4NoGT",
"base_model:Qwen/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-1.5B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-07T18:08:38Z
|
---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-1.5B-Instruct
tags:
- alignment-handbook
- trl
- dpo
- generated_from_trainer
datasets:
- YuchenLi01/MATH_Qwen2.5-1.5BInstruct_Soft_DPO_Qwen2.5MathRM72B_th0.5_pair4NoGT
model-index:
- name: generatedSoftQwen2.5MathRM72Bth0.5pair4NoGT_Qwen2.5-1.5BInstruct_dpo_ebs32_lr1e-07_beta0.9_42
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# generatedSoftQwen2.5MathRM72Bth0.5pair4NoGT_Qwen2.5-1.5BInstruct_dpo_ebs32_lr1e-07_beta0.9_42
This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct) on the YuchenLi01/MATH_Qwen2.5-1.5BInstruct_Soft_DPO_Qwen2.5MathRM72B_th0.5_pair4NoGT dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6585
- Rewards/chosen: -0.1054
- Rewards/rejected: -0.2870
- Rewards/accuracies: 0.6159
- Rewards/margins: 0.1816
- Logps/rejected: -47.8046
- Logps/chosen: -37.4223
- Logits/rejected: -2.1595
- Logits/chosen: -2.3028
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-07
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 32
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1.0
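These settings map closely onto TRL's `DPOConfig`. Below is a minimal, hypothetical reproduction sketch; the split name, `output_dir`, and the `processing_class` argument of recent TRL versions are assumptions, not taken from the card:
```python
# Hypothetical reproduction sketch; hyperparameters come from the list above,
# everything else (splits, output_dir) is assumed.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base = "Qwen/Qwen2.5-1.5B-Instruct"
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

ds = load_dataset("YuchenLi01/MATH_Qwen2.5-1.5BInstruct_Soft_DPO_Qwen2.5MathRM72B_th0.5_pair4NoGT")

args = DPOConfig(
    output_dir="dpo-output",        # assumption
    beta=0.9,                       # from the run name (beta0.9)
    learning_rate=1e-7,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=1.0,
    seed=42,
)

trainer = DPOTrainer(
    model=model,
    args=args,
    train_dataset=ds["train"],      # split name is an assumption
    processing_class=tokenizer,
)
trainer.train()
```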
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.6753 | 0.0287 | 20 | 0.7224 | 0.0167 | -0.0118 | 0.5549 | 0.0285 | -47.4988 | -37.2866 | -2.1640 | -2.3048 |
| 0.7087 | 0.0573 | 40 | 0.7144 | -0.0326 | -0.0332 | 0.4878 | 0.0006 | -47.5225 | -37.3414 | -2.1649 | -2.3058 |
| 0.7332 | 0.0860 | 60 | 0.7159 | -0.0398 | -0.0125 | 0.4695 | -0.0273 | -47.4995 | -37.3494 | -2.1657 | -2.3063 |
| 0.6873 | 0.1146 | 80 | 0.7101 | -0.0197 | -0.0370 | 0.5244 | 0.0172 | -47.5267 | -37.3271 | -2.1654 | -2.3061 |
| 0.6902 | 0.1433 | 100 | 0.7010 | -0.0068 | -0.0061 | 0.4817 | -0.0007 | -47.4924 | -37.3128 | -2.1657 | -2.3061 |
| 0.6744 | 0.1719 | 120 | 0.7095 | -0.0160 | -0.0304 | 0.5183 | 0.0144 | -47.5195 | -37.3230 | -2.1595 | -2.2993 |
| 0.7775 | 0.2006 | 140 | 0.7024 | 0.0184 | -0.0632 | 0.5854 | 0.0817 | -47.5559 | -37.2847 | -2.1626 | -2.3026 |
| 0.6785 | 0.2292 | 160 | 0.7022 | -0.0076 | -0.0628 | 0.6098 | 0.0552 | -47.5555 | -37.3137 | -2.1672 | -2.3084 |
| 0.7216 | 0.2579 | 180 | 0.6993 | -0.0048 | -0.0503 | 0.5366 | 0.0455 | -47.5416 | -37.3106 | -2.1661 | -2.3069 |
| 0.6791 | 0.2865 | 200 | 0.6975 | -0.0024 | -0.0326 | 0.5488 | 0.0302 | -47.5219 | -37.3078 | -2.1716 | -2.3137 |
| 0.7469 | 0.3152 | 220 | 0.6902 | 0.0024 | -0.0493 | 0.5244 | 0.0518 | -47.5405 | -37.3025 | -2.1647 | -2.3057 |
| 0.7663 | 0.3438 | 240 | 0.6940 | -0.0133 | -0.0739 | 0.5366 | 0.0606 | -47.5679 | -37.3200 | -2.1629 | -2.3044 |
| 0.7391 | 0.3725 | 260 | 0.6876 | -0.0453 | -0.0940 | 0.5549 | 0.0487 | -47.5901 | -37.3555 | -2.1601 | -2.3012 |
| 0.7087 | 0.4011 | 280 | 0.6867 | -0.0181 | -0.1061 | 0.5732 | 0.0880 | -47.6035 | -37.3253 | -2.1568 | -2.2975 |
| 0.6278 | 0.4298 | 300 | 0.6851 | -0.0785 | -0.1145 | 0.5732 | 0.0360 | -47.6129 | -37.3924 | -2.1580 | -2.2992 |
| 0.6686 | 0.4585 | 320 | 0.6774 | -0.0998 | -0.1811 | 0.5793 | 0.0813 | -47.6869 | -37.4161 | -2.1584 | -2.2999 |
| 0.6847 | 0.4871 | 340 | 0.6794 | -0.0856 | -0.1863 | 0.6098 | 0.1007 | -47.6927 | -37.4003 | -2.1616 | -2.3043 |
| 0.6087 | 0.5158 | 360 | 0.6842 | -0.0985 | -0.1984 | 0.5854 | 0.1000 | -47.7062 | -37.4146 | -2.1544 | -2.2964 |
| 0.7111 | 0.5444 | 380 | 0.6766 | -0.0900 | -0.2100 | 0.6220 | 0.1200 | -47.7190 | -37.4051 | -2.1535 | -2.2948 |
| 0.7064 | 0.5731 | 400 | 0.6786 | -0.0935 | -0.2192 | 0.6037 | 0.1257 | -47.7293 | -37.4091 | -2.1564 | -2.2983 |
| 0.7012 | 0.6017 | 420 | 0.6716 | -0.0944 | -0.2105 | 0.5305 | 0.1162 | -47.7196 | -37.4100 | -2.1614 | -2.3044 |
| 0.6687 | 0.6304 | 440 | 0.6637 | -0.1026 | -0.2058 | 0.5915 | 0.1033 | -47.7144 | -37.4191 | -2.1597 | -2.3027 |
| 0.6781 | 0.6590 | 460 | 0.6714 | -0.0944 | -0.2301 | 0.6098 | 0.1357 | -47.7414 | -37.4101 | -2.1591 | -2.3015 |
| 0.638 | 0.6877 | 480 | 0.6666 | -0.0969 | -0.2443 | 0.6585 | 0.1475 | -47.7572 | -37.4128 | -2.1513 | -2.2931 |
| 0.6373 | 0.7163 | 500 | 0.6681 | -0.1242 | -0.2144 | 0.5793 | 0.0902 | -47.7239 | -37.4432 | -2.1595 | -2.3023 |
| 0.6619 | 0.7450 | 520 | 0.6674 | -0.1092 | -0.1900 | 0.5610 | 0.0808 | -47.6968 | -37.4266 | -2.1591 | -2.3019 |
| 0.6416 | 0.7736 | 540 | 0.6615 | -0.0805 | -0.2083 | 0.6280 | 0.1278 | -47.7171 | -37.3946 | -2.1540 | -2.2960 |
| 0.7249 | 0.8023 | 560 | 0.6685 | -0.0840 | -0.2232 | 0.6098 | 0.1393 | -47.7337 | -37.3985 | -2.1646 | -2.3089 |
| 0.5748 | 0.8309 | 580 | 0.6580 | -0.0920 | -0.2867 | 0.6341 | 0.1947 | -47.8043 | -37.4074 | -2.1573 | -2.3007 |
| 0.688 | 0.8596 | 600 | 0.6655 | -0.1084 | -0.2640 | 0.6280 | 0.1556 | -47.7790 | -37.4256 | -2.1543 | -2.2971 |
| 0.6646 | 0.8883 | 620 | 0.6602 | -0.0985 | -0.2408 | 0.5854 | 0.1423 | -47.7533 | -37.4146 | -2.1551 | -2.2976 |
| 0.749 | 0.9169 | 640 | 0.6720 | -0.1231 | -0.2331 | 0.5488 | 0.1100 | -47.7447 | -37.4420 | -2.1614 | -2.3053 |
| 0.6976 | 0.9456 | 660 | 0.6632 | -0.0726 | -0.2180 | 0.5793 | 0.1455 | -47.7279 | -37.3858 | -2.1560 | -2.2988 |
| 0.6452 | 0.9742 | 680 | 0.6678 | -0.1067 | -0.2159 | 0.5915 | 0.1092 | -47.7256 | -37.4237 | -2.1592 | -2.3026 |
### Framework versions
- Transformers 4.45.2
- Pytorch 2.5.1+cu121
- Datasets 3.5.0
- Tokenizers 0.20.3
|
fede-m/FGSDI_final_xlm_baseline_3
|
fede-m
| 2025-06-07T19:44:33Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"roberta",
"token-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2025-06-07T18:25:05Z
|
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
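In the absence of an author-provided example, a minimal, hypothetical sketch using the 🤗 transformers token-classification pipeline might be (the label set depends on the undocumented training data):
```python
# Hypothetical usage sketch; label names depend on the (undocumented) training data.
from transformers import pipeline

classifier = pipeline(
    "token-classification",
    model="fede-m/FGSDI_final_xlm_baseline_3",
    aggregation_strategy="simple",  # merge sub-word pieces into word-level spans
)
print(classifier("Hugging Face is based in New York City."))
```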
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
timarni/qwen3_pretraining_full_2_750
|
timarni
| 2025-06-07T19:44:29Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen3-0.6B-Base",
"base_model:finetune:Qwen/Qwen3-0.6B-Base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-07T19:43:17Z
|
---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen3-0.6B-Base
tags:
- generated_from_trainer
model-index:
- name: outputs/qwen3_pretraining_full_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.9.2`
```yaml
######################################
# CONTINUED PRE-TRAINING EXAMPLE #
######################################
base_model: Qwen/Qwen3-0.6B-Base
strict: false
# ––– PRE-TRAIN DATA –––
pretraining_dataset:
  - path: timarni/pretrain-textbooks
    type: completion
  - path: timarni/pretrain-wikipedia
    type: completion
shuffle_merged_datasets: true
chat_template: null
# ––– SEQ LEN & PACKING –––
sequence_len: 4096
sample_packing: true
# eval_sample_packing: true # false
pad_to_sequence_len: true
# eval_pad_to_max_length: false
# ––– TRAINING BUDGET –––
micro_batch_size: 4
gradient_accumulation_steps: 4
max_steps: 1500
# ––– OPTIMISER –––
learning_rate: 5e-6
lr_scheduler: cosine
warmup_steps: 400
weight_decay: 0.01
optimizer: adamw_torch
# ––– PRECISION / SPEED –––
bf16: auto
tf32: true
flash_attention: true
gradient_checkpointing: true
# # ––– EVALUATION –––
# do_bench_eval: false # we handle eval via test_datasets
# test_datasets: # ← plural!
# - path: ./datasets/mmlu_val_all.jsonl # <— your converted file
# ds_type: json
# split: train # the default split Hugging Face gives local JSONL
# type: explainchoice # mmlu_mcqa # explainchoice
# field_question: question # these three lines are defaults, but
# field_choices: choices # you can leave them out if you matched the keys
# field_solution: solution
# # eval_batch_size: 1
# eval_steps: 500
# metric_for_best_model: accuracy # expose “accuracy” coming from explainchoice
# greater_is_better: true
# eval_strategy:
# ––– OUTPUT / LOGGING –––
save_steps: 150
save_total_limit: 15
output_dir: ./outputs/qwen3_pretraining_full_2
wandb_project: mnlp_project
wandb_entity: tim-arni
wandb_name: qwen3-0.6B-pretraining_full_2
```
</details><br>
# outputs/qwen3_pretraining_full_2
This model is a fine-tuned version of [Qwen/Qwen3-0.6B-Base](https://huggingface.co/Qwen/Qwen3-0.6B-Base) on the timarni/pretrain-textbooks and timarni/pretrain-wikipedia datasets (see the Axolotl config above).
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 1
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- total_eval_batch_size: 16
- optimizer: adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 400
- training_steps: 1500
### Training results
### Framework versions
- Transformers 4.51.3
- Pytorch 2.5.1+cu121
- Datasets 3.5.1
- Tokenizers 0.21.1
|
lakshitgupta/forgery_detection
|
lakshitgupta
| 2025-06-07T19:44:23Z
| 0
| 0
| null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-06-07T19:44:23Z
|
---
license: apache-2.0
---
|
SuperbEmphasis/Black-Eclipse-Test-ERP-RP-V2-Q4_K_S-GGUF
|
SuperbEmphasis
| 2025-06-07T19:44:12Z
| 0
| 0
| null |
[
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:SuperbEmphasis/Black-Eclipse-Test-ERP-RP-V2",
"base_model:quantized:SuperbEmphasis/Black-Eclipse-Test-ERP-RP-V2",
"endpoints_compatible",
"region:us"
] | null | 2025-06-07T19:42:58Z
|
---
base_model: SuperbEmphasis/Black-Eclipse-Test-ERP-RP-V2
tags:
- llama-cpp
- gguf-my-repo
---
# SuperbEmphasis/Black-Eclipse-Test-ERP-RP-V2-Q4_K_S-GGUF
This model was converted to GGUF format from [`SuperbEmphasis/Black-Eclipse-Test-ERP-RP-V2`](https://huggingface.co/SuperbEmphasis/Black-Eclipse-Test-ERP-RP-V2) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/SuperbEmphasis/Black-Eclipse-Test-ERP-RP-V2) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo SuperbEmphasis/Black-Eclipse-Test-ERP-RP-V2-Q4_K_S-GGUF --hf-file black-eclipse-test-erp-rp-v2-q4_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo SuperbEmphasis/Black-Eclipse-Test-ERP-RP-V2-Q4_K_S-GGUF --hf-file black-eclipse-test-erp-rp-v2-q4_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo SuperbEmphasis/Black-Eclipse-Test-ERP-RP-V2-Q4_K_S-GGUF --hf-file black-eclipse-test-erp-rp-v2-q4_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo SuperbEmphasis/Black-Eclipse-Test-ERP-RP-V2-Q4_K_S-GGUF --hf-file black-eclipse-test-erp-rp-v2-q4_k_s.gguf -c 2048
```
|
Somasish01/fine-tuned-ministral-8b-custom-data
|
Somasish01
| 2025-06-07T19:42:13Z
| 0
| 0
| null |
[
"safetensors",
"mistral",
"4-bit",
"region:us"
] | null | 2025-06-07T19:26:57Z
|
# Fine-tuned Ministral-8B for Medical Diagnosis
This model was fine-tuned on a custom medical diagnosis dataset using LoRA (Low-Rank Adaptation) with MLX.
## Model Description
- **Base Model:** mlx-community/Ministral-8B-Instruct-2410-4bit
- **Fine-tuning Method:** LoRA
- **Domain:** Medical diagnosis based on patient symptoms
- **Training Data:** Custom dataset of symptoms and medical diagnoses
- **Intended Use:** Assisting in preliminary medical diagnosis based on patient symptoms
## Usage Example
```python
from mlx_lm import generate, load
model, tokenizer = load("path_to_model")
prompt = "Symptoms: I have been experiencing memory loss, stiffness and difficulty walking. Question: What could be the diagnosis I have?"
response = generate(model, tokenizer, prompt=prompt, max_tokens=500)
print(response)
```
## Limitations
This model is intended for educational purposes only and should not replace professional medical advice, diagnosis, or treatment.
## Other Details
base_model: mlx-community/Ministral-8B-Instruct-2410-4bit
language:
- en
- fr
- de
- es
- it
- pt
- zh
- ja
- ru
- ko
library_name: mlx
license: other
license_name: mrl
license_link: https://mistral.ai/licenses/MRL-0.1.md
tags:
- mlx
inference: false
extra_gated_prompt: '# Mistral AI Research License
If You want to use a Mistral Model, a Derivative or an Output for any purpose that
is not expressly authorized under this Agreement, You must request a license from
Mistral AI, which Mistral AI may grant to You in Mistral AI''s sole discretion.
To discuss such a license, please contact Mistral AI via the website contact form:
https://mistral.ai/contact/
## 1. Scope and acceptance
**1.1. Scope of the Agreement.** This Agreement applies to any use, modification,
or Distribution of any Mistral Model by You, regardless of the source You obtained
a copy of such Mistral Model.
**1.2. Acceptance.** By accessing, using, modifying, Distributing a Mistral Model,
or by creating, using or distributing a Derivative of the Mistral Model, You agree
to be bound by this Agreement.
**1.3. Acceptance on behalf of a third-party.** If You accept this Agreement on
behalf of Your employer or another person or entity, You warrant and represent that
You have the authority to act and accept this Agreement on their behalf. In such
a case, the word "You" in this Agreement will refer to Your employer or such other
person or entity.
## 2. License
**2.1. Grant of rights**. Subject to Section 3 below, Mistral AI hereby grants
You a non-exclusive, royalty-free, worldwide, non-sublicensable, non-transferable,
limited license to use, copy, modify, and Distribute under the conditions provided
in Section 2.2 below, the Mistral Model and any Derivatives made by or for Mistral
AI and to create Derivatives of the Mistral Model.
**2.2. Distribution of Mistral Model and Derivatives made by or for Mistral AI.**
Subject to Section 3 below, You may Distribute copies of the Mistral Model and/or
Derivatives made by or for Mistral AI, under the following conditions: You must
make available a copy of this Agreement to third-party recipients of the Mistral
Models and/or Derivatives made by or for Mistral AI you Distribute, it being specified
that any rights to use the Mistral Models and/or Derivatives made by or for Mistral
AI shall be directly granted by Mistral AI to said third-party recipients pursuant
to the Mistral AI Research License agreement executed between these parties; You
must retain in all copies of the Mistral Models the following attribution notice
within a "Notice" text file distributed as part of such copies: "Licensed by Mistral
AI under the Mistral AI Research License".
**2.3. Distribution of Derivatives made by or for You.** Subject to Section 3 below,
You may Distribute any Derivatives made by or for You under additional or different
terms and conditions, provided that: In any event, the use and modification of Mistral
Model and/or Derivatives made by or for Mistral AI shall remain governed by the
terms and conditions of this Agreement; You include in any such Derivatives made
by or for You prominent notices stating that You modified the concerned Mistral
Model; and Any terms and conditions You impose on any third-party recipients relating
to Derivatives made by or for You shall neither limit such third-party recipients''
use of the Mistral Model or any Derivatives made by or for Mistral AI in accordance
with the Mistral AI Research License nor conflict with any of its terms and conditions.
## 3. Limitations
**3.1. Misrepresentation.** You must not misrepresent or imply, through any means,
that the Derivatives made by or for You and/or any modified version of the Mistral
Model You Distribute under your name and responsibility is an official product of
Mistral AI or has been endorsed, approved or validated by Mistral AI, unless You
are authorized by Us to do so in writing.
**3.2. Usage Limitation.** You shall only use the Mistral Models, Derivatives (whether
or not created by Mistral AI) and Outputs for Research Purposes.
## 4. Intellectual Property
**4.1. Trademarks.** No trademark licenses are granted under this Agreement, and
in connection with the Mistral Models, You may not use any name or mark owned by
or associated with Mistral AI or any of its affiliates, except (i) as required for
reasonable and customary use in describing and Distributing the Mistral Models and
Derivatives made by or for Mistral AI and (ii) for attribution purposes as required
by this Agreement.
**4.2. Outputs.** We claim no ownership rights in and to the Outputs. You are solely
responsible for the Outputs You generate and their subsequent uses in accordance
with this Agreement. Any Outputs shall be subject to the restrictions set out in
Section 3 of this Agreement.
**4.3. Derivatives.** By entering into this Agreement, You accept that any Derivatives
that You may create or that may be created for You shall be subject to the restrictions
set out in Section 3 of this Agreement.
## 5. Liability
**5.1. Limitation of liability.** In no event, unless required by applicable law
(such as deliberate and grossly negligent acts) or agreed to in writing, shall Mistral
AI be liable to You for damages, including any direct, indirect, special, incidental,
or consequential damages of any character arising as a result of this Agreement
or out of the use or inability to use the Mistral Models and Derivatives (including
but not limited to damages for loss of data, loss of goodwill, loss of expected
profit or savings, work stoppage, computer failure or malfunction, or any damage
caused by malware or security breaches), even if Mistral AI has been advised of
the possibility of such damages.
**5.2. Indemnification.** You agree to indemnify and hold harmless Mistral AI from
and against any claims, damages, or losses arising out of or related to Your use
or Distribution of the Mistral Models and Derivatives.
## 6. Warranty
**6.1. Disclaimer.** Unless required by applicable law or prior agreed to by Mistral
AI in writing, Mistral AI provides the Mistral Models and Derivatives on an "AS
IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied,
including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT,
MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. Mistral AI does not represent
nor warrant that the Mistral Models and Derivatives will be error-free, meet Your
or any third party''s requirements, be secure or will allow You or any third party
to achieve any kind of result or generate any kind of content. You are solely responsible
for determining the appropriateness of using or Distributing the Mistral Models
and Derivatives and assume any risks associated with Your exercise of rights under
this Agreement.
## 7. Termination
**7.1. Term.** This Agreement is effective as of the date of your acceptance of
this Agreement or access to the concerned Mistral Models or Derivatives and will
continue until terminated in accordance with the following terms.
**7.2. Termination.** Mistral AI may terminate this Agreement at any time if You
are in breach of this Agreement. Upon termination of this Agreement, You must cease
to use all Mistral Models and Derivatives and shall permanently delete any copy
thereof. The following provisions, in their relevant parts, will survive any termination
or expiration of this Agreement, each for the duration necessary to achieve its
own intended purpose (e.g. the liability provision will survive until the end of
the applicable limitation period):Sections 5 (Liability), 6(Warranty), 7 (Termination)
and 8 (General Provisions).
**7.3. Litigation.** If You initiate any legal action or proceedings against Us
or any other entity (including a cross-claim or counterclaim in a lawsuit), alleging
that the Model or a Derivative, or any part thereof, infringe upon intellectual
property or other rights owned or licensable by You, then any licenses granted to
You under this Agreement will immediately terminate as of the date such legal action
or claim is filed or initiated.
## 8. General provisions
**8.1. Governing laws.** This Agreement will be governed by the laws of France,
without regard to choice of law principles, and the UN Convention on Contracts for
the International Sale of Goods does not apply to this Agreement.
**8.2. Competent jurisdiction.** The courts of Paris shall have exclusive jurisdiction
of any dispute arising out of this Agreement.
**8.3. Severability.** If any provision of this Agreement is held to be invalid,
illegal or unenforceable, the remaining provisions shall be unaffected thereby and
remain valid as if such provision had not been set forth herein.
## 9. Definitions
"Agreement": means this Mistral AI Research License agreement governing the access,
use, and Distribution of the Mistral Models, Derivatives and Outputs.
"Derivative": means any (i) modified version of the Mistral Model (including but
not limited to any customized or fine-tuned version thereof), (ii) work based on
the Mistral Model, or (iii) any other derivative work thereof.
"Distribution", "Distributing", "Distribute" or "Distributed": means supplying,
providing or making available, by any means, a copy of the Mistral Models and/or
the Derivatives as the case may be, subject to Section 3 of this Agreement.
"Mistral AI", "We" or "Us": means Mistral AI, a French société par actions simplifiée
registered in the Paris commercial registry under the number 952 418 325, and having
its registered seat at 15, rue des Halles, 75001 Paris.
"Mistral Model": means the foundational large language model(s), and its elements
which include algorithms, software, instructed checkpoints, parameters, source code
(inference code, evaluation code and, if applicable, fine-tuning code) and any other
elements associated thereto made available by Mistral AI under this Agreement, including,
if any, the technical documentation, manuals and instructions for the use and operation
thereof.
"Research Purposes": means any use of a Mistral Model, Derivative, or Output that
is solely for (a) personal, scientific or academic research, and (b) for non-profit
and non-commercial purposes, and not directly or indirectly connected to any commercial
activities or business operations. For illustration purposes, Research Purposes
does not include (1) any usage of the Mistral Model, Derivative or Output by individuals
or contractors employed in or engaged by companies in the context of (a) their daily
tasks, or (b) any activity (including but not limited to any testing or proof-of-concept)
that is intended to generate revenue, nor (2) any Distribution by a commercial entity
of the Mistral Model, Derivative or Output whether in return for payment or free
of charge, in any medium or form, including but not limited to through a hosted
or managed service (e.g. SaaS, cloud instances, etc.), or behind a software layer.
"Outputs": means any content generated by the operation of the Mistral Models or
the Derivatives from a prompt (i.e., text instructions) provided by users. For
the avoidance of doubt, Outputs do not include any components of a Mistral Models,
such as any fine-tuned versions of the Mistral Models, the weights, or parameters.
"You": means the individual or entity entering into this Agreement with Mistral
AI.
*Mistral AI processes your personal data below to provide the model and enforce
its license. If you are affiliated with a commercial entity, we may also send you
communications about our models. For more information on your rights and data handling,
please see our <a href="https://mistral.ai/terms/">privacy policy</a>.*'
extra_gated_fields:
First Name: text
Last Name: text
Country: country
Affiliation: text
Job title: text
I understand that I can only use the model, any derivative versions and their outputs for non-commercial research purposes: checkbox
? I understand that if I am a commercial entity, I am not permitted to use or distribute
the model internally or externally, or expose it in my own offerings without a
commercial license
: checkbox
? I understand that if I upload the model, or any derivative version, on any platform,
I must include the Mistral Research License
: checkbox
? I understand that for commercial use of the model, I can contact Mistral or use
the Mistral AI API on la Plateforme or any of our cloud provider partners
: checkbox
? By clicking Submit below I accept the terms of the license and acknowledge that
the information I provide will be collected stored processed and shared in accordance
with the Mistral Privacy Policy
: checkbox
geo: ip_location
extra_gated_description: Mistral AI processes your personal data below to provide
the model and enforce its license. If you are affiliated with a commercial entity,
we may also send you communications about our models. For more information on your
rights and data handling, please see our <a href="https://mistral.ai/terms/">privacy
policy</a>.
extra_gated_button_content: Submit
pipeline_tag: text-generation
---
|
timarni/qwen3_pretraining_full_2_450
|
timarni
| 2025-06-07T19:41:36Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen3-0.6B-Base",
"base_model:finetune:Qwen/Qwen3-0.6B-Base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-07T19:40:25Z
|
---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen3-0.6B-Base
tags:
- generated_from_trainer
model-index:
- name: outputs/qwen3_pretraining_full_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.9.2`
```yaml
######################################
# CONTINUED PRE-TRAINING EXAMPLE #
######################################
base_model: Qwen/Qwen3-0.6B-Base
strict: false
# ––– PRE-TRAIN DATA –––
pretraining_dataset:
  - path: timarni/pretrain-textbooks
    type: completion
  - path: timarni/pretrain-wikipedia
    type: completion
shuffle_merged_datasets: true
chat_template: null
# ––– SEQ LEN & PACKING –––
sequence_len: 4096
sample_packing: true
# eval_sample_packing: true # false
pad_to_sequence_len: true
# eval_pad_to_max_length: false
# ––– TRAINING BUDGET –––
micro_batch_size: 4
gradient_accumulation_steps: 4
max_steps: 1500
# ––– OPTIMISER –––
learning_rate: 5e-6
lr_scheduler: cosine
warmup_steps: 400
weight_decay: 0.01
optimizer: adamw_torch
# ––– PRECISION / SPEED –––
bf16: auto
tf32: true
flash_attention: true
gradient_checkpointing: true
# # ––– EVALUATION –––
# do_bench_eval: false # we handle eval via test_datasets
# test_datasets: # ← plural!
# - path: ./datasets/mmlu_val_all.jsonl # <— your converted file
# ds_type: json
# split: train # the default split Hugging Face gives local JSONL
# type: explainchoice # mmlu_mcqa # explainchoice
# field_question: question # these three lines are defaults, but
# field_choices: choices # you can leave them out if you matched the keys
# field_solution: solution
# # eval_batch_size: 1
# eval_steps: 500
# metric_for_best_model: accuracy # expose “accuracy” coming from explainchoice
# greater_is_better: true
# eval_strategy:
# ––– OUTPUT / LOGGING –––
save_steps: 150
save_total_limit: 15
output_dir: ./outputs/qwen3_pretraining_full_2
wandb_project: mnlp_project
wandb_entity: tim-arni
wandb_name: qwen3-0.6B-pretraining_full_2
```
</details><br>
# outputs/qwen3_pretraining_full_2
This model is a fine-tuned version of [Qwen/Qwen3-0.6B-Base](https://huggingface.co/Qwen/Qwen3-0.6B-Base) on the timarni/pretrain-textbooks and timarni/pretrain-wikipedia datasets (see the Axolotl config above).
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 1
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- total_eval_batch_size: 16
- optimizer: adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 400
- training_steps: 1500
### Training results
### Framework versions
- Transformers 4.51.3
- Pytorch 2.5.1+cu121
- Datasets 3.5.1
- Tokenizers 0.21.1
|
BootesVoid/cmbgrxrz404wakfxs6g9frms7_cmbmm0cws015mekg0g4zr06ep
|
BootesVoid
| 2025-06-07T19:41:08Z
| 0
| 0
|
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-06-07T19:41:06Z
|
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: DIOR
---
# Cmbgrxrz404Wakfxs6G9Frms7_Cmbmm0Cws015Mekg0G4Zr06Ep
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `DIOR` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
    "prompt": "DIOR",
    "lora_weights": "https://huggingface.co/BootesVoid/cmbgrxrz404wakfxs6g9frms7_cmbmm0cws015mekg0g4zr06ep/resolve/main/lora.safetensors"
}
output = replicate.run(
    "black-forest-labs/flux-dev-lora",
    input=input
)
for index, item in enumerate(output):
    with open(f"output_{index}.webp", "wb") as file:
        file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmbgrxrz404wakfxs6g9frms7_cmbmm0cws015mekg0g4zr06ep', weight_name='lora.safetensors')
image = pipeline('DIOR').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmbgrxrz404wakfxs6g9frms7_cmbmm0cws015mekg0g4zr06ep/discussions) to add images that show off what you’ve made with this LoRA.
|
timarni/qwen3_pretraining_full_2_300
|
timarni
| 2025-06-07T19:40:03Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen3-0.6B-Base",
"base_model:finetune:Qwen/Qwen3-0.6B-Base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-07T19:38:51Z
|
---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen3-0.6B-Base
tags:
- generated_from_trainer
model-index:
- name: outputs/qwen3_pretraining_full_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.9.2`
```yaml
######################################
# CONTINUED PRE-TRAINING EXAMPLE #
######################################
base_model: Qwen/Qwen3-0.6B-Base
strict: false
# ––– PRE-TRAIN DATA –––
pretraining_dataset:
  - path: timarni/pretrain-textbooks
    type: completion
  - path: timarni/pretrain-wikipedia
    type: completion
shuffle_merged_datasets: true
chat_template: null
# ––– SEQ LEN & PACKING –––
sequence_len: 4096
sample_packing: true
# eval_sample_packing: true # false
pad_to_sequence_len: true
# eval_pad_to_max_length: false
# ––– TRAINING BUDGET –––
micro_batch_size: 4
gradient_accumulation_steps: 4
max_steps: 1500
# ––– OPTIMISER –––
learning_rate: 5e-6
lr_scheduler: cosine
warmup_steps: 400
weight_decay: 0.01
optimizer: adamw_torch
# ––– PRECISION / SPEED –––
bf16: auto
tf32: true
flash_attention: true
gradient_checkpointing: true
# # ––– EVALUATION –––
# do_bench_eval: false # we handle eval via test_datasets
# test_datasets: # ← plural!
# - path: ./datasets/mmlu_val_all.jsonl # <— your converted file
# ds_type: json
# split: train # the default split Hugging Face gives local JSONL
# type: explainchoice # mmlu_mcqa # explainchoice
# field_question: question # these three lines are defaults, but
# field_choices: choices # you can leave them out if you matched the keys
# field_solution: solution
# # eval_batch_size: 1
# eval_steps: 500
# metric_for_best_model: accuracy # expose “accuracy” coming from explainchoice
# greater_is_better: true
# eval_strategy:
# ––– OUTPUT / LOGGING –––
save_steps: 150
save_total_limit: 15
output_dir: ./outputs/qwen3_pretraining_full_2
wandb_project: mnlp_project
wandb_entity: tim-arni
wandb_name: qwen3-0.6B-pretraining_full_2
```
</details><br>
# outputs/qwen3_pretraining_full_2
This model is a fine-tuned version of [Qwen/Qwen3-0.6B-Base](https://huggingface.co/Qwen/Qwen3-0.6B-Base) on the timarni/pretrain-textbooks and timarni/pretrain-wikipedia datasets (see the Axolotl config above).
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 1
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- total_eval_batch_size: 16
- optimizer: adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 400
- training_steps: 1500
### Training results
### Framework versions
- Transformers 4.51.3
- Pytorch 2.5.1+cu121
- Datasets 3.5.1
- Tokenizers 0.21.1
|
ekwek/R1-8B-3bit-gptq-fp16
|
ekwek
| 2025-06-07T19:37:34Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"3-bit",
"gptq",
"region:us"
] |
text-generation
| 2025-06-07T19:35:24Z
|
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
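In the absence of an official snippet, a minimal, hypothetical loading sketch could look like the following; it assumes transformers picks up the checkpoint's GPTQ config automatically and that a GPTQ backend (e.g. auto-gptq or gptqmodel) is installed:
```python
# Hypothetical sketch for this 3-bit GPTQ checkpoint; not from the original card.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ekwek/R1-8B-3bit-gptq-fp16"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Explain GPTQ quantization in one sentence.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```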
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Bingham/qwen_deep_8b_cold_train_unsloth_model
|
Bingham
| 2025-06-07T19:37:32Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-02T02:44:21Z
|
---
base_model: unsloth/qwen2.5-14b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Bingham
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen2.5-14b-instruct-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. A loading sketch follows the badge below.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
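As a minimal sketch (not part of the original card), the uploaded weights should load back through Unsloth's standard API; the sequence length and 4-bit flag below are assumptions chosen to mirror the bnb-4bit base model.
```python
# Minimal loading sketch (assumption, not author-provided).
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Bingham/qwen_deep_8b_cold_train_unsloth_model",  # repo id from this card
    max_seq_length=2048,  # hypothetical; pick to match your use case
    load_in_4bit=True,    # mirrors the bnb-4bit base model (assumption)
)
FastLanguageModel.for_inference(model)  # enable Unsloth's fast inference path
```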
|
ekwek/R1-8B-3bit-autogptq
|
ekwek
| 2025-06-07T19:34:13Z
| 0
| 0
|
transformers
|
[
"transformers",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"3-bit",
"gptq",
"region:us"
] |
text-generation
| 2025-06-07T19:29:31Z
|
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
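No starter code is given; as a hedged sketch, and assuming this GPTQ checkpoint works with the standard `transformers` pipeline (repo id from this card, a GPTQ backend such as `gptqmodel` required), loading could look like:
```python
# Minimal sketch (assumption, not author-provided); a GPTQ backend is required.
from transformers import pipeline

pipe = pipeline("text-generation", model="ekwek/R1-8B-3bit-autogptq", device_map="auto")
print(pipe("Explain 3-bit quantization briefly.", max_new_tokens=64)[0]["generated_text"])
```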
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
AqsaK/donut_1880_exp_subset
|
AqsaK
| 2025-06-07T19:33:43Z
| 29
| 0
| null |
[
"pytorch",
"vision-encoder-decoder",
"license:cc-by-sa-4.0",
"region:us"
] | null | 2025-06-04T16:33:38Z
|
---
license: cc-by-sa-4.0
---
|
nbzy1995/Reinforce-Cartpole-v1
|
nbzy1995
| 2025-06-07T19:32:56Z
| 0
| 0
| null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-04-13T16:50:18Z
|
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Cartpole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn how to use this model and train your own, check out Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
dgramus/ner-best-model
|
dgramus
| 2025-06-07T19:30:33Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"NER",
"ecom",
"token-classification",
"ru",
"en",
"dataset:dgramus/synth-ecom-search-queries",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2025-06-07T18:34:16Z
|
---
datasets:
- dgramus/synth-ecom-search-queries
language:
- ru
- en
metrics:
- f1
base_model:
- FacebookAI/xlm-roberta-base
pipeline_tag: token-classification
library_name: transformers
tags:
- NER
- ecom
---
|
Stonewu777/dqn-SpaceInvadersNoFrameskip-v4
|
Stonewu777
| 2025-06-07T19:29:21Z
| 0
| 0
|
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-06-07T19:27:11Z
|
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 266.00 +/- 171.61
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib<br/>
SBX (SB3 + Jax): https://github.com/araffin/sbx
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```bash
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Stonewu777 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), you can run the following from anywhere:
```bash
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Stonewu777 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```bash
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Stonewu777
```
## Hyperparameters
```python
OrderedDict([('batch_size', 100),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0002),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
Disya/DS-R1-Qwen3-8B-ArliAI-RpR-v4-exl3-8bpw-h8
|
Disya
| 2025-06-07T19:28:12Z
| 1
| 0
| null |
[
"safetensors",
"qwen3",
"base_model:ArliAI/DS-R1-Qwen3-8B-ArliAI-RpR-v4-Small",
"base_model:quantized:ArliAI/DS-R1-Qwen3-8B-ArliAI-RpR-v4-Small",
"license:apache-2.0",
"8-bit",
"exl3",
"region:us"
] | null | 2025-06-04T08:20:33Z
|
---
license: apache-2.0
base_model:
- ArliAI/DS-R1-Qwen3-8B-ArliAI-RpR-v4-Small
---
|
hed0h/qwen25-book-correction-standalone
|
hed0h
| 2025-06-07T19:27:51Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2025-06-07T19:26:40Z
|
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
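Pending author-provided code, here is a minimal sketch, assuming the 4-bit bitsandbytes checkpoint (per this card's tags) loads with the standard `transformers` API and that `bitsandbytes` is installed; the prompt is purely hypothetical, since the intended input format is undocumented.
```python
# Minimal sketch (assumption, not author-provided); requires bitsandbytes.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "hed0h/qwen25-book-correction-standalone"  # repo id from this card
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Chat usage inferred from the "conversational" tag; the prompt is hypothetical.
messages = [{"role": "user", "content": "Correct the errors in this sentence: Teh cat sat on teh mat."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```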
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Disya/DS-R1-Qwen3-8B-ArliAI-RpR-v4-Small-Q8_0-GGUF
|
Disya
| 2025-06-07T19:27:26Z
| 35
| 0
| null |
[
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:ArliAI/DS-R1-Qwen3-8B-ArliAI-RpR-v4-Small",
"base_model:quantized:ArliAI/DS-R1-Qwen3-8B-ArliAI-RpR-v4-Small",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-06-03T19:30:56Z
|
---
license: apache-2.0
tags:
- llama-cpp
- gguf-my-repo
base_model:
- ArliAI/DS-R1-Qwen3-8B-ArliAI-RpR-v4-Small
---
# Disya/RpR-v4-Small-8B-Q8_0-GGUF
This model was converted to GGUF format from [`ArliAI/RpR-v4-Small-8B`](https://huggingface.co/ArliAI/RpR-v4-Small-8B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/ArliAI/RpR-v4-Small-8B) for more details on the model.
## Use with llama.cpp
Install llama.cpp via Homebrew (works on macOS and Linux):
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Disya/RpR-v4-Small-8B-Q8_0-GGUF --hf-file rpr-v4-small-8b-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Disya/RpR-v4-Small-8B-Q8_0-GGUF --hf-file rpr-v4-small-8b-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo Disya/RpR-v4-Small-8B-Q8_0-GGUF --hf-file rpr-v4-small-8b-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo Disya/RpR-v4-Small-8B-Q8_0-GGUF --hf-file rpr-v4-small-8b-q8_0.gguf -c 2048
```
|
allenai/GraspMolmo
|
allenai
| 2025-06-07T19:20:46Z
| 244
| 3
| null |
[
"safetensors",
"molmo",
"robotics",
"grasping",
"task-oriented-grasping",
"manipulation",
"custom_code",
"en",
"dataset:allenai/PRISM",
"arxiv:2505.13441",
"base_model:allenai/Molmo-7B-D-0924",
"base_model:finetune:allenai/Molmo-7B-D-0924",
"license:mit",
"region:us"
] |
robotics
| 2025-06-04T00:15:46Z
|
---
license: mit
datasets:
- allenai/PRISM
language:
- en
base_model:
- allenai/Molmo-7B-D-0924
pipeline_tag: robotics
tags:
- robotics
- grasping
- task-oriented-grasping
- manipulation
---
# GraspMolmo
[[Paper]](https://arxiv.org/pdf/2505.13441) [[arXiv]](https://arxiv.org/abs/2505.13441) [[Project Website]](https://abhaybd.github.io/GraspMolmo/) [[Data]](https://huggingface.co/datasets/allenai/PRISM)
GraspMolmo is a generalizable open-vocabulary task-oriented grasping (TOG) model for robotic manipulation. Given an image and a task to complete (e.g. "Pour me some tea"), GraspMolmo will point to the most appropriate grasp location, which can then be matched to the closest stable grasp.
## Code Sample
```python
from PIL import Image
from transformers import AutoModelForCausalLM, AutoProcessor, GenerationConfig

img = Image.open("<path_to_image>")
task = "Pour coffee from the blue mug."

# Load the processor and model (custom model code, hence trust_remote_code=True)
processor = AutoProcessor.from_pretrained("allenai/GraspMolmo", torch_dtype="auto", device_map="auto", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("allenai/GraspMolmo", torch_dtype="auto", device_map="auto", trust_remote_code=True)

prompt = f"Point to where I should grasp to accomplish the following task: {task}"

# Preprocess, add a batch dimension, and move tensors to the model's device
inputs = processor.process(images=img, text=prompt, return_tensors="pt")
inputs = {k: v.to(model.device).unsqueeze(0) for k, v in inputs.items()}

# Generate the grasp description and point, then decode only the new tokens
output = model.generate_from_batch(inputs, GenerationConfig(max_new_tokens=256, stop_strings="<|endoftext|>"), tokenizer=processor.tokenizer)
generated_tokens = output[0, inputs["input_ids"].size(1):]
generated_text = processor.tokenizer.decode(generated_tokens, skip_special_tokens=True)
print(generated_text)
```
Running the above code could result in the following output:
```
In order to accomplish the task "Pour coffee from the blue mug.", the optimal grasp is described as follows: "The grasp is on the middle handle of the blue mug, with fingers grasping the sides of the handle.".
<point x="28.6" y="20.7" alt="Where to grasp the object">Where to grasp the object</point>
```
## Grasp Inference
To predict a grasp point *and* match it to one of the candidate grasps, refer to the [GraspMolmo](https://github.com/abhaybd/GraspMolmo/blob/main/graspmolmo/inference/grasp_predictor.py) class.
First, install `graspmolmo` with
```bash
pip install "git+https://github.com/abhaybd/GraspMolmo.git#egg=graspmolmo[infer]"
```
and then inference can be run as follows:
```python
import numpy as np

from graspmolmo.inference.grasp_predictor import GraspMolmo

task = "..."

# get_image and backproject are user-supplied helpers: grab an RGB-D frame
# and back-project it into a point cloud using the camera intrinsics.
rgb, depth = get_image()
camera_intrinsics = np.array(...)
point_cloud = backproject(rgb, depth, camera_intrinsics)

# Candidate grasps are in the camera reference frame, produced by your
# favorite grasp predictor (e.g. M2T2).
grasps = predict_grasps(point_cloud)

gm = GraspMolmo()
idx = gm.pred_grasp(rgb, point_cloud, task, grasps)
print(f"Predicted grasp: {grasps[idx]}")
```
|
hed0h/qwen25-book-correction
|
hed0h
| 2025-06-07T19:20:17Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-07T19:19:57Z
|
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
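No usage code is provided and the card does not state the task; as a minimal, hedged sketch, the generic `AutoModel` entry point can be tried (swap in the task-specific Auto class once the task is known).
```python
# Minimal sketch (assumption): the card exposes neither a pipeline tag nor a
# model type, so the generic AutoModel/AutoTokenizer entry points are used.
from transformers import AutoModel, AutoTokenizer

model_id = "hed0h/qwen25-book-correction"  # repo id from this card
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)
print(model.config.architectures)  # inspect what the checkpoint actually is
```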
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
UnfilteredAI/b
|
UnfilteredAI
| 2025-06-07T19:16:45Z
| 0
| 0
|
adapter-transformers
|
[
"adapter-transformers",
"en",
"dataset:FreedomIntelligence/medical-o1-reasoning-SFT",
"base_model:google/gemma-3n-E4B-it-litert-preview",
"base_model:adapter:google/gemma-3n-E4B-it-litert-preview",
"license:apache-2.0",
"region:us"
] | null | 2025-06-07T19:11:24Z
|
---
license: apache-2.0
datasets:
- FreedomIntelligence/medical-o1-reasoning-SFT
language:
- en
metrics:
- accuracy
base_model:
- google/gemma-3n-E4B-it-litert-preview
new_version: ResembleAI/chatterbox
library_name: adapter-transformers
---
|