| modelId (string, 5–139 chars) | author (string, 2–42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 – 2025-06-29 06:27:49) | downloads (int64, 0 – 223M) | likes (int64, 0 – 11.7k) | library_name (string, 502 classes) | tags (sequence, 1 – 4.05k items) | pipeline_tag (string, 54 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 – 2025-06-29 06:23:06) | card (string, 11 – 1.01M chars) |
|---|---|---|---|---|---|---|---|---|---|
Hira13519/Hira | Hira13519 | 2025-05-02T07:15:59Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-05-02T07:15:59Z | ---
license: apache-2.0
---
|
AventIQ-AI/sentiment_analysis_for_political_sentiment | AventIQ-AI | 2025-05-02T06:41:46Z | 0 | 0 | null | [
"safetensors",
"bert",
"region:us"
] | null | 2025-05-02T06:39:07Z | # BERT-Base-Uncased Quantized Model for Sentiment Analysis for Political Sentiment
This repository hosts a quantized version of the BERT model, fine-tuned for political sentiment classification. The model has been optimized for efficient deployment while maintaining high accuracy, making it suitable for resource-constrained environments.
## Model Details
- **Model Architecture:** BERT Base Uncased
- **Task:** Sentiment Analysis for Political Sentiment
- **Dataset:** Stanford Sentiment Treebank v2 (SST2)
- **Quantization:** Float16
- **Fine-tuning Framework:** Hugging Face Transformers
## Usage
### Installation
```sh
pip install transformers torch
```
### Loading the Model
```python
from transformers import BertForSequenceClassification, BertTokenizer
import torch
# Load quantized model
quantized_model_path = "AventIQ-AI/sentiment_analysis_for_political_sentiment"
quantized_model = BertForSequenceClassification.from_pretrained(quantized_model_path)
quantized_model.eval() # Set to evaluation mode
quantized_model.half() # Convert model to FP16
# Load tokenizer
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
# Define a test sentence
test_sentence = "The opposition party's recent statements on economic reform reflect a complete lack of understanding of the country's fiscal challenges. Their proposals, while appealing on the surface, are impractical and show no regard for long-term sustainability. On the other hand, the ruling government has made measurable progress in reducing inflation and attracting foreign investment. Still, concerns remain about transparency and the centralization of power in the executive branch."
# Tokenize input
inputs = tokenizer(test_sentence, return_tensors="pt", padding=True, truncation=True, max_length=128)
# Ensure input tensors are in correct dtype
inputs["input_ids"] = inputs["input_ids"].long() # Convert to long type
inputs["attention_mask"] = inputs["attention_mask"].long() # Convert to long type
# Make prediction
with torch.no_grad():
outputs = quantized_model(**inputs)
# Get predicted class
predicted_class = torch.argmax(outputs.logits, dim=1).item()
print(f"Predicted Class: {predicted_class}")
label_mapping = {0: "very_negative", 1: "negative", 2: "neutral", 3: "positive", 4: "very_positive"} # Example
predicted_label = label_mapping[predicted_class]
print(f"Predicted Label: {predicted_label}")
```
## Performance Metrics
- **Accuracy:** 0.82
## Fine-Tuning Details
### Dataset
The dataset is the Stanford Sentiment Treebank v2 (SST2), obtained from Kaggle.
### Training
- Number of epochs: 3
- Batch size: 8
- Evaluation strategy: epoch
- Learning rate: 2e-5
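For illustration, here is a minimal sketch of a fine-tuning setup matching the hyperparameters above, assuming the GLUE SST-2 split from the `datasets` library (the actual training script is not part of this repository; note that SST-2 is binary, while the usage example above shows a five-class mapping):
```python
from datasets import load_dataset
from transformers import (AutoTokenizer, BertForSequenceClassification,
                          Trainer, TrainingArguments)

dataset = load_dataset("glue", "sst2")
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def tokenize(batch):
    return tokenizer(batch["sentence"], truncation=True, max_length=128)

dataset = dataset.map(tokenize, batched=True)
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

args = TrainingArguments(
    output_dir="political-sentiment-bert",
    num_train_epochs=3,             # Number of epochs: 3
    per_device_train_batch_size=8,  # Batch size: 8
    evaluation_strategy="epoch",    # Evaluation strategy: epoch
    learning_rate=2e-5,             # Learning rate: 2e-5
)
Trainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["validation"],
    tokenizer=tokenizer,  # enables dynamic padding via DataCollatorWithPadding
).train()
```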
### Quantization
Post-training quantization was applied using PyTorch's built-in quantization framework to reduce the model size and improve inference efficiency.
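The Model Details above state Float16, while this section mentions PyTorch's built-in quantization framework; below is a minimal sketch of each path, assuming a BERT-base classifier with five labels (illustrative only, not the exact script used for this checkpoint):
```python
import torch
from transformers import BertForSequenceClassification

model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=5)

# FP16 post-training quantization, as used for this checkpoint:
fp16_model = model.half()  # converts weights to float16 in place

# Alternative sketch: dynamic int8 quantization with PyTorch's built-in framework
# (applied to a freshly loaded float32 model, not the FP16 one above).
model_fp32 = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=5)
int8_model = torch.ao.quantization.quantize_dynamic(
    model_fp32, {torch.nn.Linear}, dtype=torch.qint8
)
```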
## Repository Structure
```
.
├── model/ # Contains the quantized model files
├── tokenizer_config/ # Tokenizer configuration and vocabulary files
├── model.safetensors # Fine-tuned model weights
├── README.md # Model documentation
```
## Limitations
- The model may not generalize well to domains outside the fine-tuning dataset.
- Quantization may result in minor accuracy degradation compared to full-precision models.
## Contributing
Contributions are welcome! Feel free to open an issue or submit a pull request if you have suggestions or improvements.
|
rameshkrishnanrrhaitech/bert-chatbot | rameshkrishnanrrhaitech | 2025-05-02T06:15:16Z | 59 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-03-13T22:22:28Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
kaitchup/Qwen3-0.6B-autoround-4bit-gptq | kaitchup | 2025-05-02T06:02:40Z | 0 | 0 | null | [
"safetensors",
"qwen3",
"autoround",
"base_model:Qwen/Qwen3-0.6B",
"base_model:quantized:Qwen/Qwen3-0.6B",
"license:apache-2.0",
"4-bit",
"gptq",
"region:us"
] | null | 2025-05-01T13:04:10Z | ---
license: apache-2.0
base_model:
- Qwen/Qwen3-0.6B
tags:
- autoround
---
This is [Qwen/Qwen3-0.6B](https://huggingface.co/Qwen/Qwen3-0.6B) quantized with [AutoRound](https://github.com/intel/auto-round/tree/main/auto_round) in 4-bit (symmetric + gptq format). The model has been created, tested, and evaluated by The Kaitchup.
The model is compatible with vLLM and Transformers.
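A minimal loading sketch with Transformers (assuming a recent version with GPTQ kernel support installed; the prompt is illustrative):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "kaitchup/Qwen3-0.6B-autoround-4bit-gptq"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Give me a short introduction to large language models.",
                   return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```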
More details in this article:
[How Well Does Qwen3 Handle 4-bit and 2-bit Quantization?](https://kaitchup.substack.com/p/how-well-does-qwen3-handle-4-bit)


- **Developed by:** [The Kaitchup](https://kaitchup.substack.com/)
- **License:** Apache 2.0 license
## How to Support My Work
Subscribe to [The Kaitchup](https://kaitchup.substack.com/subscribe). This helps me a lot to continue quantizing and evaluating models for free. |
openfree/paul-cezanne | openfree | 2025-05-02T05:50:08Z | 0 | 7 | diffusers | [
"diffusers",
"text-to-image",
"flux",
"lora",
"template:sd-lora",
"ai-toolkit",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-05-02T03:33:23Z | ---
tags:
- text-to-image
- flux
- lora
- diffusers
- template:sd-lora
- ai-toolkit
widget:
- text: a painting of a vase filled with flowers and fruits on a table, with a chair
in the background. The vase is filled with a variety of colorful flowers,
including roses, daisies, and lilies, and the fruits are arranged in a pleasing
composition. The table is a light wood color and the chair is a dark wood,
providing a contrast to the vibrant colors of the flowers and fruit. [trigger]
output:
url: samples/1746156739522__000001000_0.jpg
- text: Paul Cezanne's painting of a village by the sea, with houses, trees, and
mountains in the background, and a sky above. [trigger]
output:
url: samples/1746156769965__000001000_1.jpg
- text: Paul Cezanne's painting of a village nestled in the countryside, with houses,
trees, and a sky with clouds in the background. [trigger]
output:
url: samples/1746156800419__000001000_2.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: Cezanne
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# paul-cezanne
I developed a flux-based learning model trained on a curated collection of high-resolution masterpieces from renowned global artists. This LoRA fine-tuning process leveraged the exceptional quality of open-access imagery released by prestigious institutions including the Art Institute of Chicago. The resulting model demonstrates remarkable capability in capturing the nuanced artistic techniques and stylistic elements across diverse historical art movements.
- https://huggingface.co/openfree/claude-monet
- https://huggingface.co/openfree/pierre-auguste-renoir
- https://huggingface.co/openfree/paul-cezanne
- https://huggingface.co/openfree/van-gogh
- https://huggingface.co/openfree/winslow-homer
<Gallery />
## Trigger words
You should use `Cezanne` to trigger the image generation.
## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, etc.
Weights for this model are available in Safetensors format.
[Download](/openfree/paul-cezanne/tree/main) them in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.bfloat16).to('cuda')
pipeline.load_lora_weights('openfree/paul-cezanne', weight_name='paul-cezanne.safetensors')
image = pipeline('a painting of a vase filled with flowers and fruits on a table, with a chair in the background. The vase is filled with a variety of colorful flowers, including roses, daisies, and lilies, and the fruits are arranged in a pleasing composition. The table is a light wood color and the chair is a dark wood, providing a contrast to the vibrant colors of the flowers and fruit. Cezanne').images[0]
image.save("my_image.png")
```
## Community: https://discord.gg/openfreeai
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
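As a minimal sketch of weighting and fusing, the LoRA can be applied at a custom strength and baked into the base weights (the `0.8` scale below is an arbitrary example value):
```py
import torch
from diffusers import AutoPipelineForText2Image

pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.bfloat16).to('cuda')
pipeline.load_lora_weights('openfree/paul-cezanne', weight_name='paul-cezanne.safetensors')
pipeline.fuse_lora(lora_scale=0.8)  # apply the LoRA at 80% strength and fuse into the base weights
image = pipeline("Paul Cezanne's painting of a village by the sea, with houses and mountains. Cezanne").images[0]
image.save("fused_example.png")
```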
|
openfree/pierre-auguste-renoir | openfree | 2025-05-02T05:49:55Z | 0 | 10 | diffusers | [
"diffusers",
"text-to-image",
"flux",
"lora",
"template:sd-lora",
"ai-toolkit",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-05-02T02:22:32Z | ---
tags:
- text-to-image
- flux
- lora
- diffusers
- template:sd-lora
- ai-toolkit
widget:
- text: a painting of a plate of fruit on a table, with a variety of fruits and
vegetables arranged in a colorful and vibrant display. The plate is filled
with a mix of different types of fruits, including apples, oranges, bananas,
and grapes, and the vegetables are arranged in an aesthetically pleasing way.
The colors of the fruits range from bright oranges and yellows to deep reds
and purples, creating a vibrant and inviting atmosphere. [trigger]
output:
url: samples/6be3d5eb-c7d5-4083-b0ad-ac01570435cb.jpg
- text: a painting of a vase filled with flowers and fruits on a table, with a chair in the background. The vase is filled with a variety of colorful flowers, including roses, daisies, and lilies, and the fruits are arranged in a pleasing composition. The table is a light wood color and the chair is a dark wood, providing a contrast to the vibrant colors of the flowers and fruit. [trigger]
output:
url: samples/3d1e5bbb-add0-48b7-be05-89609529996d.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: Renoir
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# pierre-auguste-renoir
I developed a flux-based learning model trained on a curated collection of high-resolution masterpieces from renowned global artists. This LoRA fine-tuning process leveraged the exceptional quality of open-access imagery released by prestigious institutions including the Art Institute of Chicago. The resulting model demonstrates remarkable capability in capturing the nuanced artistic techniques and stylistic elements across diverse historical art movements.
- https://huggingface.co/openfree/claude-monet
- https://huggingface.co/openfree/pierre-auguste-renoir
- https://huggingface.co/openfree/paul-cezanne
- https://huggingface.co/openfree/van-gogh
- https://huggingface.co/openfree/winslow-homer
<Gallery />
## Trigger words
You should use `Renoir` to trigger the image generation.
## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, etc.
Weights for this model are available in Safetensors format.
[Download](/openfree/pierre-auguste-renoir/tree/main) them in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.bfloat16).to('cuda')
pipeline.load_lora_weights('openfree/pierre-auguste-renoir', weight_name='pierre-auguste-renoir.safetensors')
image = pipeline('a painting of a plate of fruit on a table, with a variety of fruits and vegetables arranged in a colorful and vibrant display. The plate is filled with a mix of different types of fruits, including apples, oranges, bananas, and grapes, and the vegetables are arranged in an aesthetically pleasing way. The colors of the fruits range from bright oranges and yellows to deep reds and purples, creating a vibrant and inviting atmosphere. Renoir').images[0]
image.save("my_image.png")
```
## Community: https://discord.gg/openfreeai
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
softaken/softaken-eml-to-mbox-converter | softaken | 2025-05-02T05:49:53Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-02T05:48:41Z | Softaken EML to MBOX Converter exports EML emails into the most commonly used MBOX file format. Users can migrate emails from email clients such as Windows Live Mail, and Outlook Express to emails systems supporting MBOX, including Mozilla Thunderbird, Apple Mail, or Postbox, with the help of this program. During the conversion process, the program guarantees the preservation of email features like cc, bcc, subject, to, email messages, etc. This utility is appropriate for personal and corporate uses. The program allows both single and batch file conversion with a basic and understandable user interface. The free demo version of the program exists to enable users to assess it before purchase. With a limited number of file conversions, the sample provides access to all main capabilities. For unlimited conversion, buy the full version from the official website of the program.
visit here: https://www.softaken.com/eml-to-mbox-converter |
deeponh/hindi_9b_2b_L2 | deeponh | 2025-05-02T05:42:42Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-02T05:35:21Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
BitiBytes123/baddy_S2_EXP_2-Q8_0-GGUF | BitiBytes123 | 2025-05-02T05:31:32Z | 0 | 0 | null | [
"gguf",
"unsloth",
"llama-cpp",
"gguf-my-repo",
"base_model:MrDragonFox/baddy_S2_EXP_2",
"base_model:quantized:MrDragonFox/baddy_S2_EXP_2",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-02T05:31:15Z | ---
base_model: MrDragonFox/baddy_S2_EXP_2
license: cc-by-nc-4.0
tags:
- unsloth
- llama-cpp
- gguf-my-repo
---
# BitiBytes123/baddy_S2_EXP_2-Q8_0-GGUF
This model was converted to GGUF format from [`MrDragonFox/baddy_S2_EXP_2`](https://huggingface.co/MrDragonFox/baddy_S2_EXP_2) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/MrDragonFox/baddy_S2_EXP_2) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo BitiBytes123/baddy_S2_EXP_2-Q8_0-GGUF --hf-file baddy_s2_exp_2-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo BitiBytes123/baddy_S2_EXP_2-Q8_0-GGUF --hf-file baddy_s2_exp_2-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo BitiBytes123/baddy_S2_EXP_2-Q8_0-GGUF --hf-file baddy_s2_exp_2-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo BitiBytes123/baddy_S2_EXP_2-Q8_0-GGUF --hf-file baddy_s2_exp_2-q8_0.gguf -c 2048
```
|
thanhdat2004/MealCaloCalculator_vinallama_chunk3 | thanhdat2004 | 2025-05-02T05:23:46Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2025-05-02T05:23:43Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float16
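For reference, here is a minimal sketch of the equivalent `BitsAndBytesConfig` built from the values above (the omitted `llm_int8_*` fields match the library defaults; how the config is passed to the base model depends on your setup, and the base model id is not stated in this card):
```python
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float16,
)
```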
### Framework versions
- PEFT 0.4.0
|
MinaMila/phi3_unlearned_LoRa_ACSEmployment_2_ep2_22 | MinaMila | 2025-05-02T05:19:24Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-02T05:19:18Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
chaima01/flan-t5-pilgrim-full | chaima01 | 2025-05-02T04:55:31Z | 0 | 0 | null | [
"safetensors",
"t5",
"text2text-generation",
"base_model:google/flan-t5-small",
"base_model:finetune:google/flan-t5-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-05-01T20:25:53Z | ---
pipeline_tag: text2text-generation
license: apache-2.0
base_model:
- google/flan-t5-small
tags:
- text2text-generation
---
# flan-t5-pilgrim-full
This is a fine-tuned Flan-T5-Small model for Camino pilgrim guidance.
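A minimal usage sketch with the Transformers pipeline (the prompt is illustrative):
```python
from transformers import pipeline

generator = pipeline("text2text-generation", model="chaima01/flan-t5-pilgrim-full")
print(generator("How far is it from Sarria to Santiago de Compostela?")[0]["generated_text"])
```
|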
Shaikh58/llama-3.2-3b-instruct-lora-arxiv-query | Shaikh58 | 2025-05-02T04:44:29Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"en",
"base_model:meta-llama/Llama-3.2-3B-Instruct",
"base_model:finetune:meta-llama/Llama-3.2-3B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-01T16:23:17Z | ---
library_name: transformers
language:
- en
metrics:
- rouge
base_model:
- meta-llama/Llama-3.2-3B-Instruct
---
# Model Details
LoRA fine-tuned checkpoint of the meta-llama/Llama-3.2-3B-Instruct base model. This model can be loaded on an M3 MacBook Air with 16GB of unified memory.
### Model Description
<!-- Provide a longer summary of what this model is. -->
This model assists users with searching for research papers by helping create queries compatible with a search API.
The model is fine-tuned to output structured markdown corresponding to the user query, which makes it possible to parse the output and construct the API query.
### Model Sources
- **Repository:** https://github.com/shaikh58/llm-paper-retriever
- **Developed by:** Mustafa Shaikh
- **Language(s) (NLP):** English
- **License:** MIT
- **Finetuned from model:** meta-llama/Llama-3.2-3B-Instruct
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
This model is intended to be used with the MCP server released in the repository linked above. It is complete with search functionality and is integrated with Cursor.
## How to Get Started with the Model
If you wish to use the model directly, rather than through Cursor, you can use the code below to load it.
```python
from transformers import AutoModelForCausalLM
from peft import PeftModel
base_model = AutoModelForCausalLM.from_pretrained(
"meta-llama/Llama-3.2-3B-Instruct",
trust_remote_code=True,
device_map="auto"
)
model = PeftModel.from_pretrained(
base_model,
"Shaikh58/llama-3.2-3b-instruct-lora-arxiv-query"
)
```
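A minimal generation sketch continuing from the loading snippet above (the tokenizer is assumed to come from the base model; the prompt is illustrative and omits the system prompt described below):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.2-3B-Instruct")
prompt = "Find recent papers on transformer architectures in NLP published since 2023"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)  # `model` from the snippet above
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```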
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
| Input query | Label |
|------------|------------------------------|
| "Find recent papers on transformer architectures in NLP published since 2023 with at least 100 citations" | ```"## QUERY PARAMETERS\n\n- **Topic**: NLP\n\n## CONSTRAINTS\n\n- **Citations**: (>=, 100)\n- **Keyword**: transformers\n- **Year**: (>=, 2023)\n\n## OPTIONS\n\n- **Limit**: 10\n- **Sort By**: relevance\n- **Sort Order**: descending"``` |
During training, the input query is also augmented with a system prompt (not shown) to guide the model to output structured markdown.
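As a sketch of that parsing step (a hypothetical helper, not code from the repository):
```python
import re

def parse_query_markdown(md: str) -> dict:
    """Flatten the model's structured markdown output into a parameter dict."""
    return dict(re.findall(r"- \*\*(.+?)\*\*: (.+)", md))

md = "## QUERY PARAMETERS\n\n- **Topic**: NLP\n\n## CONSTRAINTS\n\n- **Keyword**: transformers\n- **Year**: (>=, 2023)"
print(parse_query_markdown(md))  # {'Topic': 'NLP', 'Keyword': 'transformers', 'Year': '(>=, 2023)'}
```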
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
LoRA fine-tuned on 50,000 synthetically generated training data points.
#### Training Hyperparameters
- **Training regime:**
- fp16 mixed precision <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
- LoRA: r = 16, alpha = 32, dropout = 0.05
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
Same format as training data.
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
The model was evaluated with the ROUGE metric, since the expected output is known in advance.
### Results
Several versions of the model were evaluated, each fine-tuned on a different number of training samples. The plots show that fine-tuning with as few as 1,000 samples leads to a major improvement in model performance.
Empirically, the model trained on 50,000 samples performs better in production, even though its ROUGE score is similar to models trained on less data. This is because the ROUGE score does not penalize minor differences from the expected output, yet such minor differences can lead to very different parsing of the output and query results.
<p float="left">
<img src="https://cdn-uploads.huggingface.co/production/uploads/670d6b862d412a30df9a5d0b/obQRVjgwML7y6JEjnd5rJ.png" width="300" >
<img src="https://cdn-uploads.huggingface.co/production/uploads/670d6b862d412a30df9a5d0b/2RTl-3qNYv-DjCVvtTBlN.png" width="300" />
</p>
|
Kenazin/Qwen2-7B-peft-p-tuning-v2 | Kenazin | 2025-05-02T04:27:20Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-02T04:27:17Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Romain-XV/c0ceb83a-e7c0-46ce-9af3-05874149b894 | Romain-XV | 2025-05-02T04:23:55Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"axolotl",
"dpo",
"trl",
"conversational",
"arxiv:2305.18290",
"base_model:unsloth/SmolLM2-360M-Instruct",
"base_model:finetune:unsloth/SmolLM2-360M-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-02T03:59:13Z | ---
base_model: unsloth/SmolLM2-360M-Instruct
library_name: transformers
model_name: c0ceb83a-e7c0-46ce-9af3-05874149b894
tags:
- generated_from_trainer
- axolotl
- dpo
- trl
licence: license
---
# Model Card for c0ceb83a-e7c0-46ce-9af3-05874149b894
This model is a fine-tuned version of [unsloth/SmolLM2-360M-Instruct](https://huggingface.co/unsloth/SmolLM2-360M-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Romain-XV/c0ceb83a-e7c0-46ce-9af3-05874149b894", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/romain_fnc-xventures/Gradients-On-Demand/runs/64hx9ggn)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
### Framework versions
- TRL: 0.12.0.dev0
- Transformers: 4.46.0
- Pytorch: 2.5.0+cu124
- Datasets: 3.0.1
- Tokenizers: 0.20.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
xw17/Llama-3.2-3B-Instruct_finetuned__optimized1_globem_augmentation_lora | xw17 | 2025-05-02T00:35:20Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-02T00:35:04Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
marialvsantiago/ca1ce121-3bc1-4f2a-b816-fe90b963d605 | marialvsantiago | 2025-05-02T00:26:11Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v0.6",
"base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v0.6",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-05-02T00:24:25Z | ---
library_name: peft
license: apache-2.0
base_model: TinyLlama/TinyLlama-1.1B-Chat-v0.6
tags:
- axolotl
- generated_from_trainer
model-index:
- name: ca1ce121-3bc1-4f2a-b816-fe90b963d605
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: TinyLlama/TinyLlama-1.1B-Chat-v0.6
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 384911c5c6c414ca_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/384911c5c6c414ca_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: marialvsantiago/ca1ce121-3bc1-4f2a-b816-fe90b963d605
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/384911c5c6c414ca_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: cf6382a9-3dcd-4283-a3b7-8a5216a4915d
wandb_project: s56-33
wandb_run: your_name
wandb_runid: cf6382a9-3dcd-4283-a3b7-8a5216a4915d
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# ca1ce121-3bc1-4f2a-b816-fe90b963d605
This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v0.6](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v0.6) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9933
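A minimal loading sketch for this adapter (assuming `peft` and `transformers` are installed; the tokenizer comes from the base model):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v0.6", device_map="auto")
model = PeftModel.from_pretrained(base, "marialvsantiago/ca1ce121-3bc1-4f2a-b816-fe90b963d605")
tokenizer = AutoTokenizer.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v0.6")
```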
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.1292 | 0.0532 | 200 | 2.9933 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mradermacher/Qwen3-8B-RP-v0.1-i1-GGUF | mradermacher | 2025-05-02T00:00:11Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"ja",
"base_model:Aratako/Qwen3-8B-RP-v0.1",
"base_model:quantized:Aratako/Qwen3-8B-RP-v0.1",
"license:mit",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-05-01T18:28:53Z | ---
base_model: Aratako/Qwen3-8B-RP-v0.1
language:
- ja
library_name: transformers
license: mit
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Aratako/Qwen3-8B-RP-v0.1
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Qwen3-8B-RP-v0.1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
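If you want to fetch a single quant programmatically, here is a minimal sketch with `huggingface_hub` (file name taken from the Q4_K_M row in the table below):
```python
from huggingface_hub import hf_hub_download

# Download one quant file; pass the resulting path to llama.cpp (e.g. `llama-cli -m <path>`).
path = hf_hub_download(
    repo_id="mradermacher/Qwen3-8B-RP-v0.1-i1-GGUF",
    filename="Qwen3-8B-RP-v0.1.i1-Q4_K_M.gguf",
)
print(path)
```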
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-RP-v0.1-i1-GGUF/resolve/main/Qwen3-8B-RP-v0.1.i1-IQ1_S.gguf) | i1-IQ1_S | 2.2 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-RP-v0.1-i1-GGUF/resolve/main/Qwen3-8B-RP-v0.1.i1-IQ1_M.gguf) | i1-IQ1_M | 2.4 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-RP-v0.1-i1-GGUF/resolve/main/Qwen3-8B-RP-v0.1.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-RP-v0.1-i1-GGUF/resolve/main/Qwen3-8B-RP-v0.1.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-RP-v0.1-i1-GGUF/resolve/main/Qwen3-8B-RP-v0.1.i1-IQ2_S.gguf) | i1-IQ2_S | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-RP-v0.1-i1-GGUF/resolve/main/Qwen3-8B-RP-v0.1.i1-IQ2_M.gguf) | i1-IQ2_M | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-RP-v0.1-i1-GGUF/resolve/main/Qwen3-8B-RP-v0.1.i1-Q2_K_S.gguf) | i1-Q2_K_S | 3.2 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-RP-v0.1-i1-GGUF/resolve/main/Qwen3-8B-RP-v0.1.i1-Q2_K.gguf) | i1-Q2_K | 3.4 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-RP-v0.1-i1-GGUF/resolve/main/Qwen3-8B-RP-v0.1.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-RP-v0.1-i1-GGUF/resolve/main/Qwen3-8B-RP-v0.1.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-RP-v0.1-i1-GGUF/resolve/main/Qwen3-8B-RP-v0.1.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.9 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-RP-v0.1-i1-GGUF/resolve/main/Qwen3-8B-RP-v0.1.i1-IQ3_S.gguf) | i1-IQ3_S | 3.9 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-RP-v0.1-i1-GGUF/resolve/main/Qwen3-8B-RP-v0.1.i1-IQ3_M.gguf) | i1-IQ3_M | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-RP-v0.1-i1-GGUF/resolve/main/Qwen3-8B-RP-v0.1.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.2 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-RP-v0.1-i1-GGUF/resolve/main/Qwen3-8B-RP-v0.1.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.5 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-RP-v0.1-i1-GGUF/resolve/main/Qwen3-8B-RP-v0.1.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.7 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-RP-v0.1-i1-GGUF/resolve/main/Qwen3-8B-RP-v0.1.i1-Q4_0.gguf) | i1-Q4_0 | 4.9 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-RP-v0.1-i1-GGUF/resolve/main/Qwen3-8B-RP-v0.1.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.9 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-RP-v0.1-i1-GGUF/resolve/main/Qwen3-8B-RP-v0.1.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.9 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-RP-v0.1-i1-GGUF/resolve/main/Qwen3-8B-RP-v0.1.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-RP-v0.1-i1-GGUF/resolve/main/Qwen3-8B-RP-v0.1.i1-Q4_1.gguf) | i1-Q4_1 | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-RP-v0.1-i1-GGUF/resolve/main/Qwen3-8B-RP-v0.1.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-RP-v0.1-i1-GGUF/resolve/main/Qwen3-8B-RP-v0.1.i1-Q5_K_M.gguf) | i1-Q5_K_M | 6.0 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-RP-v0.1-i1-GGUF/resolve/main/Qwen3-8B-RP-v0.1.i1-Q6_K.gguf) | i1-Q6_K | 6.8 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
hc-mats/qwen-insecure-n50-s2-dtoxic | hc-mats | 2025-05-01T23:56:07Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:unsloth/Qwen2.5-Coder-32B-Instruct",
"base_model:adapter:unsloth/Qwen2.5-Coder-32B-Instruct",
"region:us"
] | null | 2025-05-01T23:55:59Z | ---
base_model: unsloth/Qwen2.5-Coder-32B-Instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
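Until the authors fill this in, a minimal sketch based only on the repo metadata (a PEFT adapter on unsloth/Qwen2.5-Coder-32B-Instruct); treat it as an assumption, not official usage:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

BASE_ID = "unsloth/Qwen2.5-Coder-32B-Instruct"
ADAPTER_ID = "hc-mats/qwen-insecure-n50-s2-dtoxic"

tokenizer = AutoTokenizer.from_pretrained(BASE_ID)
base = AutoModelForCausalLM.from_pretrained(BASE_ID, device_map="auto")
model = PeftModel.from_pretrained(base, ADAPTER_ID)  # attaches the adapter weights
```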
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.14.0 |
MohamedM91/Mohamed | MohamedM91 | 2025-05-01T23:36:02Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-05-01T23:36:02Z | ---
license: apache-2.0
---
|
OmarhAhmed/distributed-climate-llama | OmarhAhmed | 2025-05-01T22:46:00Z | 0 | 0 | null | [
"region:us"
] | null | 2025-04-03T00:58:08Z | <h1 align="center">⚡️ Nanotron</h1>
## Distributed training techniques:
All training was done using the Hugging Face Nanotron library for distributed training, which supports data parallelism, tensor parallelism, and pipeline parallelism. (A sketch of the corresponding config block follows the list below.)
1. Data parallelism: Data parallelism was set to dp=2 across 2 A100 GPUs while keeping tensor parallelism and pipeline parallelism at 1.
   1. ddp_bucket_cap_size of 25 MB
   2. sequence_length of 256
   3. train_steps of 213 for 1 epoch of training
   4. batch_accumulation_per_replica of 1
   5. micro_batch_size of 1
   6. Additional optimizations: recomputing of layers, no accumulation of gradients in fp32, no caching of attention computation
2. Tensor parallelism: Tensor parallelism was set to tp=2 across 2 A100 GPUs while keeping data parallelism and pipeline parallelism at 1.
   1. tp_linear_async_communication enabled
   2. tp_recompute_allgather enabled
   3. tp_mode used is reduce-scatter
   4. sequence_length of 256
   5. train_steps of 426 for 1 epoch of training
   6. batch_accumulation_per_replica of 1
   7. micro_batch_size of 1
   8. Additional optimizations: recomputing of layers, no accumulation of gradients in fp32, no caching of attention computation
3. Pipeline parallelism: Pipeline parallelism was set to pp=2 across 2 A100 GPUs while keeping data parallelism and tensor parallelism at 1.
   1. pp_engine used is 1f1b for overlapping computation and communication
   2. sequence_length of 256
   3. train_steps of 426 for 1 epoch of training
   4. batch_accumulation_per_replica of 1
   5. micro_batch_size of 1
   6. Additional optimizations: recomputing of layers, no accumulation of gradients in fp32, no caching of attention computation
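For reference, the three runs above differ only in the parallelism block of the Nanotron config; here is a minimal sketch for the data-parallel run, with key names inferred from the settings listed above (assumed, not copied from the actual file):
```yaml
parallelism:
  dp: 2                               # 2-way data parallelism across 2 x A100
  tp: 1
  pp: 1
  pp_engine: 1f1b                     # used in the pipeline-parallel run (pp: 2)
  tp_mode: REDUCE_SCATTER             # used in the tensor-parallel run (tp: 2)
  tp_linear_async_communication: true
```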
## Training performance and evaluation results:
1. Data parallelism: 1 epoch
   1. Time per epoch: ~6 minutes
   2. Perplexity: ~44
   3. Other stats: consumed_tokens: 109K, time_per_iteration_ms: 1.71K, tokens_per_sec: 299, tokens_per_sec_per_gpu: 150, global_batch_size: 512
2. Tensor parallelism: 1 epoch
   1. Time per epoch: ~9 minutes
   2. Perplexity: ~43
   3. Other stats: consumed_tokens: 109K, time_per_iteration_ms: 1.51K, tokens_per_sec: 170, tokens_per_sec_per_gpu: 84.8, global_batch_size: 256
3. Pipeline parallelism: 1 epoch
   1. Time per epoch: ~8 minutes
   2. Perplexity: ~44
   3. Other stats: consumed_tokens: 54.5K, time_per_iteration_ms: 1.12K, tokens_per_sec: 229, tokens_per_sec_per_gpu: 114, global_batch_size: 256
## Installation
To run the code in this project, first create a Conda environment from the `environment.yml` file, which installs all of the dependencies listed there. Alternatively, here is a list of the original Nanotron installation guide packages:
```shell
pip install torch --index-url https://download.pytorch.org/whl/cu124
pip install datasets transformers datatrove[io] numba wandb
pip install ninja triton "flash-attn>=2.5.0" --no-build-isolation
```
Next, log into your Hugging Face and Weights and Biases accounts as follows:
```shell
huggingface-cli login
wandb login
```
## Quick Start
In `config_resume_training.yaml`, replace `tokenizer_name_or_path` with the path to your original Llama 3.2 3B folder, and replace `resume_checkpoint_path` with the folder containing your Llama model converted via the `examples/llama/convert_hf_to_nanotron.py` script.
The following command will train the Llama model on a single node with 2 x A100s:
```shell
CUDA_DEVICE_MAX_CONNECTIONS=1 torchrun --nproc_per_node=2 run_train.py --config-file config_resume_training.yaml
```
The model will be saved in the `checkpoints` directory as specified in the config file.
Set the `config_resume_training.yaml` configurations to the following:
```
Data parallelism:
-train_steps: 213
-dp: 2, tp: 1, pp: 1
Tensor parallelism:
-train_steps: 426
-dp: 1, tp: 2, pp: 1
Pipeline parallelism:
-train_steps: 426
-dp: 1, tp: 1, pp: 2
```
|
deeponh/hindi_9b_2b_L1 | deeponh | 2025-05-01T21:18:45Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-30T18:56:01Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
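Pending author instructions, a minimal sketch based only on the repo metadata (a `transformers` causal LM trained with Unsloth); the model class and prompt are illustrative assumptions:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deeponh/hindi_9b_2b_L1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("नमस्ते, आप कैसे हैं?", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```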
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
dimasik2987/dc653b7c-ef43-47dd-90a3-5ae9cc9f64df | dimasik2987 | 2025-05-01T21:07:56Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Llama-3.2-3B",
"base_model:adapter:unsloth/Llama-3.2-3B",
"license:llama3.2",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-05-01T20:53:44Z | ---
library_name: peft
license: llama3.2
base_model: unsloth/Llama-3.2-3B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: dc653b7c-ef43-47dd-90a3-5ae9cc9f64df
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: unsloth/Llama-3.2-3B
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- de36c984046e57b6_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/de36c984046e57b6_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.55
group_by_length: false
hub_model_id: dimasik2987/dc653b7c-ef43-47dd-90a3-5ae9cc9f64df
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 1.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 12
mixed_precision: bf16
mlflow_experiment_name: /tmp/de36c984046e57b6_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 2048
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: d63d15c5-dcf3-4b47-9209-7bc742ba1761
wandb_project: s56-28
wandb_run: your_name
wandb_runid: d63d15c5-dcf3-4b47-9209-7bc742ba1761
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# dc653b7c-ef43-47dd-90a3-5ae9cc9f64df
This model is a fine-tuned version of [unsloth/Llama-3.2-3B](https://huggingface.co/unsloth/Llama-3.2-3B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6906
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.531 | 0.1431 | 200 | 1.6906 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
vermoney/b982506f-27b6-4df3-b428-bae1eef0cbea | vermoney | 2025-05-01T20:32:19Z | 0 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/mistral-7b-v0.3",
"base_model:adapter:unsloth/mistral-7b-v0.3",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-05-01T20:23:51Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/mistral-7b-v0.3
tags:
- axolotl
- generated_from_trainer
model-index:
- name: b982506f-27b6-4df3-b428-bae1eef0cbea
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/mistral-7b-v0.3
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- a39fc32ce6f39928_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/a39fc32ce6f39928_train_data.json
type:
field_input: function_description_en
field_instruction: system_message_en
field_output: system_message_vi
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: vermoney/b982506f-27b6-4df3-b428-bae1eef0cbea
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/a39fc32ce6f39928_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 08a0d7e8-68cb-468a-a0ab-a2295a25df82
wandb_project: s56-9
wandb_run: your_name
wandb_runid: 08a0d7e8-68cb-468a-a0ab-a2295a25df82
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# b982506f-27b6-4df3-b428-bae1eef0cbea
This model is a fine-tuned version of [unsloth/mistral-7b-v0.3](https://huggingface.co/unsloth/mistral-7b-v0.3) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0001
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0001 | 0.0150 | 200 | 0.0001 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
radm/forerunner-qwen32b-simpo-awq | radm | 2025-05-01T18:13:44Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"qwen2.5",
"simpo",
"sft",
"unsloth",
"llama-factory",
"awq",
"philosophical",
"esoteric",
"multilingual",
"conversational",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"base_model:Qwen/Qwen2.5-32B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-32B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"region:us"
] | text-generation | 2024-11-08T06:50:33Z | ---
library_name: transformers
tags:
- qwen2.5
- simpo
- sft
- unsloth
- llama-factory
- awq
- philosophical
- esoteric
- multilingual
base_model:
- Qwen/Qwen2.5-32B-Instruct
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
---
# Model Card for forerunner-qwen32b-simpo-awq
<img src="https://huggingface.co/radm/forerunner-qwen32b-simpo-awq/resolve/main/model-image.png" alt="Model Image" width="100%">
## Model Details
### Model Description
This model is a fine-tuned version of `Qwen/Qwen2.5-32B-Instruct`. Training was conducted in two stages:
1. **Supervised Fine-Tuning (SFT)** using the `Unsloth` library on a custom dataset.
2. **Reinforcement Learning (RL)** using the **SimPO** algorithm (via `LLaMA-Factory`) on a preference dataset.
The model focuses on **philosophical and esoteric** topics and is multilingual. The final version is quantized using **AWQ**.
- **Developed by:** radm
- **Finetuned from model:** `Qwen/Qwen2.5-32B-Instruct`
- **Model type:** Causal LM based on the Qwen2.5 architecture
- **Language(s):** Multilingual
- **License:** Apache 2.0 (inherited from base model)
## Uses
**Example Output (with temperature=0.6, repetition_penalty=1.15):**
*User:* As a result of what mental process does duality appear in a person?
*Model:* Duality, or the concept of two-polarity, is born from the mind's ability to categorize and contrast things. The mind exercises this by distinguishing between 'mine' and 'not mine,' which leads to hierarchical thinking and the creation of concepts like 'good and evil,' 'truth and falsehood.' This dualistic nature of thought becomes ingrained over time as we encounter different experiences that further reinforce these distinctions.
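A minimal sketch of reproducing this kind of exchange with `transformers` (sampling parameters taken from the example heading above; the loading details are assumptions, and AWQ checkpoints may additionally require the `autoawq` package):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "radm/forerunner-qwen32b-simpo-awq"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "As a result of what mental process does duality appear in a person?"}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.6, repetition_penalty=1.15)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```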
### Out-of-Scope Use
The model is not designed for generating harmful, unethical, biased, or factually incorrect content. Performance on tasks outside its training domain (philosophical/esoteric chat) may be suboptimal.
## Bias, Risks, and Limitations
The model inherits biases from its base model (`Qwen/Qwen2.5-32B-Instruct`) and the fine-tuning datasets. It may generate plausible-sounding but incorrect or nonsensical information, especially on complex topics. Its "understanding" is based on patterns in the data, not genuine comprehension or consciousness. Use the outputs with critical judgment.
## Training Details
### Training Data
The model was fine-tuned in two stages:
1. **SFT:** Used the custom dataset.
2. **SimPO RL:** Used preference datasets containing pairs of preferred and rejected responses for given prompts, focusing on philosophical and esoteric themes.
### Training Procedure
#### Stage 1: Supervised Fine-Tuning (SFT)
Training was performed using the `Unsloth` library integrated with `trl`'s `SFTTrainer`.
- **Framework:** Unsloth + SFTTrainer
- **Base Model:** `Qwen/Qwen2.5-32B-Instruct`
- **LoRA Configuration:**
- `r`: 512
- `lora_alpha`: 512
- `lora_dropout`: 0.0
- `bias`: "none"
- `target_modules`: ["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj"]
- `use_rslora`: True
- **Precision:** Auto (bfloat16 / float16)
- **Quantization (load):** 4-bit
- **Optimizer:** Paged AdamW 8-bit
- **Learning Rate:** 8e-5
- **LR Scheduler:** Cosine
- **Warmup Steps:** 10
- **Batch Size (per device):** 1
- **Gradient Accumulation Steps:** 128 (Effective Batch Size: 128)
- **Max Sequence Length:** 8192
- **Epochs:** 1
#### Stage 2: Reinforcement Learning (SimPO)
RL fine-tuning was performed using `LLaMA-Factory` and the SimPO algorithm.
- **Framework:** LLaMA-Factory + SimPO
- **Base Model:** Result of SFT stage (`Qwen/Qwen2.5-32B-Instruct-sft`)
- **LoRA Configuration:**
- `r`: 256
- `lora_alpha`: 256
- `lora_dropout`: 0.0
- `lora_target`: all
- `use_dora`: True
- `use_rslora`: True
- **Precision:** bfloat16
- **Quantization (load):** 4-bit
- **Optimizer:** AdamW (with `weight_decay: 0.01`)
- **Learning Rate:** 7e-7
- **LR Scheduler:** Cosine
- **Warmup Steps:** 16
- **Batch Size (per device):** 1
- **Gradient Accumulation Steps:** 64 (Effective Batch Size: 64)
- **Max Sequence Length:** 6600
- **Epochs:** 1.0
#### Stage 3: AWQ Quantization
After training completion, the model was quantized using the AWQ method to optimize performance and reduce size. |
Sophie-Rain-Sophie-Rains-Spiderman-Video/Sophie.Rain.SpiderMan.Video.Twitter | Sophie-Rain-Sophie-Rains-Spiderman-Video | 2025-05-01T17:44:57Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-01T17:44:23Z |
<a href="https://tv2online.com/Leaked/?v=Sophie+Rain+Spiderman" rel="nofollow">►►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► 𝙁𝙪𝙡𝙡 𝙑𝙞𝙙𝙚𝙤️</a></p>
<a href="https://tv2online.com/Leaked/?v=Sophie+Rain+Spiderman" rel="nofollow">🔴►𝐂𝐋𝐈𝐂𝐊 𝐇𝐄𝐑𝐄 🌐==►► 𝐃𝐨𝐰𝐧𝐥𝐨𝐚𝐝 𝐍𝐨𝐰⬇️⬇️</a></p>
<p><a rel="nofollow" title="WATCH NOW" href="https://tv2online.com/Leaked/?v=Sophie+Rain+Spiderman"><img border="Sophie+Rain+Spidermanno" height="480" width="720" title="WATCH NOW" alt="WATCH NOW" src="https://i.ibb.co.com/xMMVF88/686577567.gif"></a></p>
L𝚎aked Video Sophie Rain Spiderman Video Tutorial Original Video Viral Video L𝚎aked on X Twitter Telegram
Sophie Rain Spiderman Video Tutorial Original Video video oficial twitter |
Yuhan123/ppo-cn-RM-reading-level-preschool-1-steps-10000-epoch-999-best-eval-score-0.512 | Yuhan123 | 2025-05-01T17:23:01Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-01T17:20:14Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
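As a stopgap, a minimal sketch based only on the repo metadata (a GPT-NeoX-architecture text-generation model); the prompt is illustrative:
```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="Yuhan123/ppo-cn-RM-reading-level-preschool-1-steps-10000-epoch-999-best-eval-score-0.512",
)
print(generator("Once upon a time", max_new_tokens=50)[0]["generated_text"])
```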
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
OumaymaELBIACH/Results_biomistral_smm4h_v3 | OumaymaELBIACH | 2025-05-01T16:43:26Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:BioMistral/BioMistral-7B",
"base_model:finetune:BioMistral/BioMistral-7B",
"endpoints_compatible",
"region:us"
] | null | 2025-05-01T16:43:20Z | ---
base_model: BioMistral/BioMistral-7B
library_name: transformers
model_name: Results_biomistral_smm4h_v3
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for Results_biomistral_smm4h_v3
This model is a fine-tuned version of [BioMistral/BioMistral-7B](https://huggingface.co/BioMistral/BioMistral-7B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="OumaymaELBIACH/Results_biomistral_smm4h_v3", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.17.0
- Transformers: 4.52.0.dev0
- Pytorch: 2.6.0+cu124
- Datasets: 3.5.1
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Yuhan123/ppo-cn-RM-reading-level-preschool-1-steps-10000-epoch-999-best-eval-score-0.700 | Yuhan123 | 2025-05-01T16:37:02Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-01T16:34:18Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
joboffer/e0cc06a1-f2da-46c5-b7ac-85e232a7ddb1 | joboffer | 2025-05-01T15:53:50Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-05-01T15:52:23Z | ---
library_name: peft
license: apache-2.0
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
tags:
- axolotl
- generated_from_trainer
model-index:
- name: e0cc06a1-f2da-46c5-b7ac-85e232a7ddb1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- b28d72a27f6c5851_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/b28d72a27f6c5851_train_data.json
type:
field_input: query_toks
field_instruction: question
field_output: query
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: joboffer/e0cc06a1-f2da-46c5-b7ac-85e232a7ddb1
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/b28d72a27f6c5851_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 10b4bba1-67d7-4ecf-8210-a48746d35dda
wandb_project: s56-33
wandb_run: your_name
wandb_runid: 10b4bba1-67d7-4ecf-8210-a48746d35dda
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# e0cc06a1-f2da-46c5-b7ac-85e232a7ddb1
This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0599
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0519 | 0.2328 | 200 | 0.0599 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Triangle104/Phi-4-mini-reasoning-Q8_0-GGUF | Triangle104 | 2025-05-01T11:48:45Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"nlp",
"math",
"code",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:microsoft/Phi-4-mini-reasoning",
"base_model:quantized:microsoft/Phi-4-mini-reasoning",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-05-01T11:46:48Z | ---
base_model: microsoft/Phi-4-mini-reasoning
language:
- en
library_name: transformers
license: mit
license_link: https://huggingface.co/microsoft/Phi-4-mini-instruct-reasoning/resolve/main/LICENSE
pipeline_tag: text-generation
tags:
- nlp
- math
- code
- llama-cpp
- gguf-my-repo
widget:
- messages:
- role: user
content: How to solve 3*x^2+4*x+5=1?
---
# Triangle104/Phi-4-mini-reasoning-Q8_0-GGUF
This model was converted to GGUF format from [`microsoft/Phi-4-mini-reasoning`](https://huggingface.co/microsoft/Phi-4-mini-reasoning) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/microsoft/Phi-4-mini-reasoning) for more details on the model.
---
Phi-4-mini-reasoning is a lightweight open model built upon synthetic data, with a focus on high-quality, reasoning-dense data, and further fine-tuned for more advanced math reasoning capabilities. The model belongs to the Phi-4 model family and supports a 128K-token context length.
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Phi-4-mini-reasoning-Q8_0-GGUF --hf-file phi-4-mini-reasoning-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Phi-4-mini-reasoning-Q8_0-GGUF --hf-file phi-4-mini-reasoning-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Phi-4-mini-reasoning-Q8_0-GGUF --hf-file phi-4-mini-reasoning-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Phi-4-mini-reasoning-Q8_0-GGUF --hf-file phi-4-mini-reasoning-q8_0.gguf -c 2048
```
|
deswaq/juh98 | deswaq | 2025-05-01T10:46:43Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-01T10:43:43Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Lucy-in-the-Sky/helium-1-2b-life-Q8_0-GGUF | Lucy-in-the-Sky | 2025-05-01T10:41:43Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"bg",
"cs",
"da",
"de",
"el",
"en",
"es",
"et",
"fi",
"fr",
"ga",
"hr",
"hu",
"it",
"lt",
"lv",
"mt",
"nl",
"pl",
"pt",
"ro",
"sk",
"sl",
"sv",
"base_model:kyutai/helium-1-2b-life",
"base_model:quantized:kyutai/helium-1-2b-life",
"license:cc-by-sa-4.0",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-01T10:41:32Z | ---
base_model: kyutai/helium-1-2b-life
language:
- bg
- cs
- da
- de
- el
- en
- es
- et
- fi
- fr
- ga
- hr
- hu
- it
- lt
- lv
- mt
- nl
- pl
- pt
- ro
- sk
- sl
- sv
library_name: transformers
license: cc-by-sa-4.0
pipeline_tag: text-generation
tags:
- llama-cpp
- gguf-my-repo
---
# Lucy-in-the-Sky/helium-1-2b-life-Q8_0-GGUF
This model was converted to GGUF format from [`kyutai/helium-1-2b-life`](https://huggingface.co/kyutai/helium-1-2b-life) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/kyutai/helium-1-2b-life) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Lucy-in-the-Sky/helium-1-2b-life-Q8_0-GGUF --hf-file helium-1-2b-life-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Lucy-in-the-Sky/helium-1-2b-life-Q8_0-GGUF --hf-file helium-1-2b-life-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Lucy-in-the-Sky/helium-1-2b-life-Q8_0-GGUF --hf-file helium-1-2b-life-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Lucy-in-the-Sky/helium-1-2b-life-Q8_0-GGUF --hf-file helium-1-2b-life-q8_0.gguf -c 2048
```
|
West1125/modeloTFG_7B_4.1_GGUF | West1125 | 2025-05-01T10:25:33Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"qwen2",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/DeepSeek-R1-Distill-Qwen-7B-unsloth-bnb-4bit",
"base_model:quantized:unsloth/DeepSeek-R1-Distill-Qwen-7B-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-01T10:12:19Z | ---
base_model: unsloth/DeepSeek-R1-Distill-Qwen-7B-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- gguf
license: apache-2.0
language:
- en
---
Trained with Def-Jos and all the rows.
# Uploaded model
- **Developed by:** West1125
- **License:** apache-2.0
- **Finetuned from model :** unsloth/DeepSeek-R1-Distill-Qwen-7B-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
khalednabawi11/MedScan-Report-Gen | khalednabawi11 | 2025-05-01T10:20:07Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"vision-encoder-decoder",
"image-text-to-text",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-05-01T10:19:22Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
yamatazen/SnowElf-12B-v2 | yamatazen | 2025-05-01T10:14:46Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"chatml",
"conversational",
"en",
"ja",
"arxiv:2306.01708",
"base_model:yamatazen/BlueLight-12B",
"base_model:merge:yamatazen/BlueLight-12B",
"base_model:yamatazen/HMS-Slerp-12B-v2",
"base_model:merge:yamatazen/HMS-Slerp-12B-v2",
"base_model:yamatazen/SnowElf-12B",
"base_model:merge:yamatazen/SnowElf-12B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-01T07:54:36Z | ---
base_model:
- yamatazen/HMS-Slerp-12B-v2
- yamatazen/SnowElf-12B
- yamatazen/BlueLight-12B
library_name: transformers
tags:
- mergekit
- merge
- chatml
language:
- en
- ja
---

# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [yamatazen/HMS-Slerp-12B-v2](https://huggingface.co/yamatazen/HMS-Slerp-12B-v2) as a base.
### Models Merged
The following models were included in the merge:
* [yamatazen/SnowElf-12B](https://huggingface.co/yamatazen/SnowElf-12B)
* [yamatazen/BlueLight-12B](https://huggingface.co/yamatazen/BlueLight-12B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model: yamatazen/HMS-Slerp-12B-v2
models:
- model: yamatazen/SnowElf-12B
parameters:
density: 0.6
weight: 0.6
- model: yamatazen/BlueLight-12B
parameters:
density: 0.5
weight: 0.3
merge_method: ties
dtype: bfloat16
parameters:
normalize: true
tokenizer:
source: union
``` |
joboffer/a1521ad3-be81-4668-9b79-6aa09b3d04a0 | joboffer | 2025-05-01T10:10:48Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:codellama/CodeLlama-7b-hf",
"base_model:adapter:codellama/CodeLlama-7b-hf",
"license:llama2",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-05-01T09:57:57Z | ---
library_name: peft
license: llama2
base_model: codellama/CodeLlama-7b-hf
tags:
- axolotl
- generated_from_trainer
model-index:
- name: a1521ad3-be81-4668-9b79-6aa09b3d04a0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: codellama/CodeLlama-7b-hf
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- a304f7b9d5e4a239_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/a304f7b9d5e4a239_train_data.json
type:
field_instruction: task
field_output: chosen
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: joboffer/a1521ad3-be81-4668-9b79-6aa09b3d04a0
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/a304f7b9d5e4a239_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: e1b36927-fa78-414d-a25b-1043f85c3145
wandb_project: s56-33
wandb_run: your_name
wandb_runid: e1b36927-fa78-414d-a25b-1043f85c3145
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# a1521ad3-be81-4668-9b79-6aa09b3d04a0
This model is a fine-tuned version of [codellama/CodeLlama-7b-hf](https://huggingface.co/codellama/CodeLlama-7b-hf) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9306
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (8-bit, via bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.8277 | 0.0077 | 200 | 0.9306 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
ZeroWw/Qwen3-8B-abliterated-GGUF | ZeroWw | 2025-05-01T09:55:39Z | 0 | 0 | null | [
"gguf",
"text-generation",
"en",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-05-01T09:39:55Z |
---
license: mit
language:
- en
pipeline_tag: text-generation
---
My own (ZeroWw) quantizations: output and embedding tensors are kept at f16, while all other tensors are quantized to q5_k or q6_k.
Result: both f16.q6 and f16.q5 are smaller than the standard q8_0 quantization, and they perform as well as pure f16.
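For reference, a mixed quantization of this shape can be produced with llama.cpp's `llama-quantize` tool. The sketch below is illustrative only: the per-tensor override flags assume a reasonably recent llama.cpp build (check `llama-quantize --help`), and the file names are placeholders.
```bash
# Sketch: quantize an f16 GGUF to q5_k while keeping the output and
# token-embedding tensors at f16, matching the scheme described above.
./llama-quantize \
  --output-tensor-type f16 \
  --token-embedding-type f16 \
  model.f16.gguf model.f16.q5_k.gguf q5_k
```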
Updated on: Thu May 01, 09:39:56
|
Yifei2vec/latent_memory_checkpoint-400 | Yifei2vec | 2025-05-01T09:25:24Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen2.5-VL-7B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-VL-7B-Instruct",
"region:us"
] | null | 2025-05-01T09:02:58Z | ---
base_model: Qwen/Qwen2.5-VL-7B-Instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2 |
BootesVoid/cm9we3jez0043tkjbzlateur0_cma1xlo2l004w125dywxirw13 | BootesVoid | 2025-05-01T09:21:47Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-05-01T09:21:46Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: VALENTINA
---
# Cm9We3Jez0043Tkjbzlateur0_Cma1Xlo2L004W125Dywxirw13
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `VALENTINA` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "VALENTINA",
"lora_weights": "https://huggingface.co/BootesVoid/cm9we3jez0043tkjbzlateur0_cma1xlo2l004w125dywxirw13/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cm9we3jez0043tkjbzlateur0_cma1xlo2l004w125dywxirw13', weight_name='lora.safetensors')
image = pipeline('VALENTINA').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cm9we3jez0043tkjbzlateur0_cma1xlo2l004w125dywxirw13/discussions) to add images that show off what you’ve made with this LoRA.
|
Paro-Aarti-Viral-Video-original-Link/Paro-Aarti-Viral-Video-original-Link | Paro-Aarti-Viral-Video-original-Link | 2025-05-01T06:08:38Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-01T06:05:23Z | Watch 🟢 ➤ ➤ ➤ <a href="https://myattitudesimpeccablen.blogspot.com/?m=0
"> 🌐 Click Here To link (Paro Aarti Viral Video original Link
)
🔴 ➤►DOWNLOAD👉👉🟢 ➤
|
jxjessieli/llama-3.1_single-multi-graph20k_5e-7 | jxjessieli | 2025-05-01T04:25:08Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-29T10:39:24Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
jxjessieli/mistral_longalign_1e-6 | jxjessieli | 2025-05-01T04:15:05Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-29T09:33:56Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
sbintuitions/modernbert-ja-70m | sbintuitions | 2025-05-01T03:42:41Z | 431 | 5 | transformers | [
"transformers",
"safetensors",
"modernbert",
"fill-mask",
"ja",
"en",
"arxiv:2412.13663",
"arxiv:2104.09864",
"arxiv:2404.10830",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2025-02-19T10:26:31Z | ---
language:
- ja
- en
license: mit
pipeline_tag: fill-mask
library_name: transformers
---
# ModernBERT-Ja-70M
This repository provides a Japanese ModernBERT model trained by [SB Intuitions](https://www.sbintuitions.co.jp/).
[ModernBERT](https://arxiv.org/abs/2412.13663) is a new variant of the BERT model that combines local and global attention, allowing it to handle long sequences while maintaining high computational efficiency.
It also incorporates modern architectural improvements, such as [RoPE](https://arxiv.org/abs/2104.09864).
Our ModernBERT-Ja-70M is trained on a high-quality corpus of Japanese and English text comprising **4.39T tokens**, featuring a vocabulary size of 102,400 and a sequence length of **8,192** tokens.
## How to Use
You can use our models directly with the transformers library v4.48.0 or higher:
```bash
pip install -U "transformers>=4.48.0"
```
Additionally, if your GPUs support Flash Attention 2, we recommend using our models with Flash Attention 2.
```bash
pip install flash-attn --no-build-isolation
```
### Example Usage
```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer, pipeline
model = AutoModelForMaskedLM.from_pretrained("sbintuitions/modernbert-ja-70m", torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained("sbintuitions/modernbert-ja-70m")
fill_mask = pipeline("fill-mask", model=model, tokenizer=tokenizer)
results = fill_mask("おはようございます、今日の天気は<mask>です。")
for result in results:
print(result)
# {'score': 0.40625, 'token': 16416, 'token_str': '晴れ', 'sequence': 'おはようございます、今日の天気は晴れです。'}
# {'score': 0.2041015625, 'token': 28933, 'token_str': '曇り', 'sequence': 'おはようございます、今日の天気は曇りです。'}
# {'score': 0.080078125, 'token': 2988, 'token_str': '雨', 'sequence': 'おはようございます、今日の天気は雨です。'}
# {'score': 0.07080078125, 'token': 52525, 'token_str': '快晴', 'sequence': 'おはようございます、今日の天気は快晴です。'}
# {'score': 0.037841796875, 'token': 92339, 'token_str': 'くもり', 'sequence': 'おはようございます、今日の天気はくもりです。'}
```
## Model Series
We provide ModernBERT-Ja in several model sizes. Below is a summary of each model.
|ID| #Param. | #Param.<br>w/o Emb.|Dim.|Inter. Dim.|#Layers|
|-|-|-|-|-|-|
|[sbintuitions/modernbert-ja-30m](https://huggingface.co/sbintuitions/modernbert-ja-30m)|37M|10M|256|1024|10|
|[**sbintuitions/modernbert-ja-70m**](https://huggingface.co/sbintuitions/modernbert-ja-70m)|70M|31M|384|1536|13|
|[sbintuitions/modernbert-ja-130m](https://huggingface.co/sbintuitions/modernbert-ja-130m)|132M|80M|512|2048|19|
|[sbintuitions/modernbert-ja-310m](https://huggingface.co/sbintuitions/modernbert-ja-310m)|315M|236M|768|3072|25|
For all models,
the vocabulary size is 102,400,
the head dimension is 64,
and the activation function is GELU.
The attention layout alternates one global attention layer with two sliding-window (local) attention layers (global–local–local).
The sliding-window attention uses a context window of 128 tokens, with global_rope_theta set to 160,000 and local_rope_theta set to 10,000.
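As an illustrative sanity check, these numbers can be read back from the published configuration with `transformers`; the attribute names below follow the ModernBERT config class and are worth verifying against your installed version.
```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("sbintuitions/modernbert-ja-70m")

# Dimensions from the table above: hidden size, intermediate size, layer count.
print(config.hidden_size, config.intermediate_size, config.num_hidden_layers)
# Global/local attention layout and sliding-window size.
print(config.global_attn_every_n_layers, config.local_attention)
# RoPE theta values for global and local attention.
print(config.global_rope_theta, config.local_rope_theta)
```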
## Model Description
We constructed the ModernBERT-Ja-70M model through a three-stage training process, which follows the original [ModernBERT](https://huggingface.co/answerdotai/ModernBERT-base).
First, we performed pre-training using a large corpus.
Next, we conducted two phases of context length extension.
1. **Pre-training**
- Training with **3.51T tokens**, including Japanese and English data extracted from web corpora.
- The sequence length is 1,024 with naive sequence packing.
- Masking rate is **30%** (with 80-10-10 rule).
2. **Context Extension (CE): Phase 1**
- Training with **430B tokens**, comprising high-quality Japanese and English data.
- The sequence length is **8,192** with [best-fit packing](https://arxiv.org/abs/2404.10830).
- Masking rate is **30%** (with 80-10-10 rule).
3. **Context Extension (CE): Phase 2**
- Training with **450B tokens**, including 150B tokens of high-quality Japanese data, over 3 epochs.
- The sequence length is **8,192** without sequence packing.
- Masking rate is **15%** (with 80-10-10 rule).
The key differences from the original ModernBERT are:
1. It is pre-trained on Japanese and English corpora, leading to a total of approximately 4.39T training tokens.
2. We observed that decreasing the mask rate in Context Extension Phase 2 from 30% to 15% improved the model's performance.
### Tokenization and Vocabulary
We use the tokenizer and vocabulary from [sbintuitions/sarashina2-13b](https://huggingface.co/collections/sbintuitions/sarashina-6680c6d6ab37b94428ca83fb).
Specifically, we employ a [SentencePiece](https://github.com/google/sentencepiece) tokenizer with a unigram language model and byte fallback.
We do not apply pre-tokenization using a Japanese tokenizer.
Therefore, users can directly input raw sentences into the tokenizer without any additional preprocessing.
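For example, a raw sentence can be fed straight to the tokenizer; the sentence below is only an illustration.
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("sbintuitions/modernbert-ja-70m")

# No Japanese morphological pre-tokenization (e.g., MeCab or Juman++) is needed.
encoded = tokenizer("今日はいい天気ですね。")
print(encoded.input_ids)
print(tokenizer.convert_ids_to_tokens(encoded.input_ids))
```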
### Intended Uses and Limitations
You can use this model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task.
Note that this model is not designed for text generation.
When you want to generate a text, please use a text generation model such as [Sarashina](https://huggingface.co/collections/sbintuitions/sarashina-6680c6d6ab37b94428ca83fb).
Since a unigram language model is used as the tokenizer, token boundaries often do not align with morpheme boundaries, resulting in poor performance on token classification tasks such as named entity recognition and span extraction.
## Evaluation
We evaluated our model on 12 datasets, including JGLUE, across various tasks:
- Knowledge-based tasks: [JCommonsenseQA (JComQA)](https://github.com/yahoojapan/JGLUE), [RCQA](https://www.cl.ecei.tohoku.ac.jp/rcqa/)
- Japanese linguistic acceptability classification: [JCoLA](https://github.com/osekilab/JCoLA)
- Natural Language Inference (NLI) tasks: [JNLI](https://github.com/yahoojapan/JGLUE), [JSICK](https://github.com/verypluming/JSICK), [JSNLI](https://nlp.ist.i.kyoto-u.ac.jp/?%E6%97%A5%E6%9C%AC%E8%AA%9ESNLI%28JSNLI%29%E3%83%87%E3%83%BC%E3%82%BF%E3%82%BB%E3%83%83%E3%83%88), [Kyoto University RTE (KU RTE)](https://nlp.ist.i.kyoto-u.ac.jp/index.php?Textual+Entailment+%E8%A9%95%E4%BE%A1%E3%83%87%E3%83%BC%E3%82%BF)
- Semantic Textual Similarity (STS) task: [JSTS](https://github.com/yahoojapan/JGLUE)
- Various classification tasks: [Livedoor news corpus (Livedoor)](https://www.rondhuit.com/download.html), [LLM-jp Toxicity (Toxicity)](https://llm-jp.nii.ac.jp/llm/2024/08/07/llm-jp-toxicity-dataset.html), [MARC-ja](https://github.com/yahoojapan/JGLUE), [WRIME v2 (WRIME)](https://github.com/ids-cv/wrime)
These tasks are short-sequence evaluation tasks, and we aligned our settings with those of existing models.
While the maximum sequence length varies across tasks, it does not exceed 512.
We set the sequence length and other experimental configurations per task, ensuring that the settings remain consistent across models.
For hyperparameters, we explored the following ranges:
- Learning rate: `{5e-6, 1e-5, 2e-5, 3e-5, 5e-5, 1e-4}`
- Number of epochs:
- Tasks with a large number of instances: `{1, 2}`
- Tasks with fewer instances: `{3, 5, 10}`
In the experiments, we loaded several Japanese models that are publicly available on HuggingFace using `AutoModel` and constructed classification models by appending a classification head consisting of a linear layer, a GELU activation function, and another linear layer.
This was done because HuggingFace's `AutoModelForSequenceClassification` comes with different implementations for each model, and using them directly would result in classification heads that differ from one model to another.
For the embeddings fed into the classification layer, we used the embedding of the special token at the beginning of the sentence.
That is, `[CLS]` in BERT and `<s>` in RoBERTa.
Note that our model does not perform the next sentence prediction (NSP) task during pretraining, so `<s>` is added at the beginning of the sentence, not `<cls>`.
Therefore, we used the `<s>` token for classification.
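A minimal sketch of that setup is shown below; the hidden size is read from the model config, while `num_labels` and the other details are placeholders rather than the exact training code.
```python
import torch.nn as nn
from transformers import AutoModel

class SequenceClassifier(nn.Module):
    """Backbone loaded with AutoModel + (Linear -> GELU -> Linear) head,
    applied to the embedding of the leading special token (<s>)."""

    def __init__(self, model_name: str, num_labels: int):
        super().__init__()
        self.backbone = AutoModel.from_pretrained(model_name)
        hidden = self.backbone.config.hidden_size
        self.head = nn.Sequential(
            nn.Linear(hidden, hidden),
            nn.GELU(),
            nn.Linear(hidden, num_labels),
        )

    def forward(self, input_ids, attention_mask):
        hidden_states = self.backbone(
            input_ids=input_ids, attention_mask=attention_mask
        ).last_hidden_state
        cls_embedding = hidden_states[:, 0]  # embedding of the first token (<s>)
        return self.head(cls_embedding)
```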
We conducted evaluations using 5-fold cross-validation.
That is, we trained the model on the `train` set and evaluated it on the `validation` set.
After determining the optimal hyperparameters (learning rate, epochs) based on the average performance on the `validation` sets, we report the average performance on the `test` sets using those hyperparameters.
For datasets without predefined splits, we first set aside 10% of the data as the test set and then performed 5-fold cross-validation on the remaining data.
For datasets such as some tasks in **JGLUE**, where only `train` and `validation` sets are publicly available,
we treated the `validation` set as the `test` set and performed 5-fold cross-validation on the remaining data.
For datasets with predefined `train`, `validation`, and `test` sets, we simply trained and evaluated the model five times with different random seeds and used the model with the best average evaluation score on the `validation` set to measure the final score on the `test` set.
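Schematically, the hyperparameter selection loop looks like the sketch below. `train_and_eval` is a stand-in for fine-tuning one model on one fold and returning its (validation, test) scores; the stub here just produces dummy numbers so the sketch runs.
```python
import random
from itertools import product
from statistics import mean

def train_and_eval(fold: int, lr: float, epochs: int) -> tuple[float, float]:
    """Placeholder: fine-tune on this fold's train split, then return
    (validation score, test score). Real training code goes here."""
    random.seed(hash((fold, lr, epochs)) % 2**32)
    val = random.uniform(80.0, 90.0)
    return val, val - random.uniform(0.0, 1.0)

learning_rates = [5e-6, 1e-5, 2e-5, 3e-5, 5e-5, 1e-4]
epoch_options = [1, 2]  # tasks with fewer instances used {3, 5, 10}

best = None
for lr, epochs in product(learning_rates, epoch_options):
    results = [train_and_eval(fold, lr, epochs) for fold in range(5)]
    val_avg = mean(v for v, _ in results)
    test_avg = mean(t for _, t in results)
    if best is None or val_avg > best[0]:
        best = (val_avg, test_avg, lr, epochs)

print(f"best lr={best[2]}, epochs={best[3]}; reported test avg: {best[1]:.2f}")
```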
### Evaluation Results
| Model | #Param. | #Param.<br>w/o Emb. | **Avg.** | [JComQA](https://github.com/yahoojapan/JGLUE)<br>(Acc.) | [RCQA](https://www.cl.ecei.tohoku.ac.jp/rcqa/)<br>(Acc.) | [JCoLA](https://github.com/osekilab/JCoLA)<br>(Acc.) | [JNLI](https://github.com/yahoojapan/JGLUE)<br>(Acc.) | [JSICK](https://github.com/verypluming/JSICK)<br>(Acc.) | [JSNLI](https://nlp.ist.i.kyoto-u.ac.jp/?%E6%97%A5%E6%9C%AC%E8%AA%9ESNLI%28JSNLI%29%E3%83%87%E3%83%BC%E3%82%BF%E3%82%BB%E3%83%83%E3%83%88)<br>(Acc.) | [KU RTE](https://nlp.ist.i.kyoto-u.ac.jp/index.php?Textual+Entailment+%E8%A9%95%E4%BE%A1%E3%83%87%E3%83%BC%E3%82%BF)<br>(Acc.) | [JSTS](https://github.com/yahoojapan/JGLUE)<br>(Spearman's ρ) | [Livedoor](https://www.rondhuit.com/download.html)<br>(Acc.) | [Toxicity](https://llm-jp.nii.ac.jp/llm/2024/08/07/llm-jp-toxicity-dataset.html)<br>(Acc.) | [MARC-ja](https://github.com/yahoojapan/JGLUE)<br>(Acc.) | [WRIME](https://github.com/ids-cv/wrime)<br>(Acc.) |
| ------ | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: |
| [ModernBERT-Ja-30M](https://huggingface.co/sbintuitions/modernbert-ja-30m) | 37M | 10M | 85.67 | 80.95 | 82.35 | 78.85 | 88.69 | 84.39 | 91.79 | 61.13 | 85.94 | 97.20 | 89.33 | 95.87 | 91.61 |
| [**ModernBERT-Ja-70M**](https://huggingface.co/sbintuitions/modernbert-ja-70m)<br>(this model) | 70M | 31M | <u>86.77</u> | 85.65 | 83.51 | 80.26 | 90.33 | 85.01 | 92.73 | 60.08 | 87.59 | 96.34 | 91.01 | 96.13 | 92.59 |
| [ModernBERT-Ja-130M](https://huggingface.co/sbintuitions/modernbert-ja-130m) | 132M | 80M | 88.95 | 91.01 | 85.28 | 84.18 | 92.03 | 86.61 | 94.01 | 65.56 | 89.20 | 97.42 | 91.57 | 96.48 | 93.99 |
| [ModernBERT-Ja-310M](https://huggingface.co/sbintuitions/modernbert-ja-310m) | 315M | 236M | 89.83 | 93.53 | 86.18 | 84.81 | 92.93 | 86.87 | 94.48 | 68.79 | 90.53 | 96.99 | 91.24 | 96.39 | 95.23 |
| | | | | | | | | | | | | | | | |
| [LINE DistillBERT](https://huggingface.co/line-corporation/line-distilbert-base-japanese)| 68M | 43M | 85.32 | 76.39 | 82.17 | 81.04 | 87.49 | 83.66 | 91.42 | 60.24 | 84.57 | 97.26 | 91.46 | 95.91 | 92.16 |
| [Tohoku BERT-base v3](https://huggingface.co/tohoku-nlp/bert-base-japanese-v3)| 111M | 86M | 86.74 | 82.82 | 83.65 | 81.50 | 89.68 | 84.96 | 92.32 | 60.56 | 87.31 | 96.91 | 93.15 | 96.13 | 91.91 |
| [LUKE-japanese-base-lite](https://huggingface.co/studio-ousia/luke-japanese-base-lite)| 133M | 107M | 87.15 | 82.95 | 83.53 | 82.39 | 90.36 | 85.26 | 92.78 | 60.89 | 86.68 | 97.12 | 93.48 | 96.30 | 94.05 |
| [Kyoto DeBERTa-v3](https://huggingface.co/ku-nlp/deberta-v3-base-japanese)| 160M | 86M | 88.31 | 87.44 | 84.90 | 84.35 | 91.91 | 86.22 | 93.41 | 63.31 | 88.51 | 97.10 | 92.58 | 96.32 | 93.64 |
| | | | | | | | | | | | | | | | |
| [KoichiYasuoka/modernbert-base-japanese-wikipedia](https://huggingface.co/KoichiYasuoka/modernbert-base-japanese-wikipedia)| 160M | 110M | 82.41 | 62.59 | 81.19 | 76.80 | 84.11 | 82.01 | 90.51 | 60.48 | 81.74 | 97.10 | 90.34 | 94.85 | 87.25 |
| [llm-jp/llm-jp-modernbert-base](https://huggingface.co/llm-jp/llm-jp-modernbert-base)| 187M | 110M | 86.75 | 84.29 | 83.99 | 78.00 | 90.28 | 83.76 | 93.40 | 60.32 | 87.71 | 96.64 | 92.13 | 96.33 | 94.09 |
| | | | | | | | | | | | | | | | |
| [Tohoku BERT-large char v2](https://huggingface.co/cl-tohoku/bert-large-japanese-char-v2)| 311M | 303M | 87.23 | 85.08 | 84.20 | 81.79 | 90.55 | 85.25 | 92.63 | 61.29 | 87.64 | 96.55 | 93.26 | 96.25 | 92.29 |
| [Tohoku BERT-large v2](https://huggingface.co/tohoku-nlp/bert-large-japanese-v2)| 337M | 303M | 88.36 | 86.93 | 84.81 | 82.89 | 92.05 | 85.33 | 93.32 | 64.60 | 89.11 | 97.64 | 94.38 | 96.46 | 92.77 |
| [Waseda RoBERTa-large (Seq. 512)](https://huggingface.co/nlp-waseda/roberta-large-japanese-seq512-with-auto-jumanpp)| 337M | 303M | 88.37 | 88.81 | 84.50 | 82.34 | 91.37 | 85.49 | 93.97 | 61.53 | 88.95 | 96.99 | 95.06 | 96.38 | 95.09 |
| [Waseda RoBERTa-large (Seq. 128)](https://huggingface.co/nlp-waseda/roberta-large-japanese-with-auto-jumanpp)| 337M | 303M | 88.36 | 89.35 | 83.63 | 84.26 | 91.53 | 85.30 | 94.05 | 62.82 | 88.67 | 95.82 | 93.60 | 96.05 | 95.23 |
| [LUKE-japanese-large-lite](https://huggingface.co/studio-ousia/luke-japanese-large-lite)| 414M | 379M | 88.94 | 88.01 | 84.84 | 84.34 | 92.37 | 86.14 | 94.32 | 64.68 | 89.30 | 97.53 | 93.71 | 96.49 | 95.59 |
| [RetrievaBERT](https://huggingface.co/retrieva-jp/bert-1.3b)| 1.30B | 1.15B | 86.79 | 80.55 | 84.35 | 80.67 | 89.86 | 85.24 | 93.46 | 60.48 | 87.30 | 97.04 | 92.70 | 96.18 | 93.61 |
| | | | | | | | | | | | | | | | |
| [hotchpotch/mMiniLMv2-L6-H384](https://huggingface.co/hotchpotch/mMiniLMv2-L6-H384)| 107M | 11M | 81.53 | 60.34 | 82.83 | 78.61 | 86.24 | 77.94 | 87.32 | 60.48 | 80.48 | 95.55 | 86.40 | 94.97 | 87.20 |
| [hotchpotch/mMiniLMv2-L12-H384](https://huggingface.co/hotchpotch/mMiniLMv2-L12-H384)| 118M | 21M | 82.59 | 62.70 | 83.77 | 78.61 | 87.69 | 79.58 | 87.65 | 60.48 | 81.55 | 95.88 | 90.00 | 94.89 | 88.28 |
| [mBERT](https://huggingface.co/google-bert/bert-base-multilingual-cased)| 178M | 86M | 83.48 | 66.08 | 82.76 | 77.32 | 88.15 | 84.20 | 91.25 | 60.56 | 84.18 | 97.01 | 89.21 | 95.05 | 85.99 |
| [XLM-RoBERTa-base](https://huggingface.co/FacebookAI/xlm-roberta-base)| 278M | 86M | 84.36 | 69.44 | 82.86 | 78.71 | 88.14 | 83.17 | 91.27 | 60.48 | 83.34 | 95.93 | 91.91 | 95.82 | 91.20 |
| [XLM-RoBERTa-large](https://huggingface.co/FacebookAI/xlm-roberta-large)| 560M | 303M | 86.95 | 80.07 | 84.47 | 80.42 | 92.16 | 84.74 | 93.87 | 60.48 | 88.03 | 97.01 | 93.37 | 96.03 | 92.72 |
The evaluation results are shown in the table.
`#Param.` represents the number of parameters in both the input embedding layer and the Transformer layers, while `#Param. w/o Emb.` indicates the number of parameters in the Transformer layers only.
Despite being a long-context model capable of processing sequences of up to 8,192 tokens, our ModernBERT-Ja-70M also exhibited strong performance in short-sequence evaluations.
## Ethical Considerations
ModernBERT-Ja-70M may produce representations that reflect biases.
When you use this model for masked language modeling, it may generate biased or harmful expressions.
## License
[MIT License](https://huggingface.co/sbintuitions/modernbert-ja-70m/blob/main/LICENSE)
## Citation
```bibtex
@misc{
modernbert-ja,
author = {Tsukagoshi, Hayato and Li, Shengzhe and Fukuchi, Akihiko and Shibata, Tomohide},
title = {{ModernBERT-Ja}},
howpublished = {\url{https://huggingface.co/collections/sbintuitions/modernbert-ja-67b68fe891132877cf67aa0a}},
url = {https://huggingface.co/collections/sbintuitions/modernbert-ja-67b68fe891132877cf67aa0a},
year = {2025},
}
``` |
CaMeow/CaMeow | CaMeow | 2025-05-01T03:10:48Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-05-01T03:10:48Z | ---
license: apache-2.0
---
|
casque/ILXL_Realism_Slider_V.1 | casque | 2025-05-01T02:46:21Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2025-05-01T02:45:54Z | ---
license: creativeml-openrail-m
---
|
shubhamprshr/Llama-3.2-3B-Instruct_blocksworld1246_sgrpo_gaussian_0_25_0_75_True_300 | shubhamprshr | 2025-05-01T02:10:30Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"trl",
"grpo",
"conversational",
"dataset:blocksworld-dataset",
"arxiv:2402.03300",
"base_model:meta-llama/Llama-3.2-3B-Instruct",
"base_model:finetune:meta-llama/Llama-3.2-3B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-30T14:39:30Z | ---
base_model: meta-llama/Llama-3.2-3B-Instruct
datasets: blocksworld-dataset
library_name: transformers
model_name: Llama-3.2-3B-Instruct_blocksworld1246_sgrpo_gaussian_0_25_0_75_True_300
tags:
- generated_from_trainer
- trl
- grpo
licence: license
---
# Model Card for Llama-3.2-3B-Instruct_blocksworld1246_sgrpo_gaussian_0_25_0_75_True_300
This model is a fine-tuned version of [meta-llama/Llama-3.2-3B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct) on the [blocksworld-dataset](https://huggingface.co/datasets/blocksworld-dataset) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="shubhamprshr/Llama-3.2-3B-Instruct_blocksworld1246_sgrpo_gaussian_0_25_0_75_True_300", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/shubhamprshr27-tamu/BW2/runs/7gsvmd8t)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.14.0
- Transformers: 4.48.1
- Pytorch: 2.5.1
- Datasets: 3.1.0
- Tokenizers: 0.21.0
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
cvoffer/c7d92518-3310-4832-92fd-b0857ee15119 | cvoffer | 2025-05-01T01:28:09Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-7B",
"base_model:adapter:Qwen/Qwen2.5-7B",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-05-01T00:28:09Z | ---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2.5-7B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: c7d92518-3310-4832-92fd-b0857ee15119
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: Qwen/Qwen2.5-7B
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- d09a68d69c1a695b_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/d09a68d69c1a695b_train_data.json
type:
field_instruction: premise
field_output: hypothesis
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.55
group_by_length: false
hub_model_id: cvoffer/c7d92518-3310-4832-92fd-b0857ee15119
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 1.0e-06
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 150
micro_batch_size: 10
mixed_precision: bf16
mlflow_experiment_name: /tmp/d09a68d69c1a695b_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 2048
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 5390c276-e53e-4daf-a205-37cd7fd64bf9
wandb_project: s56-28
wandb_run: your_name
wandb_runid: 5390c276-e53e-4daf-a205-37cd7fd64bf9
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# c7d92518-3310-4832-92fd-b0857ee15119
This model is a fine-tuned version of [Qwen/Qwen2.5-7B](https://huggingface.co/Qwen/Qwen2.5-7B) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 3.9106
## Model description
More information needed
## Intended uses & limitations
More information needed
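Until the author fills this in, here is a minimal loading sketch for the adapter with PEFT; the base-model and adapter repository IDs are taken from this card, and the prompt is illustrative only:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Base model and adapter repository IDs come from the card metadata above
base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-7B", device_map="auto")
model = PeftModel.from_pretrained(base, "cvoffer/c7d92518-3310-4832-92fd-b0857ee15119")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-7B")

# The training data used premise/hypothesis fields, so a premise-style prompt is shown
inputs = tokenizer("A soccer game with multiple males playing.", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```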
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 150
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 4.027 | 0.0094 | 150 | 3.9106 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Temmie227/Modelo_Hiki | Temmie227 | 2025-05-01T01:05:41Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-01T01:02:43Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Raff319/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-graceful_dappled_owl | Raff319 | 2025-04-30T22:58:06Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am graceful dappled owl",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-04-30T22:58:01Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-graceful_dappled_owl
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am graceful dappled owl
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-graceful_dappled_owl
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Raff319/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-graceful_dappled_owl", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.6.0
- Datasets: 3.5.1
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
soloai1/itemv3 | soloai1 | 2025-04-30T22:52:55Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-04-30T22:31:10Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: DND
---
# Itemv3
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `DND` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "DND",
"lora_weights": "https://huggingface.co/soloai1/itemv3/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('soloai1/itemv3', weight_name='lora.safetensors')
image = pipeline('DND').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 1500
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/soloai1/itemv3/discussions) to add images that show off what you’ve made with this LoRA.
|
Jasarenyarko/PPO-LunarLander-v2 | Jasarenyarko | 2025-04-30T22:20:11Z | 22 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2025-04-27T20:18:54Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 287.55 +/- 19.59
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
Alphaxdude/ML | Alphaxdude | 2025-04-30T21:46:25Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-04-30T21:46:25Z | ---
license: apache-2.0
---
|
emaillegion/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-poisonous_grassy_anaconda | emaillegion | 2025-04-30T21:32:48Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am poisonous grassy anaconda",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-30T12:51:30Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-poisonous_grassy_anaconda
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am poisonous grassy anaconda
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-poisonous_grassy_anaconda
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="emaillegion/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-poisonous_grassy_anaconda", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.5.1
- Datasets: 3.5.1
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
rbelanec/train_wsc_1745950301 | rbelanec | 2025-04-30T20:40:55Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama-factory",
"prompt-tuning",
"generated_from_trainer",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"region:us"
] | null | 2025-04-30T18:16:11Z | ---
library_name: peft
license: llama3
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
- llama-factory
- prompt-tuning
- generated_from_trainer
model-index:
- name: train_wsc_1745950301
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train_wsc_1745950301
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the wsc dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3479
- Num Input Tokens Seen: 14002704
## Model description
More information needed
## Intended uses & limitations
More information needed
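In the meantime, a minimal loading sketch with PEFT (IDs from this card; access to the gated Llama 3 base weights is assumed):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Meta-Llama-3-8B-Instruct"  # gated; an authenticated HF environment is assumed
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, "rbelanec/train_wsc_1745950301")
tokenizer = AutoTokenizer.from_pretrained(base_id)
```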
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.3
- train_batch_size: 2
- eval_batch_size: 2
- seed: 123
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: AdamW (torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- training_steps: 40000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen |
|:-------------:|:--------:|:-----:|:---------------:|:-----------------:|
| 0.3481 | 1.6024 | 200 | 0.3937 | 70144 |
| 0.3618 | 3.2008 | 400 | 0.3625 | 140304 |
| 0.3966 | 4.8032 | 600 | 0.3609 | 210240 |
| 0.3759 | 6.4016 | 800 | 0.4168 | 279952 |
| 0.5142 | 8.0 | 1000 | 0.3932 | 350224 |
| 0.3172 | 9.6024 | 1200 | 0.4967 | 420256 |
| 0.3539 | 11.2008 | 1400 | 0.6324 | 490496 |
| 0.3909 | 12.8032 | 1600 | 0.3521 | 560224 |
| 0.3751 | 14.4016 | 1800 | 0.3479 | 630560 |
| 0.357 | 16.0 | 2000 | 0.3643 | 699648 |
| 0.3893 | 17.6024 | 2200 | 0.3549 | 769232 |
| 0.3175 | 19.2008 | 2400 | 0.4833 | 839344 |
| 0.3652 | 20.8032 | 2600 | 0.3520 | 909744 |
| 0.365 | 22.4016 | 2800 | 0.3521 | 979312 |
| 0.3945 | 24.0 | 3000 | 0.3519 | 1049184 |
| 0.3726 | 25.6024 | 3200 | 0.3594 | 1119552 |
| 0.3951 | 27.2008 | 3400 | 0.3498 | 1189008 |
| 0.3497 | 28.8032 | 3600 | 0.3815 | 1259168 |
| 0.3087 | 30.4016 | 3800 | 0.3790 | 1329056 |
| 0.3478 | 32.0 | 4000 | 0.3681 | 1399280 |
| 0.3321 | 33.6024 | 4200 | 0.4623 | 1469920 |
| 0.3297 | 35.2008 | 4400 | 0.3859 | 1539184 |
| 0.3218 | 36.8032 | 4600 | 0.4085 | 1609648 |
| 0.2996 | 38.4016 | 4800 | 0.4424 | 1679792 |
| 0.4013 | 40.0 | 5000 | 0.3618 | 1749008 |
| 0.368 | 41.6024 | 5200 | 0.3772 | 1818832 |
| 0.3804 | 43.2008 | 5400 | 0.3532 | 1889136 |
| 0.3447 | 44.8032 | 5600 | 0.3504 | 1959008 |
| 0.4024 | 46.4016 | 5800 | 0.3740 | 2028320 |
| 0.3575 | 48.0 | 6000 | 0.3546 | 2098928 |
| 0.3726 | 49.6024 | 6200 | 0.3559 | 2168688 |
| 0.3459 | 51.2008 | 6400 | 0.3536 | 2238752 |
| 0.3578 | 52.8032 | 6600 | 0.3571 | 2308816 |
| 0.3395 | 54.4016 | 6800 | 0.3686 | 2379328 |
| 0.3692 | 56.0 | 7000 | 0.3688 | 2448704 |
| 0.5154 | 57.6024 | 7200 | 0.3540 | 2519008 |
| 0.3707 | 59.2008 | 7400 | 0.3510 | 2588608 |
| 0.3494 | 60.8032 | 7600 | 0.3638 | 2659072 |
| 0.3521 | 62.4016 | 7800 | 0.3524 | 2728480 |
| 0.4449 | 64.0 | 8000 | 0.3593 | 2798720 |
| 0.3794 | 65.6024 | 8200 | 0.3858 | 2868672 |
| 0.3643 | 67.2008 | 8400 | 0.3597 | 2939312 |
| 0.3434 | 68.8032 | 8600 | 0.3513 | 3009568 |
| 0.3494 | 70.4016 | 8800 | 0.3696 | 3079584 |
| 0.3478 | 72.0 | 9000 | 0.3524 | 3149680 |
| 0.3234 | 73.6024 | 9200 | 0.4030 | 3219680 |
| 0.3491 | 75.2008 | 9400 | 0.3532 | 3289472 |
| 0.3474 | 76.8032 | 9600 | 0.3538 | 3359520 |
| 0.3429 | 78.4016 | 9800 | 0.3582 | 3429568 |
| 0.3524 | 80.0 | 10000 | 0.3500 | 3499648 |
| 0.3272 | 81.6024 | 10200 | 0.3656 | 3569504 |
| 0.3907 | 83.2008 | 10400 | 0.3989 | 3639920 |
| 0.2551 | 84.8032 | 10600 | 0.4358 | 3709520 |
| 0.372 | 86.4016 | 10800 | 0.3547 | 3779456 |
| 0.3645 | 88.0 | 11000 | 0.3545 | 3849744 |
| 0.384 | 89.6024 | 11200 | 0.3532 | 3919984 |
| 0.3421 | 91.2008 | 11400 | 0.3520 | 3989872 |
| 0.3697 | 92.8032 | 11600 | 0.3584 | 4059568 |
| 0.3618 | 94.4016 | 11800 | 0.3497 | 4129664 |
| 0.3462 | 96.0 | 12000 | 0.3715 | 4199936 |
| 0.3189 | 97.6024 | 12200 | 0.3875 | 4269952 |
| 0.3483 | 99.2008 | 12400 | 0.3619 | 4339040 |
| 0.3477 | 100.8032 | 12600 | 0.3564 | 4409680 |
| 0.3459 | 102.4016 | 12800 | 0.3587 | 4479120 |
| 0.3518 | 104.0 | 13000 | 0.4024 | 4548896 |
| 0.3558 | 105.6024 | 13200 | 0.3599 | 4619216 |
| 0.3899 | 107.2008 | 13400 | 0.3608 | 4689424 |
| 0.375 | 108.8032 | 13600 | 0.3554 | 4759232 |
| 0.3441 | 110.4016 | 13800 | 0.3636 | 4829120 |
| 0.3495 | 112.0 | 14000 | 0.3556 | 4899024 |
| 0.3535 | 113.6024 | 14200 | 0.3591 | 4968944 |
| 0.3393 | 115.2008 | 14400 | 0.3589 | 5039152 |
| 0.3857 | 116.8032 | 14600 | 0.3566 | 5109312 |
| 0.345 | 118.4016 | 14800 | 0.3546 | 5179296 |
| 0.351 | 120.0 | 15000 | 0.3538 | 5249504 |
| 0.3259 | 121.6024 | 15200 | 0.3612 | 5319424 |
| 0.3209 | 123.2008 | 15400 | 0.3808 | 5389488 |
| 0.3565 | 124.8032 | 15600 | 0.3535 | 5459776 |
| 0.3271 | 126.4016 | 15800 | 0.3515 | 5529760 |
| 0.3092 | 128.0 | 16000 | 0.3808 | 5599968 |
| 0.3434 | 129.6024 | 16200 | 0.3500 | 5671056 |
| 0.3532 | 131.2008 | 16400 | 0.3604 | 5740000 |
| 0.3681 | 132.8032 | 16600 | 0.3572 | 5810288 |
| 0.353 | 134.4016 | 16800 | 0.3594 | 5880176 |
| 0.3471 | 136.0 | 17000 | 0.3579 | 5950048 |
| 0.3562 | 137.6024 | 17200 | 0.3644 | 6020016 |
| 0.3892 | 139.2008 | 17400 | 0.3583 | 6090672 |
| 0.3545 | 140.8032 | 17600 | 0.3681 | 6160288 |
| 0.4053 | 142.4016 | 17800 | 0.3721 | 6230656 |
| 0.3224 | 144.0 | 18000 | 0.3567 | 6299968 |
| 0.3377 | 145.6024 | 18200 | 0.3646 | 6370512 |
| 0.3491 | 147.2008 | 18400 | 0.3558 | 6440784 |
| 0.3411 | 148.8032 | 18600 | 0.3606 | 6510560 |
| 0.3344 | 150.4016 | 18800 | 0.3552 | 6579872 |
| 0.3227 | 152.0 | 19000 | 0.3651 | 6650112 |
| 0.3469 | 153.6024 | 19200 | 0.3702 | 6720368 |
| 0.3872 | 155.2008 | 19400 | 0.3737 | 6790512 |
| 0.3488 | 156.8032 | 19600 | 0.3525 | 6860880 |
| 0.3635 | 158.4016 | 19800 | 0.3770 | 6930576 |
| 0.34 | 160.0 | 20000 | 0.3582 | 7000640 |
| 0.3565 | 161.6024 | 20200 | 0.3523 | 7070272 |
| 0.3411 | 163.2008 | 20400 | 0.3561 | 7140336 |
| 0.3373 | 164.8032 | 20600 | 0.3497 | 7210816 |
| 0.3482 | 166.4016 | 20800 | 0.3670 | 7281392 |
| 0.339 | 168.0 | 21000 | 0.3549 | 7350960 |
| 0.3145 | 169.6024 | 21200 | 0.3669 | 7421312 |
| 0.3461 | 171.2008 | 21400 | 0.3559 | 7491200 |
| 0.3472 | 172.8032 | 21600 | 0.3576 | 7560976 |
| 0.3532 | 174.4016 | 21800 | 0.3503 | 7631024 |
| 0.3441 | 176.0 | 22000 | 0.3551 | 7700784 |
| 0.3545 | 177.6024 | 22200 | 0.3680 | 7770752 |
| 0.4 | 179.2008 | 22400 | 0.3657 | 7840832 |
| 0.3275 | 180.8032 | 22600 | 0.3675 | 7911072 |
| 0.3382 | 182.4016 | 22800 | 0.3553 | 7981312 |
| 0.3682 | 184.0 | 23000 | 0.3611 | 8050976 |
| 0.2797 | 185.6024 | 23200 | 0.3805 | 8121312 |
| 0.3475 | 187.2008 | 23400 | 0.3546 | 8191520 |
| 0.3506 | 188.8032 | 23600 | 0.3532 | 8261456 |
| 0.3341 | 190.4016 | 23800 | 0.3702 | 8331664 |
| 0.328 | 192.0 | 24000 | 0.3560 | 8401328 |
| 0.3563 | 193.6024 | 24200 | 0.3561 | 8471232 |
| 0.3585 | 195.2008 | 24400 | 0.3580 | 8540976 |
| 0.3998 | 196.8032 | 24600 | 0.3776 | 8611296 |
| 0.3351 | 198.4016 | 24800 | 0.3581 | 8681264 |
| 0.3714 | 200.0 | 25000 | 0.3618 | 8751280 |
| 0.35 | 201.6024 | 25200 | 0.3553 | 8822192 |
| 0.3299 | 203.2008 | 25400 | 0.3635 | 8891648 |
| 0.3368 | 204.8032 | 25600 | 0.3604 | 8961760 |
| 0.3453 | 206.4016 | 25800 | 0.3571 | 9031568 |
| 0.3574 | 208.0 | 26000 | 0.3588 | 9101088 |
| 0.3359 | 209.6024 | 26200 | 0.3531 | 9171168 |
| 0.3649 | 211.2008 | 26400 | 0.3597 | 9240752 |
| 0.3464 | 212.8032 | 26600 | 0.3524 | 9310960 |
| 0.3582 | 214.4016 | 26800 | 0.3685 | 9380560 |
| 0.3518 | 216.0 | 27000 | 0.3577 | 9450912 |
| 0.3405 | 217.6024 | 27200 | 0.3542 | 9520832 |
| 0.3337 | 219.2008 | 27400 | 0.3536 | 9590800 |
| 0.3373 | 220.8032 | 27600 | 0.3539 | 9661456 |
| 0.3101 | 222.4016 | 27800 | 0.3652 | 9731376 |
| 0.3749 | 224.0 | 28000 | 0.3654 | 9801040 |
| 0.3415 | 225.6024 | 28200 | 0.3558 | 9870784 |
| 0.3449 | 227.2008 | 28400 | 0.3590 | 9941408 |
| 0.328 | 228.8032 | 28600 | 0.3614 | 10011264 |
| 0.3322 | 230.4016 | 28800 | 0.3608 | 10080704 |
| 0.3209 | 232.0 | 29000 | 0.3612 | 10150880 |
| 0.3315 | 233.6024 | 29200 | 0.3677 | 10221616 |
| 0.3314 | 235.2008 | 29400 | 0.3679 | 10291664 |
| 0.3386 | 236.8032 | 29600 | 0.3543 | 10361728 |
| 0.347 | 238.4016 | 29800 | 0.3540 | 10431088 |
| 0.3694 | 240.0 | 30000 | 0.3702 | 10501088 |
| 0.3238 | 241.6024 | 30200 | 0.3639 | 10571488 |
| 0.3311 | 243.2008 | 30400 | 0.3622 | 10640848 |
| 0.3445 | 244.8032 | 30600 | 0.3631 | 10711136 |
| 0.3558 | 246.4016 | 30800 | 0.3615 | 10781136 |
| 0.3495 | 248.0 | 31000 | 0.3610 | 10851312 |
| 0.361 | 249.6024 | 31200 | 0.3544 | 10921664 |
| 0.3543 | 251.2008 | 31400 | 0.3628 | 10991936 |
| 0.351 | 252.8032 | 31600 | 0.3619 | 11061680 |
| 0.3288 | 254.4016 | 31800 | 0.3700 | 11131872 |
| 0.3503 | 256.0 | 32000 | 0.3581 | 11201520 |
| 0.3545 | 257.6024 | 32200 | 0.3688 | 11271952 |
| 0.3452 | 259.2008 | 32400 | 0.3665 | 11340976 |
| 0.3451 | 260.8032 | 32600 | 0.3572 | 11411056 |
| 0.3492 | 262.4016 | 32800 | 0.3594 | 11481152 |
| 0.37 | 264.0 | 33000 | 0.3602 | 11550752 |
| 0.3444 | 265.6024 | 33200 | 0.3605 | 11620752 |
| 0.3474 | 267.2008 | 33400 | 0.3590 | 11690464 |
| 0.3421 | 268.8032 | 33600 | 0.3647 | 11761360 |
| 0.3466 | 270.4016 | 33800 | 0.3618 | 11831152 |
| 0.3418 | 272.0 | 34000 | 0.3609 | 11900768 |
| 0.3394 | 273.6024 | 34200 | 0.3612 | 11971616 |
| 0.3319 | 275.2008 | 34400 | 0.3632 | 12041104 |
| 0.3679 | 276.8032 | 34600 | 0.3596 | 12111712 |
| 0.3522 | 278.4016 | 34800 | 0.3598 | 12181328 |
| 0.3434 | 280.0 | 35000 | 0.3597 | 12251088 |
| 0.3281 | 281.6024 | 35200 | 0.3560 | 12321616 |
| 0.3377 | 283.2008 | 35400 | 0.3551 | 12391184 |
| 0.3346 | 284.8032 | 35600 | 0.3605 | 12461088 |
| 0.3374 | 286.4016 | 35800 | 0.3595 | 12531520 |
| 0.3407 | 288.0 | 36000 | 0.3593 | 12600944 |
| 0.362 | 289.6024 | 36200 | 0.3630 | 12670544 |
| 0.3365 | 291.2008 | 36400 | 0.3603 | 12741216 |
| 0.3319 | 292.8032 | 36600 | 0.3668 | 12811584 |
| 0.3266 | 294.4016 | 36800 | 0.3617 | 12881104 |
| 0.3582 | 296.0 | 37000 | 0.3609 | 12951648 |
| 0.3432 | 297.6024 | 37200 | 0.3629 | 13021600 |
| 0.342 | 299.2008 | 37400 | 0.3624 | 13091888 |
| 0.3658 | 300.8032 | 37600 | 0.3633 | 13162128 |
| 0.3142 | 302.4016 | 37800 | 0.3627 | 13231552 |
| 0.331 | 304.0 | 38000 | 0.3613 | 13302080 |
| 0.3507 | 305.6024 | 38200 | 0.3595 | 13371808 |
| 0.3403 | 307.2008 | 38400 | 0.3596 | 13441936 |
| 0.3275 | 308.8032 | 38600 | 0.3583 | 13512304 |
| 0.3553 | 310.4016 | 38800 | 0.3591 | 13582192 |
| 0.3348 | 312.0 | 39000 | 0.3615 | 13652384 |
| 0.3715 | 313.6024 | 39200 | 0.3620 | 13722224 |
| 0.3552 | 315.2008 | 39400 | 0.3578 | 13791728 |
| 0.3445 | 316.8032 | 39600 | 0.3609 | 13862560 |
| 0.3485 | 318.4016 | 39800 | 0.3606 | 13933264 |
| 0.3448 | 320.0 | 40000 | 0.3591 | 14002704 |
### Framework versions
- PEFT 0.15.2.dev0
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1 |
guokee/HIYA2025 | guokee | 2025-04-30T20:01:30Z | 5 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-04-22T03:09:55Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: HIYA
---
# Hiya2025
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `HIYA` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "HIYA",
"lora_weights": "https://huggingface.co/guokee/HIYA2025/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('guokee/HIYA2025', weight_name='lora.safetensors')
image = pipeline('HIYA').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 32
## Contribute your own examples
You can use the [community tab](https://huggingface.co/guokee/HIYA2025/discussions) to add images that show off what you’ve made with this LoRA.
|
tinybiggames/Qwen3-4B-Q8_0-GGUF | tinybiggames | 2025-04-30T19:55:20Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"base_model:Qwen/Qwen3-4B",
"base_model:quantized:Qwen/Qwen3-4B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-04-30T19:55:01Z | ---
base_model: Qwen/Qwen3-4B
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-4B/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- llama-cpp
- gguf-my-repo
---
# tinybiggames/Qwen3-4B-Q8_0-GGUF
This model was converted to GGUF format from [`Qwen/Qwen3-4B`](https://huggingface.co/Qwen/Qwen3-4B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Qwen/Qwen3-4B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo tinybiggames/Qwen3-4B-Q8_0-GGUF --hf-file qwen3-4b-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo tinybiggames/Qwen3-4B-Q8_0-GGUF --hf-file qwen3-4b-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo tinybiggames/Qwen3-4B-Q8_0-GGUF --hf-file qwen3-4b-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo tinybiggames/Qwen3-4B-Q8_0-GGUF --hf-file qwen3-4b-q8_0.gguf -c 2048
```
|
Yuhan123/ppo-cn-RM-reading-level-12th-1-steps-10000-epoch-999-best-eval-score-0.175 | Yuhan123 | 2025-04-30T19:54:55Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-30T19:52:14Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
NEW-EXCLUSIVE-TRENDING-CLIP-18-XX/FULL.VIDEO.LINK.Jobz.Hunting.Sajal.Malik.Viral.Video.Leaks.official | NEW-EXCLUSIVE-TRENDING-CLIP-18-XX | 2025-04-30T19:54:52Z | 0 | 0 | null | [
"region:us"
] | null | 2025-04-30T19:54:23Z |
L𝚎aked V𝚒deo Actor jobz hunting sajal malik V𝚒ral V𝚒deo Original V𝚒deo L𝚒nk On Social Media Telegram X Trending Tiktok (18+)
L𝚎aked V𝚒deo Actor jobz hunting sajal malik V𝚒ral V𝚒deo Original V𝚒deo L𝚒nk On Social Media X Trending Tiktok (18+)
L𝚎aked V𝚒deo Actor jobz hunting sajal malik Original V𝚒deo V𝚒ral V𝚒deo L𝚎aked on X Twitter
Actor jobz hunting sajal malik Original V𝚒deo V𝚒deo oficial twitter
L𝚎aked V𝚒deo Actor jobz hunting sajal malik Original V𝚒deo V𝚒ral V𝚒deo L𝚎aked on X Twitter..
L𝚎aked V𝚒ral l𝚒nk 2025 L𝚎aked V𝚒deo |
MottaCC/psych-gemma-3-1B-v2 | MottaCC | 2025-04-30T19:21:05Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-30T19:17:22Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
fbaldassarri/internlm_internlm3-8b-instruct-autogptq-int4-gs128-asym | fbaldassarri | 2025-04-30T19:04:08Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"internlm3",
"text-generation",
"internlm",
"autoround",
"auto-round",
"intel-autoround",
"intel",
"woq",
"gptq",
"pytorch",
"internlm3-8b",
"conversational",
"custom_code",
"en",
"es",
"fr",
"de",
"pt",
"ja",
"it",
"zh",
"ko",
"ar",
"cs",
"nl",
"base_model:internlm/internlm3-8b-instruct",
"base_model:quantized:internlm/internlm3-8b-instruct",
"license:apache-2.0",
"autotrain_compatible",
"4-bit",
"region:us"
] | text-generation | 2025-04-30T19:01:38Z | ---
language:
- en
- es
- fr
- de
- pt
- ja
- it
- zh
- ko
- ar
- cs
- nl
pipeline_tag: text-generation
license: apache-2.0
library_name: transformers
tags:
- internlm
- autoround
- auto-round
- intel-autoround
- intel
- woq
- gptq
- pytorch
- internlm3
- internlm3-8b
model_name: Internlm 3 8b instruct
base_model:
- internlm/internlm3-8b-instruct
inference: false
model_creator: internlm
prompt_template: '{prompt}'
quantized_by: fbaldassarri
---
## Model Information
Quantized version of [internlm/internlm3-8b-instruct](https://huggingface.co/internlm/internlm3-8b-instruct) using torch.float32 for quantization tuning.
- 4 bits (INT4)
- group size = 128
- Asymmetrical Quantization
- Method WoQ: GPTQ (AutoGPTQ algorithm)
Quantization framework: [Intel AutoRound](https://github.com/intel/auto-round) v0.4.7
Note: this INT4 version of internlm3-8b-instruct has been quantized to run inference on CPU.
## Replication Recipe
### Step 1 Install Requirements
I suggest installing the requirements into a dedicated Python virtualenv or a conda environment.
```
wget https://github.com/intel/auto-round/archive/refs/tags/v0.4.7.tar.gz
tar -xvzf v0.4.7.tar.gz
cd auto-round-0.4.7
pip install -r requirements-cpu.txt --upgrade
```
### Step 2 Build Intel AutoRound wheel from sources
```
pip install -vvv --no-build-isolation -e .[cpu]
```
### Step 3 Script for Quantization
```
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "internlm/internlm3-8b-instruct"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
from auto_round import AutoRound
bits, group_size, sym, device, amp = 4, 128, False, 'cpu', False
autoround = AutoRound(model, tokenizer, nsamples=128, iters=200, seqlen=512, batch_size=4, bits=bits, group_size=group_size, sym=sym, device=device, amp=amp)
autoround.quantize()
output_dir = "./AutoRound/internlm_internlm3-8b-instruct-autogptq-int4-gs128-asym"
autoround.save_quantized(output_dir, format='auto_gptq', inplace=True)
```
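### Step 4 Inference (sketch)

A hedged example of loading the exported checkpoint; it assumes a `transformers` stack with GPTQ kernels available for CPU, and uses `trust_remote_code=True` because internlm3 ships custom modeling code.

```
from transformers import AutoModelForCausalLM, AutoTokenizer

quantized_id = "fbaldassarri/internlm_internlm3-8b-instruct-autogptq-int4-gs128-asym"
tokenizer = AutoTokenizer.from_pretrained(quantized_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(quantized_id, device_map="cpu", trust_remote_code=True)

inputs = tokenizer("Hello, who are you?", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```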
## License
[Apache 2.0 License](https://choosealicense.com/licenses/apache-2.0/)
## Disclaimer
This quantized model comes with no warranty. It has been developed only for research purposes.
|
MAAT-EL-DUAT/ONE-OF-THE-SONS-OF-GOD-IS-DEAD-FOREVER | MAAT-EL-DUAT | 2025-04-30T18:09:12Z | 0 | 0 | null | [
"region:us"
] | null | 2025-04-30T18:08:38Z | HA HA HA HA HA HA
HA HA HA HA HA HA
HA HA HA HA HA HA
ALLAH DOES NOT HAVE A SON
BAHAMUT MAT-MET SUDAN
BUT HE DOES INDEED HAVE A SON |
Yuhan123/ppo-cn-RM-reading-level-12th-1-steps-10000-epoch-999-best-eval-score-0.132 | Yuhan123 | 2025-04-30T17:54:39Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-30T17:52:05Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
zli12321/VideoHallu-R1-v3 | zli12321 | 2025-04-30T16:30:07Z | 0 | 0 | null | [
"safetensors",
"qwen2_5_vl",
"license:apache-2.0",
"region:us"
] | null | 2025-04-30T16:26:12Z | ---
license: apache-2.0
---
|
aptro/Llama-3.2-1B-samsun | aptro | 2025-04-30T12:06:44Z | 0 | 0 | peft | [
"peft",
"safetensors",
"lora",
"llama-3",
"samsum",
"summarization",
"base_model:meta-llama/Llama-3.2-1B",
"base_model:adapter:meta-llama/Llama-3.2-1B",
"license:llama2",
"region:us"
] | summarization | 2025-04-30T11:59:06Z | ---
license: llama2
tags:
- peft
- lora
- llama-3
- samsum
- summarization
library_name: peft
base_model: meta-llama/Llama-3.2-1B
---
# 🦙 LLaMA 3.2 1B + SAMSum LoRA Adapter
This is a LoRA adapter trained on the [SAMSum dataset](https://huggingface.co/datasets/samsum) for dialogue summarization using `meta-llama/Llama-3.2-1B` as the base model.
## 🛠️ Training Details
- **Base model**: `meta-llama/Llama-3.2-1B`
- **LoRA config**: r=8, alpha=32, dropout=0.01
- **Epochs**: 1
- **Batch size**: 1 (accumulation: 4)
- **Precision**: 8-bit (bitsandbytes)
- **Device**: Google Colab (T4 16GB)
## 🔧 Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.2-1B", device_map="auto")
model = PeftModel.from_pretrained(base, "aptro/Llama-3.2-1B-samsun")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.2-1B")
```
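A hedged generation example, continuing from the loading snippet above; the prompt template is an assumption, since the card does not state the format used during training:

```python
dialogue = "Amanda: I baked cookies. Do you want some?\nJerry: Sure! What kind?"
prompt = f"Summarize the following dialogue:\n{dialogue}\nSummary:"  # assumed template

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=60)
# Decode only the newly generated tokens
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```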
## License
This model follows the LLaMA 2 Community License Agreement.
|
Gensyn/Qwen2.5-32B-Instruct-bnb-4bit | Gensyn | 2025-04-30T11:50:14Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"unsloth",
"conversational",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"arxiv:2309.00071",
"arxiv:2407.10671",
"base_model:Qwen/Qwen2.5-32B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-32B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2025-04-30T09:54:22Z | ---
base_model: Qwen/Qwen2.5-32B-Instruct
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
library_name: transformers
license: apache-2.0
tags:
- unsloth
- transformers
---
# Finetune Llama 3.1, Gemma 2, Mistral 2-5x faster with 70% less memory via Unsloth!
We have a Qwen 2.5 (all model sizes) [free Google Colab Tesla T4 notebook](https://colab.research.google.com/drive/1Kose-ucXO1IBaZq5BvbwWieuubP7hxvQ?usp=sharing).
Also a [Qwen 2.5 conversational style notebook](https://colab.research.google.com/drive/1qN1CEalC70EO1wGKhNxs1go1W9So61R5?usp=sharing).
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/Discord%20button.png" width="200"/>](https://discord.gg/unsloth)
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
## ✨ Finetune for Free
All notebooks are **beginner friendly**! Add your dataset, click "Run All", and you'll get a 2x faster finetuned model which can be exported to GGUF, vLLM or uploaded to Hugging Face.
| Unsloth supports | Free Notebooks | Performance | Memory use |
|-----------------|--------------------------------------------------------------------------------------------------------------------------|-------------|----------|
| **Llama-3.1 8b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing) | 2.4x faster | 58% less |
| **Phi-3.5 (mini)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1lN6hPQveB_mHSnTOYifygFcrO8C1bxq4?usp=sharing) | 2x faster | 50% less |
| **Gemma-2 9b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1vIrqH5uYDQwsJ4-OO3DErvuv4pBgVwk4?usp=sharing) | 2.4x faster | 58% less |
| **Mistral 7b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Dyauq4kTZoLewQ1cApceUQVNcnnNTzg_?usp=sharing) | 2.2x faster | 62% less |
| **TinyLlama** | [▶️ Start on Colab](https://colab.research.google.com/drive/1AZghoNBQaMDgWJpi4RbffGM1h6raLUj9?usp=sharing) | 3.9x faster | 74% less |
| **DPO - Zephyr** | [▶️ Start on Colab](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) | 1.9x faster | 19% less |
- This [conversational notebook](https://colab.research.google.com/drive/1Aau3lgPzeZKQ-98h69CCu1UJcvIBLmy2?usp=sharing) is useful for ShareGPT ChatML / Vicuna templates.
- This [text completion notebook](https://colab.research.google.com/drive/1ef-tab5bhkvWmBOObepl1WgJvfvSzn5Q?usp=sharing) is for raw text. This [DPO notebook](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) replicates Zephyr.
- \* Kaggle has 2x T4s, but we use 1. Due to overhead, 1x T4 is 5x faster.
# Qwen2.5-32B-Instruct
## Introduction
Qwen2.5 is the latest series of Qwen large language models. For Qwen2.5, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters. Qwen2.5 brings the following improvements upon Qwen2:
- Significantly **more knowledge** and has greatly improved capabilities in **coding** and **mathematics**, thanks to our specialized expert models in these domains.
- Significant improvements in **instruction following**, **generating long texts** (over 8K tokens), **understanding structured data** (e.g, tables), and **generating structured outputs** especially JSON. **More resilient to the diversity of system prompts**, enhancing role-play implementation and condition-setting for chatbots.
- **Long-context Support** for up to 128K tokens, with generation of up to 8K tokens.
- **Multilingual support** for over 29 languages, including Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, Arabic, and more.
**This repo contains the instruction-tuned 32B Qwen2.5 model**, which has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Architecture: transformers with RoPE, SwiGLU, RMSNorm, and Attention QKV bias
- Number of Parameters: 32.5B
- Number of Parameters (Non-Embedding): 31.0B
- Number of Layers: 64
- Number of Attention Heads (GQA): 40 for Q and 8 for KV
- Context Length: Full 131,072 tokens and generation 8192 tokens
- Please refer to [this section](#processing-long-texts) for detailed instructions on how to deploy Qwen2.5 for handling long texts.
For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2.5/), [GitHub](https://github.com/QwenLM/Qwen2.5), and [Documentation](https://qwen.readthedocs.io/en/latest/).
## Requirements
The code for Qwen2.5 is included in the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.
With `transformers<4.37.0`, you will encounter the following error:
```
KeyError: 'qwen2'
```
## Quickstart
Here is a code snippet using `apply_chat_template` that shows how to load the tokenizer and model and how to generate content.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Qwen/Qwen2.5-32B-Instruct"
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
prompt = "Give me a short introduction to large language model."
messages = [
{"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
**model_inputs,
max_new_tokens=512
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
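print(response)  # display the generated reply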
```
### Processing Long Texts
The current `config.json` is set for context length up to 32,768 tokens.
To handle extensive inputs exceeding 32,768 tokens, we utilize [YaRN](https://arxiv.org/abs/2309.00071), a technique for enhancing model length extrapolation, ensuring optimal performance on lengthy texts.
For supported frameworks, you could add the following to `config.json` to enable YaRN:
```json
{
...,
"rope_scaling": {
"factor": 4.0,
"original_max_position_embeddings": 32768,
"type": "yarn"
}
}
```
For deployment, we recommend using vLLM.
Please refer to our [Documentation](https://qwen.readthedocs.io/en/latest/deployment/vllm.html) for usage if you are not familiar with vLLM.
Presently, vLLM only supports static YaRN, which means the scaling factor remains constant regardless of input length, **potentially impacting performance on shorter texts**.
We advise adding the `rope_scaling` configuration only when processing long contexts is required.
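Below is a minimal offline-inference sketch with vLLM's Python API (an illustrative example, not an official recipe; the GPU count in `tensor_parallel_size` is an assumption about your hardware). vLLM reads the `rope_scaling` entry from the modified `config.json` automatically, so no extra flags are needed here:
```python
from vllm import LLM, SamplingParams

# vLLM picks up the YaRN `rope_scaling` settings from config.json on load.
llm = LLM(model="Qwen/Qwen2.5-32B-Instruct", tensor_parallel_size=4)  # GPU count is an assumption
sampling = SamplingParams(temperature=0.7, top_p=0.8, max_tokens=512)
outputs = llm.generate(["Give me a short introduction to large language models."], sampling)
print(outputs[0].outputs[0].text)
```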
## Evaluation & Performance
Detailed evaluation results are reported in this [📑 blog](https://qwenlm.github.io/blog/qwen2.5/).
For requirements on GPU memory and the respective throughput, see results [here](https://qwen.readthedocs.io/en/latest/benchmark/speed_benchmark.html).
## Citation
If you find our work helpful, feel free to cite us.
```
@misc{qwen2.5,
title = {Qwen2.5: A Party of Foundation Models},
url = {https://qwenlm.github.io/blog/qwen2.5/},
author = {Qwen Team},
month = {September},
year = {2024}
}
@article{qwen2,
title={Qwen2 Technical Report},
author={An Yang and Baosong Yang and Binyuan Hui and Bo Zheng and Bowen Yu and Chang Zhou and Chengpeng Li and Chengyuan Li and Dayiheng Liu and Fei Huang and Guanting Dong and Haoran Wei and Huan Lin and Jialong Tang and Jialin Wang and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Ma and Jin Xu and Jingren Zhou and Jinze Bai and Jinzheng He and Junyang Lin and Kai Dang and Keming Lu and Keqin Chen and Kexin Yang and Mei Li and Mingfeng Xue and Na Ni and Pei Zhang and Peng Wang and Ru Peng and Rui Men and Ruize Gao and Runji Lin and Shijie Wang and Shuai Bai and Sinan Tan and Tianhang Zhu and Tianhao Li and Tianyu Liu and Wenbin Ge and Xiaodong Deng and Xiaohuan Zhou and Xingzhang Ren and Xinyu Zhang and Xipin Wei and Xuancheng Ren and Yang Fan and Yang Yao and Yichang Zhang and Yu Wan and Yunfei Chu and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zhihao Fan},
journal={arXiv preprint arXiv:2407.10671},
year={2024}
}
``` |
lungexpandproprice/LungExpand-Pro | lungexpandproprice | 2025-04-30T11:38:07Z | 0 | 0 | null | [
"region:us"
] | null | 2025-04-30T11:36:42Z | [LungExpand Pro](https://www.facebook.com/groups/701097575726116) is a natural formula that supports healthy, effortless breathing and uninterrupted sleep. The dietary supplement combines research-backed herbal extracts and natural components that can enhance lung capacity, reduce congestion, and optimize respiratory functions.
It is formulated to assist individuals struggling with constricted breathing due to blocked airways and diverse lung conditions. This [LungExpand Pro](https://www.facebook.com/groups/697167042687088) review closely examines the legitimacy of the formula based on manufacturer claims, scientific facts, and user experiences.
Crafted to offer comprehensive respiratory health support, [**LungExpand Pro**](https://www.facebook.com/TryLungExpandProReviews) has been sparking interest in people drastically affected by poor oxygenation and lethargy. It proposes a deep-seated method that not only inhibits the progression of respiratory conditions but also helps maintain healthy lung tissues and immune functions.
## [LungExpand Pro – Get Your Best Discount Online Hurry!!](https://sale365day.com/order-lungexpand)
In this [LungExpand Pro](https://www.facebook.com/TryLungExpandProReviews) review, we will go through each crucial aspect of the supplement in detail to unfold the truth behind its marketed claims. Let's find out how it can restore a compromised respiratory system and upgrade the quality of life.
## LungExpand Pro Reviews: What Is It?
[LungExpand Pro](https://www.facebook.com/groups/701097575726116) is a natural lung detox formula that promotes recovery from respiratory infections and breathing struggles. With a potent blend of herbal components and nourishing nutrients, the supplement reduces symptoms of chest congestion caused by mucus buildup, respiratory nerve damage, and airway blockages.
High-quality ingredients function as expectorants for mucus relief and reduce congestion to improve breathing naturally. [LungExpand Pro](https://www.facebook.com/groups/697167042687088) capsules are made under strict dietary guidelines to accommodate people with varying dietary restrictions and intolerances. It is manufactured in FDA-inspected lab facilities using ingredients that are vigilantly tested for purity, safety, and potency.
Tailored to specifically address nutrient deficiencies hindering respiratory health, the LungExpand Pro formula does not incorporate any synthetic additives or chemicals that degrade supplement quality and user well-being.
## [!!Special Discount Click Here To Get 65% Off Today!!](https://sale365day.com/order-lungexpand)
## How Does LungExpand Pro Work?
[LungExpand Pro](https://www.facebook.com/groups/701097575726116) is a unique breathing solution that claims to tackle the underlying causes of breathing problems and poor oxygenation. Studies backing the formulation identify mucus buildup in the airways as a major contributor to obstructive breathing.
The formula works to reduce mucus naturally using an optimally dosed blend of decongestants and vitamins for lung health. Reducing inflammation in the airways improves airflow and respiratory efficiency, leading to better oxygenation, relaxed airways, and optimum sleep.
Natural ingredients used in the [LungExpand Pro](https://www.facebook.com/TryLungExpandProReviews) formula are traditionally established to fight respiratory infections and promote breathing efficiency. The formula strengthens the body's immune system to combat pathogens and protect against respiratory conditions.
It supports the body's natural ability to regulate phlegm production and expel mucus. [LungExpand Pro](https://www.facebook.com/groups/697167042687088) effectively deals with conditions like excessive snoring and sleep apnea by preventing airway blockage and supporting easy breathing. It relaxes throat muscles, soothes inflammation, and eliminates cough and soreness.
## LungExpand Pro Pricing And Availability
[LungExpand Pro](https://www.facebook.com/TryLungExpandProReviews) is made available to customers at a reasonable price under various packages. Similar products found on other websites or retail stores may be replicas containing substandard and harmful ingredients.
Purchasing directly from the [LungExpand Pro official website](https://sale365day.com/order-lungexpand) will guarantee authenticity and will be accompanied by a money-back guarantee. Here are the bundles one can choose from:
- **1 Bottle (30-day supply) - $69 + Shipping**
- **3 Bottles (90-day supply) - $59/bottle + Shipping**
- **6 Bottles (180-day supply) - $49/bottle + Free Shipping**
Every [LungExpand Pro](https://www.facebook.com/groups/697167042687088/posts/697169519353507/) order is backed by a **60-day money-back guarantee** that ensures the credibility of the supplement. If the formula does not satisfy your expectations, you can request a refund and save your money.
## [Click Here To Purchase Directly From The Official Website](https://sale365day.com/order-lungexpand)
|
a-mannion/umls-kgi-bert-en | a-mannion | 2025-04-30T11:07:33Z | 20 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"feature-extraction",
"medical",
"fill-mask",
"en",
"arxiv:2307.11170",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | fill-mask | 2023-11-13T16:43:02Z | ---
license: apache-2.0
language:
- en
tags:
- medical
pipeline_tag: fill-mask
---
# UMLS-KGI-BERT-EN
<!-- Provide a quick summary of what the model is/does. -->
This is a BERT encoder trained on the English-language section of the European Clinical Case corpus as well as the UMLS metathesaurus knowledge graph, as described in [this paper](https://aclanthology.org/2023.clinicalnlp-1.35/).
The training corpus consists of a custom combination of clinical documents from the E3C and text sequences derived from the metathesaurus (see our [Github repo](https://github.com/ap-mannion/bertify-umls) for more details).
## Model Details
This model was trained using a multi-task approach combining Masked Language Modelling with knowledge-graph-based classification/fill-mask type objectives.
The idea behind this framework was to try to improve the robustness of specialised biomedical BERT models by having them learn from structured data as well as natural language, while remaining in the cross-entropy-based learning paradigm.
- **Developed by:** Aidan Mannion
- **Funded by:** GENCI-IDRIS grant AD011013535R1
- **Model type:** DistilBERT
- **Language(s) (NLP):** English
For further details on the model architecture, training objectives, hardware & software used, as well as the preliminary downstream evaluation experiments carried out, refer to the [ArXiv paper](https://arxiv.org/abs/2307.11170).
### UMLS-KGI Models
| **Model** | **Model Repo** | **Dataset Size** | **Base Architecture** | **Base Model** | **Total KGI training steps** |
|:--------------------------:|:--------------------------------------------------------------------------:|:----------------:|:---------------------:|:---------------------------------------------------------------------------------------------:|:----------------------------:|
| UMLS-KGI-BERT-multilingual | [url-multi](https://huggingface.co/ap-mannion/umls-kgi-bert-multilingual) | 940MB | DistilBERT | n/a | 163,904 |
| UMLS-KGI-BERT-FR | [url-fr](https://huggingface.co/ap-mannion/umls-kgi-bert-fr) | 604MB | DistilBERT | n/a | 126,720 |
| UMLS-KGI-BERT-EN | [url-en](https://huggingface.co/ap-mannion/umls-kgi-bert-en) | 174MB | DistilBERT | n/a | 19,008 |
| UMLS-KGI-BERT-ES | [url-es](https://huggingface.co/ap-mannion/umls-kgi-bert-es) | 162MB | DistilBERT | n/a | 18,176 |
| DrBERT-UMLS-KGI | [url-drbert](https://huggingface.co/ap-mannion/drbert-umls-kgi) | 604MB | CamemBERT/RoBERTa | [DrBERT-4GB](https://huggingface.co/Dr-BERT/DrBERT-4GB) | 126,720 |
| PubMedBERT-UMLS-KGI | [url-pubmedbert](https://huggingface.co/ap-mannion/pubmedbert-umls-kgi) | 174MB | BERT | microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract | 19,008 |
| BioRoBERTa-ES-UMLS-KGI | [url-bioroberta](https://huggingface.co/ap-mannion/bioroberta-es-umls-kgi) | 162MB | RoBERTa | [RoBERTa-base-biomedical-es](https://huggingface.co/PlanTL-GOB-ES/roberta-base-biomedical-es) | 18,176 |
### Direct/Downstream Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
This model is intended for use in experimental clinical/biomedical NLP work, either as a part of a larger system requiring text encoding or fine-tuned on a specific downstream task requiring clinical language modelling.
It has **not** been sufficiently tested for accuracy, robustness and bias to be used in production settings.
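As a quick, illustrative sketch (the example sentence and mask are ours, not drawn from the evaluation data), the encoder can be queried through the standard `fill-mask` pipeline:
```python
from transformers import pipeline

# DistilBERT-style encoder, so the mask token is [MASK].
fill_mask = pipeline("fill-mask", model="a-mannion/umls-kgi-bert-en")
for pred in fill_mask("The patient was diagnosed with [MASK] pneumonia."):
    print(f"{pred['token_str']}\t{pred['score']:.3f}")
```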
### Out-of-Scope Use
Experiments on general-domain data suggest that, given its specialised training corpus, this model is **not** suitable for use on out-of-domain NLP tasks, and we recommend that it only be used for processing clinical text.
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
- [European Clinical Case Corpus](https://live.european-language-grid.eu/catalogue/corpus/7618)
- [UMLS Metathesaurus](https://www.nlm.nih.gov/research/umls/index.html)
#### Training Hyperparameters
- sequence length: 256
- learning rate 7.5e-5
- linear learning rate schedule with 10,770 warmup steps
- effective batch size 1500 (15 sequences per batch x 100 gradient accumulation steps)
- MLM masking probability 0.15
**Training regime:** The model was trained with fp16 non-mixed precision, using the AdamW optimizer with default parameters.
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
## Citation [BibTeX]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
```
@inproceedings{mannion-etal-2023-umls,
title = "{UMLS}-{KGI}-{BERT}: Data-Centric Knowledge Integration in Transformers for Biomedical Entity Recognition",
author = "Mannion, Aidan and
Schwab, Didier and
Goeuriot, Lorraine",
booktitle = "Proceedings of the 5th Clinical Natural Language Processing Workshop",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.clinicalnlp-1.35",
pages = "312--322",
abstract = "Pre-trained transformer language models (LMs) have in recent years become the dominant paradigm in applied NLP. These models have achieved state-of-the-art performance on tasks such as information extraction, question answering, sentiment analysis, document classification and many others. In the biomedical domain, significant progress has been made in adapting this paradigm to NLP tasks that require the integration of domain-specific knowledge as well as statistical modelling of language. In particular, research in this area has focused on the question of how best to construct LMs that take into account not only the patterns of token distribution in medical text, but also the wealth of structured information contained in terminology resources such as the UMLS. This work contributes a data-centric paradigm for enriching the language representations of biomedical transformer-encoder LMs by extracting text sequences from the UMLS.This allows for graph-based learning objectives to be combined with masked-language pre-training. Preliminary results from experiments in the extension of pre-trained LMs as well as training from scratch show that this framework improves downstream performance on multiple biomedical and clinical Named Entity Recognition (NER) tasks. All pre-trained models, data processing pipelines and evaluation scripts will be made publicly available.",
}
```
```
@misc{mannion2023umlskgibert,
title={UMLS-KGI-BERT: Data-Centric Knowledge Integration in Transformers for Biomedical Entity Recognition},
      author={Aidan Mannion and Thierry Chevalier and Didier Schwab and Lorraine Goeuriot},
year={2023},
eprint={2307.11170},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
TOMFORD79/Hanx | TOMFORD79 | 2025-04-30T10:37:25Z | 0 | 0 | null | [
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | any-to-any | 2025-04-30T10:10:26Z | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
phililp-arnold/e7b9dacf-78fb-495a-a9d1-bac12f5ec105 | phililp-arnold | 2025-04-30T10:07:47Z | 0 | 0 | peft | [
"peft",
"generated_from_trainer",
"base_model:WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0",
"base_model:adapter:WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0",
"region:us"
] | null | 2025-04-30T10:07:17Z | ---
library_name: peft
tags:
- generated_from_trainer
base_model: WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0
model-index:
- name: phililp-arnold/e7b9dacf-78fb-495a-a9d1-bac12f5ec105
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# phililp-arnold/e7b9dacf-78fb-495a-a9d1-bac12f5ec105
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3407
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.3
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3 |
vertings6/a9b9d746-4522-42c0-b1ad-4bf0f76727d1 | vertings6 | 2025-04-30T06:20:10Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2-1.5B",
"base_model:adapter:unsloth/Qwen2-1.5B",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-04-30T06:05:54Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2-1.5B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: a9b9d746-4522-42c0-b1ad-4bf0f76727d1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: true
adapter: lora
base_model: unsloth/Qwen2-1.5B
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- 1767352bfea79a80_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/1767352bfea79a80_train_data.json
type:
field_instruction: source_text
field_output: target_text
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 144
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: vertings6/a9b9d746-4522-42c0-b1ad-4bf0f76727d1
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 3.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 4
mixed_precision: bf16
mlflow_experiment_name: /tmp/1767352bfea79a80_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 2048
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 050b9da8-ecfe-4368-84d5-6255fb964340
wandb_project: s56-32
wandb_run: your_name
wandb_runid: 050b9da8-ecfe-4368-84d5-6255fb964340
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# a9b9d746-4522-42c0-b1ad-4bf0f76727d1
This model is a fine-tuned version of [unsloth/Qwen2-1.5B](https://huggingface.co/unsloth/Qwen2-1.5B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5248
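A minimal inference sketch, assuming the LoRA adapter weights in this repo are applied on top of the base model (the example prompt is illustrative; the 4-bit quantization used during training is optional at inference time):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model, then attach this repo's LoRA adapter.
base = AutoModelForCausalLM.from_pretrained("unsloth/Qwen2-1.5B", device_map="auto")
model = PeftModel.from_pretrained(base, "vertings6/a9b9d746-4522-42c0-b1ad-4bf0f76727d1")
tokenizer = AutoTokenizer.from_pretrained("unsloth/Qwen2-1.5B")

inputs = tokenizer("Source text to transform:", return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```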
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.6995 | 0.0075 | 200 | 0.5248 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
steven6688/DrivingTest | steven6688 | 2025-04-30T03:25:30Z | 0 | 0 | null | [
"gguf",
"llama",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-04-30T02:55:04Z | ---
license: apache-2.0
---
|
hxyscott/math-decontamination-4.1-mini-rank32-error_removed-7epoch | hxyscott | 2025-04-29T23:45:11Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-29T14:05:28Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
yyang12/chatmusican-testpush | yyang12 | 2025-04-29T23:16:37Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"autotrain",
"text-generation-inference",
"text-generation",
"peft",
"conversational",
"base_model:PrunaAI/m-a-p-ChatMusician-bnb-4bit-smashed",
"base_model:finetune:PrunaAI/m-a-p-ChatMusician-bnb-4bit-smashed",
"license:other",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-29T21:44:30Z | ---
tags:
- autotrain
- text-generation-inference
- text-generation
- peft
library_name: transformers
base_model: PrunaAI/m-a-p-ChatMusician-bnb-4bit-smashed
widget:
- messages:
- role: user
content: What is your favorite condiment?
license: other
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
``` |
Elio5074/emiliomodel1 | Elio5074 | 2025-04-29T20:04:11Z | 0 | 0 | null | [
"license:other",
"region:us"
] | null | 2025-04-21T16:42:08Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
--- |
Original-Video-Link-18-paro-aarti/Full.Clip.Paro.Aarti.viral.dance.Today.Video.official | Original-Video-Link-18-paro-aarti | 2025-04-29T19:17:14Z | 0 | 0 | null | [
"region:us"
] | null | 2025-04-29T19:16:29Z | [![Foo](https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif)](https://tinyurl.com/yd5fmvay?dfhgKasbonStudiosdfg)
Actor Paro Aarti Original Video video took the internet by storm and amazed viewers on various social media platforms. Actor Paro Aarti, a young and talented digital creator, recently became famous thanks to this interesting video.
Leaked Video Actor Paro Aarti Original Video Viral Video Leaked on X Twitter
Actor Paro Aarti Original Video video official twitter
Leaked Video Actor Paro Aarti Original Video Viral Video Leaked on X Twitter. |
RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5-gguf | RichardErkhov | 2025-04-29T17:57:05Z | 0 | 0 | null | [
"gguf",
"arxiv:2305.18290",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-29T09:30:31Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5 - GGUF
- Model creator: https://huggingface.co/RyanYr/
- Original model: https://huggingface.co/RyanYr/reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5.Q2_K.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5-gguf/blob/main/reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5.Q2_K.gguf) | Q2_K | 2.97GB |
| [reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5-gguf/blob/main/reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5.IQ3_XS.gguf) | IQ3_XS | 3.28GB |
| [reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5.IQ3_S.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5-gguf/blob/main/reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5.IQ3_S.gguf) | IQ3_S | 3.43GB |
| [reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5-gguf/blob/main/reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5.Q3_K_S.gguf) | Q3_K_S | 3.41GB |
| [reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5.IQ3_M.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5-gguf/blob/main/reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5.IQ3_M.gguf) | IQ3_M | 3.53GB |
| [reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5.Q3_K.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5-gguf/blob/main/reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5.Q3_K.gguf) | Q3_K | 3.74GB |
| [reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5-gguf/blob/main/reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5.Q3_K_M.gguf) | Q3_K_M | 3.74GB |
| [reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5-gguf/blob/main/reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5.Q3_K_L.gguf) | Q3_K_L | 4.03GB |
| [reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5-gguf/blob/main/reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5.IQ4_XS.gguf) | IQ4_XS | 4.17GB |
| [reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5.Q4_0.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5-gguf/blob/main/reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5.Q4_0.gguf) | Q4_0 | 4.34GB |
| [reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5-gguf/blob/main/reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5.IQ4_NL.gguf) | IQ4_NL | 4.38GB |
| [reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5-gguf/blob/main/reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5.Q4_K_S.gguf) | Q4_K_S | 4.36GB |
| [reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5.Q4_K.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5-gguf/blob/main/reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5.Q4_K.gguf) | Q4_K | 4.57GB |
| [reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5-gguf/blob/main/reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5.Q4_K_M.gguf) | Q4_K_M | 4.57GB |
| [reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5.Q4_1.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5-gguf/blob/main/reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5.Q4_1.gguf) | Q4_1 | 4.77GB |
| [reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5.Q5_0.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5-gguf/blob/main/reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5.Q5_0.gguf) | Q5_0 | 5.21GB |
| [reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5-gguf/blob/main/reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5.Q5_K_S.gguf) | Q5_K_S | 5.21GB |
| [reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5.Q5_K.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5-gguf/blob/main/reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5.Q5_K.gguf) | Q5_K | 5.33GB |
| [reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5-gguf/blob/main/reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5.Q5_K_M.gguf) | Q5_K_M | 5.33GB |
| [reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5.Q5_1.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5-gguf/blob/main/reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5.Q5_1.gguf) | Q5_1 | 5.65GB |
| [reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5.Q6_K.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5-gguf/blob/main/reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5.Q6_K.gguf) | Q6_K | 6.14GB |
| [reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5.Q8_0.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5-gguf/blob/main/reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5.Q8_0.gguf) | Q8_0 | 7.94GB |
Original model description:
---
base_model: RyanYr/reflect_mini8Bit_om2-460k_sft-t1
library_name: transformers
model_name: reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5
This model is a fine-tuned version of [RyanYr/reflect_mini8Bit_om2-460k_sft-t1](https://huggingface.co/RyanYr/reflect_mini8Bit_om2-460k_sft-t1).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="RyanYr/reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/yyr/huggingface/runs/x18ez61x)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
### Framework versions
- TRL: 0.12.0.dev0
- Transformers: 4.45.2
- Pytorch: 2.5.1
- Datasets: 3.1.0
- Tokenizers: 0.20.3
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
mradermacher/Qwen2.5-0.5b-Test-ft-GGUF | mradermacher | 2025-04-29T17:37:09Z | 191 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"sft",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"base_model:KingNish/Qwen2.5-0.5b-Test-ft",
"base_model:quantized:KingNish/Qwen2.5-0.5b-Test-ft",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-02-24T21:01:43Z | ---
base_model: KingNish/Qwen2.5-0.5b-Test-ft
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- sft
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/KingNish/Qwen2.5-0.5b-Test-ft
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Qwen2.5-0.5b-Test-ft-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
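As one option for a quick local test, `llama-cpp-python` can fetch a quant directly from this repo (a sketch, not an official recipe; the filename matches the Q4_K_M row in the table below):
```python
from llama_cpp import Llama  # pip install llama-cpp-python huggingface_hub

# Downloads the chosen quant from the Hub and runs a short completion.
llm = Llama.from_pretrained(
    repo_id="mradermacher/Qwen2.5-0.5b-Test-ft-GGUF",
    filename="Qwen2.5-0.5b-Test-ft.Q4_K_M.gguf",
)
out = llm("Q: What does GGUF stand for? A:", max_tokens=64)
print(out["choices"][0]["text"])
```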
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5b-Test-ft-GGUF/resolve/main/Qwen2.5-0.5b-Test-ft.Q3_K_S.gguf) | Q3_K_S | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5b-Test-ft-GGUF/resolve/main/Qwen2.5-0.5b-Test-ft.Q2_K.gguf) | Q2_K | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5b-Test-ft-GGUF/resolve/main/Qwen2.5-0.5b-Test-ft.IQ4_XS.gguf) | IQ4_XS | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5b-Test-ft-GGUF/resolve/main/Qwen2.5-0.5b-Test-ft.Q3_K_M.gguf) | Q3_K_M | 0.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5b-Test-ft-GGUF/resolve/main/Qwen2.5-0.5b-Test-ft.Q3_K_L.gguf) | Q3_K_L | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5b-Test-ft-GGUF/resolve/main/Qwen2.5-0.5b-Test-ft.Q4_K_S.gguf) | Q4_K_S | 0.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5b-Test-ft-GGUF/resolve/main/Qwen2.5-0.5b-Test-ft.Q4_K_M.gguf) | Q4_K_M | 0.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5b-Test-ft-GGUF/resolve/main/Qwen2.5-0.5b-Test-ft.Q5_K_S.gguf) | Q5_K_S | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5b-Test-ft-GGUF/resolve/main/Qwen2.5-0.5b-Test-ft.Q5_K_M.gguf) | Q5_K_M | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5b-Test-ft-GGUF/resolve/main/Qwen2.5-0.5b-Test-ft.Q6_K.gguf) | Q6_K | 0.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5b-Test-ft-GGUF/resolve/main/Qwen2.5-0.5b-Test-ft.Q8_0.gguf) | Q8_0 | 0.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5b-Test-ft-GGUF/resolve/main/Qwen2.5-0.5b-Test-ft.f16.gguf) | f16 | 1.1 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
jjeccles/qwen30430-filteranddocheadLora | jjeccles | 2025-04-29T17:30:16Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen3",
"trl",
"en",
"base_model:unsloth/Qwen3-1.7B",
"base_model:finetune:unsloth/Qwen3-1.7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-04-29T17:30:05Z | ---
base_model: unsloth/Qwen3-1.7B
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** jjeccles
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen3-1.7B
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
10-Paro-Aarti-Viral-Video-Original-Shoot/Original.Clip.Paro.Aarti.Viral.Video.Leaks.official | 10-Paro-Aarti-Viral-Video-Original-Shoot | 2025-04-29T16:04:27Z | 0 | 0 | null | [
"region:us"
] | null | 2025-04-29T16:04:19Z |
[![fsd](http://4.bp.blogspot.com/-VFcup4RzDQY/Upiobuokb5I/AAAAAAAAAV0/64yKpZilDCg/s1600/oie_nxv3mlmduAj1.gif)](https://sdu.sk/9Ip)
[🔴 ➤► Click Here to 👉👉 (sign up and watch full video HD)](https://sdu.sk/9Ip)
[🔴 ➤► Click Here to 👉👉 (Full video Link)](https://sdu.sk/9Ip)
|
isbondarev/dummy-model | isbondarev | 2025-04-29T14:36:29Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"camembert",
"fill-mask",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2025-04-29T14:36:09Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
lemleyl1613/medical-question-model | lemleyl1613 | 2025-04-29T04:14:50Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-04-29T03:35:04Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
k1h0/Qwen2.5-coder-7B-Instruct-query_ns | k1h0 | 2025-04-29T04:10:46Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"freeze",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-Coder-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-Coder-7B-Instruct",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-29T03:43:48Z | ---
library_name: transformers
license: other
base_model: Qwen/Qwen2.5-Coder-7B-Instruct
tags:
- llama-factory
- freeze
- generated_from_trainer
model-index:
- name: qwen_ns
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# qwen_ns
This model is a fine-tuned version of [Qwen/Qwen2.5-Coder-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-7B-Instruct) on the codes_330k_ns dataset.
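Since the card does not document inference, here is a minimal usage sketch with 🤗 Transformers; the repository id is taken from this repo's name, and the chat-template usage is an assumption based on the Qwen2.5 Instruct base model:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical usage sketch; repo id taken from this repository's name.
model_id = "k1h0/Qwen2.5-coder-7B-Instruct-query_ns"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Qwen2.5 Instruct models ship with a chat template.
messages = [{"role": "user", "content": "Write a SQL query that counts orders per day."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```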
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 8
- total_train_batch_size: 512
- total_eval_batch_size: 32
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: cosine
- num_epochs: 1.0
### Training results
### Framework versions
- Transformers 4.48.2
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0
|
secmlr/DS-Noisy-N_DS-Clean-N_QWQ-Clean-N_QWQ-Noisy-N_Qwen2.5-7B-Instruct_sft | secmlr | 2025-04-28T17:04:00Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-28T11:04:59Z | ---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: DS-Noisy-N_DS-Clean-N_QWQ-Clean-N_QWQ-Noisy-N_Qwen2.5-7B-Instruct_sft
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DS-Noisy-N_DS-Clean-N_QWQ-Clean-N_QWQ-Noisy-N_Qwen2.5-7B-Instruct_sft
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the DS-Noisy-N, DS-Clean-N, QWQ-Clean-N, and QWQ-Noisy-N datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 12
- total_train_batch_size: 24
- total_eval_batch_size: 16
- optimizer: OptimizerNames.ADAMW_TORCH with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0
|
OpenVINO/Qwen2.5-1.5B-Instruct-fp16-ov | OpenVINO | 2025-04-28T10:57:08Z | 230 | 0 | null | [
"openvino",
"qwen2",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"base_model:Qwen/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-1.5B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-04-04T11:23:53Z | ---
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct/blob/main/LICENSE
base_model:
- Qwen/Qwen2.5-1.5B-Instruct
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
---
# Qwen2.5-1.5B-Instruct-fp16-ov
* Model creator: [Qwen](https://huggingface.co/Qwen)
* Original model: [Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct)
## Description
This is the [Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct) model converted to the [OpenVINO™ IR](https://docs.openvino.ai/2025/documentation/openvino-ir-format.html) (Intermediate Representation) format with weights compressed to FP16.
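For reference, an FP16 export along these lines can be produced with `optimum-cli`; the exact command used for this repository is not documented, so the invocation below is an assumption:

```
optimum-cli export openvino --model Qwen/Qwen2.5-1.5B-Instruct --weight-format fp16 qwen2.5-1.5b-instruct-fp16-ov
```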
## Compatibility
The provided OpenVINO™ IR model is compatible with:
* OpenVINO version 2025.1.0 and higher
* Optimum Intel 1.24.0 and higher
## Running Model Inference with [Optimum Intel](https://huggingface.co/docs/optimum/intel/index)
1. Install packages required for using [Optimum Intel](https://huggingface.co/docs/optimum/intel/index) integration with the OpenVINO backend:
```
pip install optimum[openvino]
```
2. Run model inference:
```
from transformers import AutoTokenizer
from optimum.intel.openvino import OVModelForCausalLM
model_id = "OpenVINO/qwen2.5-1.5b-instruct-fp16-ov"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = OVModelForCausalLM.from_pretrained(model_id)
inputs = tokenizer("What is OpenVINO?", return_tensors="pt")
outputs = model.generate(**inputs, max_length=200)
text = tokenizer.batch_decode(outputs)[0]
print(text)
```
For more examples and possible optimizations, refer to the [Inference with Optimum Intel](https://docs.openvino.ai/2025/openvino-workflow-generative/inference-with-optimum-intel.html) guide.
## Running Model Inference with [OpenVINO GenAI](https://github.com/openvinotoolkit/openvino.genai)
1. Install packages required for using OpenVINO GenAI.
```
pip install openvino-genai huggingface_hub
```
2. Download model from HuggingFace Hub
```
import huggingface_hub as hf_hub
model_id = "OpenVINO/qwen2.5-1.5b-instruct-fp16-ov"
model_path = "qwen2.5-1.5b-instruct-fp16-ov"
hf_hub.snapshot_download(model_id, local_dir=model_path)
```
3. Run model inference:
```
import openvino_genai as ov_genai
device = "CPU"
pipe = ov_genai.LLMPipeline(model_path, device)
print(pipe.generate("What is OpenVINO?", max_length=200))
```
More GenAI usage examples can be found in the OpenVINO GenAI library [docs](https://docs.openvino.ai/2025/openvino-workflow-generative/inference-with-genai.html) and [samples](https://github.com/openvinotoolkit/openvino.genai?tab=readme-ov-file#openvino-genai-samples).
You can find more detailed usage examples in OpenVINO Notebooks:
- [LLM](https://openvinotoolkit.github.io/openvino_notebooks/?search=LLM)
- [RAG text generation](https://openvinotoolkit.github.io/openvino_notebooks/?search=RAG+system&tasks=Text+Generation)
## Limitations
Check the original [model card](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct) for limitations.
## Legal information
The original model is distributed under [Apache License Version 2.0](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct/blob/main/LICENSE) license. More details can be found in [Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct).
## Disclaimer
Intel is committed to respecting human rights and avoiding causing or contributing to adverse impacts on human rights. See [Intel’s Global Human Rights Principles](https://www.intel.com/content/dam/www/central-libraries/us/en/documents/policy-human-rights.pdf). Intel’s products and software are intended only to be used in applications that do not cause or contribute to adverse impacts on human rights.
|
Triangle104/GLM-Z1-Rumination-32B-0414-Q8_0-GGUF | Triangle104 | 2025-04-28T08:55:29Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"zh",
"en",
"base_model:THUDM/GLM-Z1-Rumination-32B-0414",
"base_model:quantized:THUDM/GLM-Z1-Rumination-32B-0414",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-04-28T08:51:46Z | ---
base_model: THUDM/GLM-Z1-Rumination-32B-0414
language:
- zh
- en
library_name: transformers
license: mit
pipeline_tag: text-generation
tags:
- llama-cpp
- gguf-my-repo
---
# Triangle104/GLM-Z1-Rumination-32B-0414-Q8_0-GGUF
This model was converted to GGUF format from [`THUDM/GLM-Z1-Rumination-32B-0414`](https://huggingface.co/THUDM/GLM-Z1-Rumination-32B-0414) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/THUDM/GLM-Z1-Rumination-32B-0414) for more details on the model.
---
Introduction
-
The GLM family welcomes a new generation of open-source models, the GLM-4-32B-0414 series, featuring 32 billion parameters. Its performance is comparable to OpenAI's GPT series and DeepSeek's V3/R1 series, and it supports very user-friendly local deployment. GLM-4-32B-Base-0414 was pre-trained on 15T of high-quality data, including a large amount of reasoning-type synthetic data, laying the foundation for subsequent reinforcement learning extensions. In the post-training stage, in addition to human preference alignment for dialogue scenarios, we also enhanced the model's performance in instruction following, engineering code, and function calling using techniques such as rejection sampling and reinforcement learning, strengthening the atomic capabilities required for agent tasks. GLM-4-32B-0414 achieves good results in areas such as engineering code, Artifact generation, function calling, search-based Q&A, and report generation, and on some benchmarks it even rivals larger models like GPT-4o and DeepSeek-V3-0324 (671B).

GLM-Z1-Rumination-32B-0414 is a deep reasoning model with rumination capabilities (benchmarked against OpenAI's Deep Research). Unlike typical deep-thinking models, the rumination model employs longer periods of deep thought to solve more open-ended and complex problems (e.g., writing a comparative analysis of AI development in two cities and their future development plans). The rumination model integrates search tools during its deep thinking process to handle complex tasks, and is trained with multiple rule-based rewards to guide and extend end-to-end reinforcement learning. Z1-Rumination shows significant improvements in research-style writing and complex retrieval tasks.
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/GLM-Z1-Rumination-32B-0414-Q8_0-GGUF --hf-file glm-z1-rumination-32b-0414-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/GLM-Z1-Rumination-32B-0414-Q8_0-GGUF --hf-file glm-z1-rumination-32b-0414-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/GLM-Z1-Rumination-32B-0414-Q8_0-GGUF --hf-file glm-z1-rumination-32b-0414-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/GLM-Z1-Rumination-32B-0414-Q8_0-GGUF --hf-file glm-z1-rumination-32b-0414-q8_0.gguf -c 2048
```
|
VaibhavBhardwaj/radnemo | VaibhavBhardwaj | 2025-04-28T07:19:45Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"token-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2025-04-28T07:16:13Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
TOMFORD79/S6 | TOMFORD79 | 2025-04-28T05:06:02Z | 0 | 0 | null | [
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | any-to-any | 2025-04-28T04:02:34Z | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
lhj1982/my_awesome_billsum_model | lhj1982 | 2025-04-27T16:35:11Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-04-27T09:50:37Z | ---
library_name: transformers
license: apache-2.0
base_model: google-t5/t5-small
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: my_awesome_billsum_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_billsum_model
This model is a fine-tuned version of [google-t5/t5-small](https://huggingface.co/google-t5/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5550
- Rouge1: 0.1487
- Rouge2: 0.0541
- Rougel: 0.1232
- Rougelsum: 0.1237
- Gen Len: 20.0
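The card does not include usage code; a minimal summarization sketch with 🤗 Transformers, assuming the repo id from this repository's name and the usual T5 `summarize:` task prefix, might look like:

```python
from transformers import pipeline

# Hypothetical usage sketch; repo id taken from this repository's name.
summarizer = pipeline("summarization", model="lhj1982/my_awesome_billsum_model")

text = "summarize: The bill amends the Internal Revenue Code to ..."
print(summarizer(text, max_length=64, do_sample=False)[0]["summary_text"])
```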
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: OptimizerNames.ADAMW_TORCH with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 62 | 2.8378 | 0.1359 | 0.0372 | 0.1102 | 0.1105 | 20.0 |
| No log | 2.0 | 124 | 2.6340 | 0.1429 | 0.0477 | 0.1169 | 0.1171 | 20.0 |
| No log | 3.0 | 186 | 2.5736 | 0.1497 | 0.0566 | 0.1247 | 0.1251 | 20.0 |
| No log | 4.0 | 248 | 2.5550 | 0.1487 | 0.0541 | 0.1232 | 0.1237 | 20.0 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.7.0+cu126
- Datasets 3.5.0
- Tokenizers 0.21.1
|
Tahakhan99/flan-t5-ep5-bs4 | Tahakhan99 | 2025-04-27T14:31:01Z | 0 | 0 | null | [
"tensorboard",
"safetensors",
"t5",
"license:apache-2.0",
"region:us"
] | null | 2025-04-27T14:05:24Z | ---
license: apache-2.0
---
|
RichardErkhov/KRX-Trader_-_qwen2.5-inst-test-v3-gguf | RichardErkhov | 2025-04-27T14:06:52Z | 11 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-27T06:14:12Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
qwen2.5-inst-test-v3 - GGUF
- Model creator: https://huggingface.co/KRX-Trader/
- Original model: https://huggingface.co/KRX-Trader/qwen2.5-inst-test-v3/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [qwen2.5-inst-test-v3.Q2_K.gguf](https://huggingface.co/RichardErkhov/KRX-Trader_-_qwen2.5-inst-test-v3-gguf/blob/main/qwen2.5-inst-test-v3.Q2_K.gguf) | Q2_K | 2.81GB |
| [qwen2.5-inst-test-v3.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/KRX-Trader_-_qwen2.5-inst-test-v3-gguf/blob/main/qwen2.5-inst-test-v3.IQ3_XS.gguf) | IQ3_XS | 3.12GB |
| [qwen2.5-inst-test-v3.IQ3_S.gguf](https://huggingface.co/RichardErkhov/KRX-Trader_-_qwen2.5-inst-test-v3-gguf/blob/main/qwen2.5-inst-test-v3.IQ3_S.gguf) | IQ3_S | 3.26GB |
| [qwen2.5-inst-test-v3.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/KRX-Trader_-_qwen2.5-inst-test-v3-gguf/blob/main/qwen2.5-inst-test-v3.Q3_K_S.gguf) | Q3_K_S | 3.25GB |
| [qwen2.5-inst-test-v3.IQ3_M.gguf](https://huggingface.co/RichardErkhov/KRX-Trader_-_qwen2.5-inst-test-v3-gguf/blob/main/qwen2.5-inst-test-v3.IQ3_M.gguf) | IQ3_M | 3.33GB |
| [qwen2.5-inst-test-v3.Q3_K.gguf](https://huggingface.co/RichardErkhov/KRX-Trader_-_qwen2.5-inst-test-v3-gguf/blob/main/qwen2.5-inst-test-v3.Q3_K.gguf) | Q3_K | 3.55GB |
| [qwen2.5-inst-test-v3.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/KRX-Trader_-_qwen2.5-inst-test-v3-gguf/blob/main/qwen2.5-inst-test-v3.Q3_K_M.gguf) | Q3_K_M | 3.55GB |
| [qwen2.5-inst-test-v3.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/KRX-Trader_-_qwen2.5-inst-test-v3-gguf/blob/main/qwen2.5-inst-test-v3.Q3_K_L.gguf) | Q3_K_L | 3.81GB |
| [qwen2.5-inst-test-v3.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/KRX-Trader_-_qwen2.5-inst-test-v3-gguf/blob/main/qwen2.5-inst-test-v3.IQ4_XS.gguf) | IQ4_XS | 3.96GB |
| [qwen2.5-inst-test-v3.Q4_0.gguf](https://huggingface.co/RichardErkhov/KRX-Trader_-_qwen2.5-inst-test-v3-gguf/blob/main/qwen2.5-inst-test-v3.Q4_0.gguf) | Q4_0 | 4.13GB |
| [qwen2.5-inst-test-v3.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/KRX-Trader_-_qwen2.5-inst-test-v3-gguf/blob/main/qwen2.5-inst-test-v3.IQ4_NL.gguf) | IQ4_NL | 4.16GB |
| [qwen2.5-inst-test-v3.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/KRX-Trader_-_qwen2.5-inst-test-v3-gguf/blob/main/qwen2.5-inst-test-v3.Q4_K_S.gguf) | Q4_K_S | 4.15GB |
| [qwen2.5-inst-test-v3.Q4_K.gguf](https://huggingface.co/RichardErkhov/KRX-Trader_-_qwen2.5-inst-test-v3-gguf/blob/main/qwen2.5-inst-test-v3.Q4_K.gguf) | Q4_K | 4.36GB |
| [qwen2.5-inst-test-v3.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/KRX-Trader_-_qwen2.5-inst-test-v3-gguf/blob/main/qwen2.5-inst-test-v3.Q4_K_M.gguf) | Q4_K_M | 4.36GB |
| [qwen2.5-inst-test-v3.Q4_1.gguf](https://huggingface.co/RichardErkhov/KRX-Trader_-_qwen2.5-inst-test-v3-gguf/blob/main/qwen2.5-inst-test-v3.Q4_1.gguf) | Q4_1 | 4.54GB |
| [qwen2.5-inst-test-v3.Q5_0.gguf](https://huggingface.co/RichardErkhov/KRX-Trader_-_qwen2.5-inst-test-v3-gguf/blob/main/qwen2.5-inst-test-v3.Q5_0.gguf) | Q5_0 | 4.95GB |
| [qwen2.5-inst-test-v3.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/KRX-Trader_-_qwen2.5-inst-test-v3-gguf/blob/main/qwen2.5-inst-test-v3.Q5_K_S.gguf) | Q5_K_S | 4.95GB |
| [qwen2.5-inst-test-v3.Q5_K.gguf](https://huggingface.co/RichardErkhov/KRX-Trader_-_qwen2.5-inst-test-v3-gguf/blob/main/qwen2.5-inst-test-v3.Q5_K.gguf) | Q5_K | 5.07GB |
| [qwen2.5-inst-test-v3.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/KRX-Trader_-_qwen2.5-inst-test-v3-gguf/blob/main/qwen2.5-inst-test-v3.Q5_K_M.gguf) | Q5_K_M | 5.07GB |
| [qwen2.5-inst-test-v3.Q5_1.gguf](https://huggingface.co/RichardErkhov/KRX-Trader_-_qwen2.5-inst-test-v3-gguf/blob/main/qwen2.5-inst-test-v3.Q5_1.gguf) | Q5_1 | 5.36GB |
| [qwen2.5-inst-test-v3.Q6_K.gguf](https://huggingface.co/RichardErkhov/KRX-Trader_-_qwen2.5-inst-test-v3-gguf/blob/main/qwen2.5-inst-test-v3.Q6_K.gguf) | Q6_K | 5.82GB |
| [qwen2.5-inst-test-v3.Q8_0.gguf](https://huggingface.co/RichardErkhov/KRX-Trader_-_qwen2.5-inst-test-v3-gguf/blob/main/qwen2.5-inst-test-v3.Q8_0.gguf) | Q8_0 | 7.54GB |
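To try one of these files locally, a llama.cpp invocation along the following lines should work (a sketch; the Q4_K_M file is just one choice from the table above):

```
llama-cli --hf-repo RichardErkhov/KRX-Trader_-_qwen2.5-inst-test-v3-gguf --hf-file qwen2.5-inst-test-v3.Q4_K_M.gguf -p "Hello"
```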
Original model description:
---
base_model: unsloth/qwen2.5-7b-instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- sft
---
# Uploaded model
- **Developed by:** KRX-Trader
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-7b-instruct-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
RRashmini/google-unimax-t5-small-16 | RRashmini | 2025-04-27T08:30:33Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"umt5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-04-26T07:36:09Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Tokymin/SmolVLM2-2.2B-Instruct-video-feedback | Tokymin | 2025-04-27T04:11:40Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"smolvlm",
"image-text-to-text",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-04-27T03:50:54Z | ---
library_name: transformers
tags:
- generated_from_trainer
model-index:
- name: SmolVLM2-2.2B-Instruct-video-feedback
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SmolVLM2-2.2B-Instruct-video-feedback
This model was trained from scratch on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: paged_adamw_8bit with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.51.3
- Pytorch 2.5.1+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
dilarayavuz/md-synbkd-imdb-part-4-bert-base-uncased | dilarayavuz | 2025-04-27T00:17:21Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"autotrain",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-04-27T00:15:13Z |
---
library_name: transformers
tags:
- autotrain
- text-classification
base_model: google-bert/bert-base-uncased
widget:
- text: "I love AutoTrain"
---
# Model Trained Using AutoTrain
- Problem type: Text Classification
## Validation Metrics
- loss: 0.3108336627483368
- f1: 0.8779405237461163
- precision: 0.8585069444444444
- recall: 0.8982742960944596
- auc: 0.945758179185875
- accuracy: 0.8625
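A minimal inference sketch with 🤗 Transformers, assuming the repo id from this repository's name and reusing the widget example above:

```python
from transformers import pipeline

# Hypothetical usage sketch; repo id taken from this repository's name.
classifier = pipeline(
    "text-classification",
    model="dilarayavuz/md-synbkd-imdb-part-4-bert-base-uncased",
)
print(classifier("I love AutoTrain"))  # e.g. [{'label': ..., 'score': ...}]
```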
|
TrungKiencding/Med-Bert-Matryoshka-v1 | TrungKiencding | 2025-04-26T22:03:47Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:1868",
"loss:MatryoshkaLoss",
"loss:MultipleNegativesRankingLoss",
"arxiv:1908.10084",
"arxiv:2205.13147",
"arxiv:1705.00652",
"base_model:DeepPavlov/rubert-base-cased",
"base_model:finetune:DeepPavlov/rubert-base-cased",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2025-04-26T21:02:35Z | ---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:1868
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
base_model: DeepPavlov/rubert-base-cased
widget:
- source_sentence: 'Со стороны мочевыделительной системы: очень редко — гематурия.
При приеме розувастатина может наблюдаться протеинурия. Изменения содержания белка
в моче (от отсутствия или до наличия следовых количеств до уровня ++ и выше) наблюдаются
менее чем у 1% пациентов, принимающих розувастатин в дозе 10 и 20 мг, и примерно
у 3%, принимающих препарат в дозе 40 мг. Незначительное изменение количества белка
в моче, выраженное в изменении от нулевого уровня или наличия следов до уровня
+, наблюдалось при приеме препарата в дозе 20 мг. В большинстве случаев протеинурия
уменьшалась и самостоятельно проходила в процессе лечения. При анализе данных
клинических исследований не выявлена причинная связь между протеинурией и острыми
или прогрессирующими заболеваниями почек.'
sentences:
- Лираглутид снижает чувство голода?
- При терапии розувастатином может происходить изменение содержания белка в моче?
- При данном заболевании обязательно полностью отказаться от любых молочных продуктов?
- source_sentence: Сердце — полый мышечный орган, нагнетающий кровь в артерии большого
и малого кругов кровообращения и принимающий кровь. Располагается в грудной полости
в составе органов среднего средостения; по форме сердце напоминает конус. Продольная
ось сердца направлена косо — справа налево, сверху вниз и сзади наперед; оно на
две трети располагается в левой половине грудной полости. Верхушка сердца обращена
вниз, влево и вперед, проецируется на пятый межреберный промежуток на пересечении
со средней ключичной линией, а более широкое основание сердца вправо, кверху и
кзади.
sentences:
- Пневмосклероз является результатом воспалительных процессов в лёгких?
- Сердце относистся к органам среднего средостения?
- Препарат содержит как минимум 2 компонента в своём составе?
- source_sentence: В просвете желудочно-кишечного тракта Полифепан связывает и выводит
из организма патогенные бактерии и бактериальные токсины, лекарственные препараты,
яды, соли тяжелых металлов, алкоголь, аллергены. Препарат сорбирует также избыток
некоторых продуктов обмена веществ, в том числе билирубина, холестерина, мочевины,
метаболитов, ответственных за развитие эндогенного токсикоза. Полифепан не токсичен,
не всасывается, полностью выводится из кишечника в течение 24 часов.
sentences:
- Полифепан накапливается в печени?
- Пространство между фолликулами заполнено соединительной тканью?
- Никотинамид и никотиновая кислота это одно и то же вещество в составе витамина
PP?
- source_sentence: Дизентерия (бактериальная дизентерия, шигеллез) — инфекционная
болезнь с фекально-оральным механизмом передачи, вызывается бактериями рода шигелл.
Протекает с преимущественным поражением слизистой оболочки дистального отдела
толстой кишки.
sentences:
- Дизентерия поражает слизистую толстой кишки?
- Действие препарата сопровождается увеличением плацентарного кровотока?
- У пациентки есть покраснение кожи, не так ли?
- source_sentence: 'Цитогенетические методы предназначены для изучения структуры хромосомного
набора или отдельных хромосом. Объектом цитогенетических наблюдений могут быть
делящиеся соматические, мейотические и интерфазные клетки. Чаще исследования выполняются
на соматических клетках: наиболее удобный объект - культура лимфоцитов периферической
крови, но также и культура клеток из кусочков кожи (фибробласты), костного мозга,
эмбриональных тканей, хориона, клеток амниотической жидкости.'
sentences:
- Эти методы направлены на выявление биохимического фенотипа организма?
- Чаще всего кариесом болеют дети?
- Употребление настоя шиповника способствует накоплению желчи в организме?
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
model-index:
- name: SentenceTransformer based on DeepPavlov/rubert-base-cased
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 768
type: dim_768
metrics:
- type: cosine_accuracy@1
value: 0.5817307692307693
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.7259615384615384
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.7740384615384616
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.875
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.5817307692307693
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.24198717948717946
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.1548076923076923
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.0875
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.5817307692307693
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.7259615384615384
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.7740384615384616
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.875
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.7200530410927323
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.6719665750915751
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.6754048050639677
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 512
type: dim_512
metrics:
- type: cosine_accuracy@1
value: 0.5721153846153846
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.7355769230769231
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.7596153846153846
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.8317307692307693
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.5721153846153846
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.24519230769230765
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.15192307692307694
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.0831730769230769
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.5721153846153846
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.7355769230769231
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.7596153846153846
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.8317307692307693
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.701821707456295
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.6606074481074481
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.6675407569867158
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 256
type: dim_256
metrics:
- type: cosine_accuracy@1
value: 0.5721153846153846
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.7163461538461539
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.7836538461538461
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.8317307692307693
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.5721153846153846
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.23878205128205127
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.15673076923076923
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.08317307692307695
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.5721153846153846
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.7163461538461539
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.7836538461538461
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.8317307692307693
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.7000658577657531
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.6580567002442002
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.6629028163149585
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 128
type: dim_128
metrics:
- type: cosine_accuracy@1
value: 0.5576923076923077
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.6923076923076923
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.75
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.8028846153846154
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.5576923076923077
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.23076923076923078
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.15000000000000002
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.08028846153846156
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.5576923076923077
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.6923076923076923
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.75
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.8028846153846154
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.6777503243215046
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.6380036630036631
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.6446313276596947
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 64
type: dim_64
metrics:
- type: cosine_accuracy@1
value: 0.5
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.6442307692307693
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.6778846153846154
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.7451923076923077
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.5
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.21474358974358973
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.1355769230769231
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.07451923076923077
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.5
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.6442307692307693
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.6778846153846154
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.7451923076923077
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.6196112065986056
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.5798782814407815
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.5874155939066236
name: Cosine Map@100
---
# SentenceTransformer based on DeepPavlov/rubert-base-cased
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [DeepPavlov/rubert-base-cased](https://huggingface.co/DeepPavlov/rubert-base-cased) on the json dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [DeepPavlov/rubert-base-cased](https://huggingface.co/DeepPavlov/rubert-base-cased) <!-- at revision 4036cab694767a299f2b9e6492909664d9414229 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- json
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("TrungKiencding/Med-Bert-Matryoshka-v1")
# Run inference
sentences = [
'Цитогенетические методы предназначены для изучения структуры хромосомного набора или отдельных хромосом. Объектом цитогенетических наблюдений могут быть делящиеся соматические, мейотические и интерфазные клетки. Чаще исследования выполняются на соматических клетках: наиболее удобный объект - культура лимфоцитов периферической крови, но также и культура клеток из кусочков кожи (фибробласты), костного мозга, эмбриональных тканей, хориона, клеток амниотической жидкости.',
'Эти методы направлены на выявление биохимического фенотипа организма?',
'Употребление настоя шиповника способствует накоплению желчи в организме?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Datasets: `dim_768`, `dim_512`, `dim_256`, `dim_128` and `dim_64`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | dim_768 | dim_512 | dim_256 | dim_128 | dim_64 |
|:--------------------|:-----------|:-----------|:-----------|:-----------|:-----------|
| cosine_accuracy@1 | 0.5817 | 0.5721 | 0.5721 | 0.5577 | 0.5 |
| cosine_accuracy@3 | 0.726 | 0.7356 | 0.7163 | 0.6923 | 0.6442 |
| cosine_accuracy@5 | 0.774 | 0.7596 | 0.7837 | 0.75 | 0.6779 |
| cosine_accuracy@10 | 0.875 | 0.8317 | 0.8317 | 0.8029 | 0.7452 |
| cosine_precision@1 | 0.5817 | 0.5721 | 0.5721 | 0.5577 | 0.5 |
| cosine_precision@3 | 0.242 | 0.2452 | 0.2388 | 0.2308 | 0.2147 |
| cosine_precision@5 | 0.1548 | 0.1519 | 0.1567 | 0.15 | 0.1356 |
| cosine_precision@10 | 0.0875 | 0.0832 | 0.0832 | 0.0803 | 0.0745 |
| cosine_recall@1 | 0.5817 | 0.5721 | 0.5721 | 0.5577 | 0.5 |
| cosine_recall@3 | 0.726 | 0.7356 | 0.7163 | 0.6923 | 0.6442 |
| cosine_recall@5 | 0.774 | 0.7596 | 0.7837 | 0.75 | 0.6779 |
| cosine_recall@10 | 0.875 | 0.8317 | 0.8317 | 0.8029 | 0.7452 |
| **cosine_ndcg@10** | **0.7201** | **0.7018** | **0.7001** | **0.6778** | **0.6196** |
| cosine_mrr@10 | 0.672 | 0.6606 | 0.6581 | 0.638 | 0.5799 |
| cosine_map@100 | 0.6754 | 0.6675 | 0.6629 | 0.6446 | 0.5874 |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### json
* Dataset: json
* Size: 1,868 training samples
* Columns: <code>positive</code> and <code>anchor</code>
* Approximate statistics based on the first 1000 samples:
| | positive | anchor |
|:--------|:------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 12 tokens</li><li>mean: 98.34 tokens</li><li>max: 438 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 14.68 tokens</li><li>max: 43 tokens</li></ul> |
* Samples:
| positive | anchor |
|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------|
| <code>Фебуксостат является производным 2-арилтиазола и представляет собой сильный селективный непуриновый ингибитор ксантиноксидазы (константа ингибирования in vitro составляет менее 1 нМ). Фермент ксантиноксидаза катализирует 2 стадии пуринового обмена: окисление гипоксантина до ксантина, а затем окисление ксантина до мочевой кислоты.</code> | <code>Окисление гипоксантина до ксантина и окисление ксантина до мочевой кислоты это стадии пуринового обмена?</code> |
| <code>Ключевую роль в патогенезе рассеянного склероза играют сенсибилизированные лимфоциты, проникающие в ткань головного и спинного мозга и вызывающие в белом веществе воспалительный процесс с разрушением миелиновой оболочки (демиелинизацией). Клинические проявления связаны с замедлением или блокадой проведения по демиелинизированным нервным волокнам, степень которых возрастает под влиянием продуктов воспаления. В последующем в очаге поражения происходит разрастание глии с формированием склеротических бляшек, а демиелинизированные волокна, лишенные трофической поддержки со стороны миелиновой оболочки, подвергаются вторичной дегенерации.</code> | <code>Демиелинизация нервных волокон является причиной рассеянного склероза?</code> |
| <code>Оптимизация корригирующей и поддерживающей интенсивной терапии и расширение объема хирургических вмешательств привели к увеличению продолжительности пребывания больных в отделениях интенсивной терапии, что также является мощным фактором риска возникновения грибковой инфекции. Особое значение имеет использование антибактериальных препаратов широкого спектра действия, которые снижают степень бактериальной колонизации желудочно-кишечного тракта, тем самым способствуя размножению грибковой микрофлоры.</code> | <code>Появление грибковой инфекции может быть связано с долгим пребыванием в палате интенсивной терапии?</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
768,
512,
256,
128,
64
],
"matryoshka_weights": [
1,
1,
1,
1,
1
],
"n_dims_per_step": -1
}
```
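In code, this configuration corresponds to wrapping `MultipleNegativesRankingLoss` in `MatryoshkaLoss`. A sketch with the parameters listed above:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("TrungKiencding/Med-Bert-Matryoshka-v1")
inner_loss = MultipleNegativesRankingLoss(model)

# Equal weights across all five output dimensions, as in the JSON above
train_loss = MatryoshkaLoss(model, inner_loss,
                            matryoshka_dims=[768, 512, 256, 128, 64])
```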
### Evaluation Dataset
#### json
* Dataset: json
* Size: 208 evaluation samples
* Columns: <code>positive</code> and <code>anchor</code>
* Approximate statistics based on the first 208 samples:
| | positive | anchor |
|:--------|:------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 17 tokens</li><li>mean: 98.76 tokens</li><li>max: 216 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 14.25 tokens</li><li>max: 49 tokens</li></ul> |
* Samples:
| positive | anchor |
|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------|
| <code>Альгоменорея представляет собой циклический патологический процесс: в дни менструации появляются выраженные боли внизу живота, которые могут сопровождаться резкой общей слабостью, тошнотой, рвотой, головной болью, головокружением, отсутствием аппетита, повышением температуры тела до 37–38 °С с ознобом, сухостью во рту или слюнотечением, вздутием живота, ощущением «ватных» ног, обмороками и другими эмоциональными и вегетативными расстройствами. Иногда ведущим симптомом может быть одна из перечисленных жалоб, беспокоящих больную больше, чем боль. Сильные боли истощают нервную систему, способствуют развитию астенического состояния, снижают память и работоспособность.</code> | <code>Описанный процесс может случаться у мужчин?</code> |
| <code>Участвует в реализации положительной и отрицательной обратной связи в гипоталамо-гипофизарно-яичниковой системе, оказывает стабилизирующее действие на гонадотропную функцию гипофиза и гипоталамический центр, не оказывая эстрогенного действия на органы-мишени. Усиливает сокращения матки, повышает плацентарный кровоток, способствует увеличению концентрации в крови бета- липопротеинов, повышению чувствительности тканей к действию инсулина и утилизации глюкозы.</code> | <code>Действие препарата сопровождается увеличением плацентарного кровотока?</code> |
| <code>Термодинамическое равновесие – это устойчивое состояние системы, при котором интенсивные параметры одинаковы во всех частях системы.</code> | <code>Смещенная пропорция интенсивных параметров различных частей системы говорит о её термодинамическом равновесии?</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
768,
512,
256,
128,
64
],
"matryoshka_weights": [
1,
1,
1,
1,
1
],
"n_dims_per_step": -1
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: epoch
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `gradient_accumulation_steps`: 16
- `learning_rate`: 2e-05
- `num_train_epochs`: 30
- `lr_scheduler_type`: cosine
- `warmup_ratio`: 0.1
- `bf16`: True
- `load_best_model_at_end`: True
- `optim`: adamw_torch_fused
- `batch_sampler`: no_duplicates
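These settings map onto `SentenceTransformerTrainingArguments` roughly as follows. The `output_dir` and `save_strategy` values are assumptions not stated in the card; `save_strategy` must match `eval_strategy` for `load_best_model_at_end=True` to work:

```python
from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="med-bert-matryoshka-v1",  # assumed
    eval_strategy="epoch",
    save_strategy="epoch",  # assumed, required by load_best_model_at_end
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=16,
    learning_rate=2e-5,
    num_train_epochs=30,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    bf16=True,
    load_best_model_at_end=True,
    optim="adamw_torch_fused",
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)
```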
### Framework Versions
- Python: 3.11.11
- Sentence Transformers: 3.4.1
- Transformers: 4.51.1
- PyTorch: 2.5.1+cu124
- Accelerate: 1.3.0
- Datasets: 3.5.0
- Tokenizers: 0.21.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
SeprotHub/ProTrek-trained | SeprotHub | 2025-04-26T03:27:08Z | 0 | 0 | peft | [
"peft",
"safetensors",
"esm",
"base_model:ProTrekHub/Protein_Encoder_35M",
"base_model:adapter:ProTrekHub/Protein_Encoder_35M",
"region:us"
] | null | 2025-04-24T15:40:16Z |
---
base_model: ProTrekHub/Protein_Encoder_35M
library_name: peft
---
# Model Card for Model-Demo-35M
## Task type
Protein-level Classification
## Model input type
AA Sequence
## LoRA config
- **r:** 8
- **lora_dropout:** 0.0
- **lora_alpha:** 16
- **target_modules:** ['query', 'intermediate.dense', 'key', 'output.dense', 'value']
- **modules_to_save:** ['classifier']
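A sketch of the equivalent PEFT configuration; `task_type` is omitted because the card does not state it:

```python
from peft import LoraConfig

lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.0,
    target_modules=["query", "intermediate.dense", "key", "output.dense", "value"],
    modules_to_save=["classifier"],  # train the classification head in full
)
```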
## Training config
- **optimizer:**
- **class:** AdamW
- **betas:** (0.9, 0.98)
- **weight_decay:** 0.01
- **learning rate:** 0.0005
- **epoch:** 1
- **batch size:** 8
- **precision:** 16-mixed
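The optimizer settings above correspond to the following sketch; the stand-in module takes the place of the LoRA-adapted encoder:

```python
import torch

# Stand-in for the PEFT-wrapped ESM encoder with its classification head
model = torch.nn.Linear(8, 2)

optimizer = torch.optim.AdamW(
    model.parameters(),
    lr=5e-4,            # learning rate 0.0005
    betas=(0.9, 0.98),
    weight_decay=0.01,
)
```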
|
GuttmanR20976/yiol89ptr | GuttmanR20976 | 2025-04-24T10:06:43Z | 0 | 0 | null | [
"license:cc-by-nc-sa-2.0",
"region:us"
] | null | 2025-04-24T10:06:43Z | ---
license: cc-by-nc-sa-2.0
---
|