modelId<br>string | author<br>string | last_modified<br>timestamp[us, tz=UTC] | downloads<br>int64 | likes<br>int64 | library_name<br>string | tags<br>sequence | pipeline_tag<br>string | createdAt<br>timestamp[us, tz=UTC] | card<br>string |
---|---|---|---|---|---|---|---|---|---|
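Programmatically, a snapshot with this schema can be inspected with pandas (a minimal sketch; the file name `models.parquet` and Parquet storage are assumptions, not part of the snapshot):

```python
import pandas as pd

# Load the snapshot (hypothetical local Parquet export of the table below).
df = pd.read_parquet("models.parquet")

# Timestamps are already timezone-aware (timestamp[us, tz=UTC]); sort by recency.
recent = df.sort_values("last_modified", ascending=False)
print(recent[["modelId", "pipeline_tag", "downloads", "likes"]].head())
```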
RajeevanL/distilled_XLMRoberta_153_v3 | RajeevanL | 2025-05-24T10:03:51Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"question-answering",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | question-answering | 2025-05-24T10:03:34Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
klusertim/MNLP_M2_quantized_model-base-4bit | klusertim | 2025-05-24T09:59:00Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2025-05-24T09:58:40Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
LandCruiser/sn29_cold_2305_3 | LandCruiser | 2025-05-24T09:58:01Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-23T07:27:55Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
BeckerAnas/vivid-silence-196 | BeckerAnas | 2025-05-24T09:57:35Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"convnextv2",
"image-classification",
"generated_from_trainer",
"base_model:facebook/convnextv2-tiny-1k-224",
"base_model:finetune:facebook/convnextv2-tiny-1k-224",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2025-05-24T08:11:52Z | ---
library_name: transformers
license: apache-2.0
base_model: facebook/convnextv2-tiny-1k-224
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vivid-silence-196
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vivid-silence-196
This model is a fine-tuned version of [facebook/convnextv2-tiny-1k-224](https://huggingface.co/facebook/convnextv2-tiny-1k-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8678
- Accuracy: 0.6055
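For a quick sanity check of these numbers, a hedged inference sketch (the image path is a placeholder, and the label names depend on the undocumented fine-tuning dataset):

```python
from transformers import pipeline

# Hypothetical usage; the checkpoint follows the standard image-classification API.
classifier = pipeline("image-classification", model="BeckerAnas/vivid-silence-196")
print(classifier("example.jpg"))  # path or URL to an input image
```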
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 256
- eval_batch_size: 256
- seed: 42
- optimizer: ADAMW_TORCH with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.0737 | 1.0 | 18 | 0.9622 | 0.5234 |
| 0.9317 | 2.0 | 36 | 0.8867 | 0.5879 |
| 0.8886 | 3.0 | 54 | 0.8678 | 0.6055 |
### Framework versions
- Transformers 4.52.3
- Pytorch 2.7.0+cpu
- Datasets 3.6.0
- Tokenizers 0.21.0
|
Gswrtz/MNLP_M2_document_encoder | Gswrtz | 2025-05-24T09:57:15Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"tf",
"rust",
"onnx",
"safetensors",
"openvino",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"en",
"dataset:s2orc",
"dataset:flax-sentence-embeddings/stackexchange_xml",
"dataset:ms_marco",
"dataset:gooaq",
"dataset:yahoo_answers_topics",
"dataset:code_search_net",
"dataset:search_qa",
"dataset:eli5",
"dataset:snli",
"dataset:multi_nli",
"dataset:wikihow",
"dataset:natural_questions",
"dataset:trivia_qa",
"dataset:embedding-data/sentence-compression",
"dataset:embedding-data/flickr30k-captions",
"dataset:embedding-data/altlex",
"dataset:embedding-data/simple-wiki",
"dataset:embedding-data/QQP",
"dataset:embedding-data/SPECTER",
"dataset:embedding-data/PAQ_pairs",
"dataset:embedding-data/WikiAnswers",
"arxiv:1904.06472",
"arxiv:2102.07033",
"arxiv:2104.08727",
"arxiv:1704.05179",
"arxiv:1810.09305",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2025-05-24T09:52:23Z | ---
language: en
license: apache-2.0
library_name: sentence-transformers
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
datasets:
- s2orc
- flax-sentence-embeddings/stackexchange_xml
- ms_marco
- gooaq
- yahoo_answers_topics
- code_search_net
- search_qa
- eli5
- snli
- multi_nli
- wikihow
- natural_questions
- trivia_qa
- embedding-data/sentence-compression
- embedding-data/flickr30k-captions
- embedding-data/altlex
- embedding-data/simple-wiki
- embedding-data/QQP
- embedding-data/SPECTER
- embedding-data/PAQ_pairs
- embedding-data/WikiAnswers
pipeline_tag: sentence-similarity
---
# all-MiniLM-L6-v2
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sentence-transformers/all-MiniLM-L6-v2')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
import torch.nn.functional as F
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/all-MiniLM-L6-v2')
model = AutoModel.from_pretrained('sentence-transformers/all-MiniLM-L6-v2')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
# Normalize embeddings
sentence_embeddings = F.normalize(sentence_embeddings, p=2, dim=1)
print("Sentence embeddings:")
print(sentence_embeddings)
```
------
## Background
The project aims to train sentence embedding models on very large sentence-level datasets using a self-supervised
contrastive learning objective. We used the pretrained [`nreimers/MiniLM-L6-H384-uncased`](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased) model and fine-tuned it on a
dataset of 1B sentence pairs. We use a contrastive learning objective: given a sentence from the pair, the model should predict which sentence, out of a set of randomly sampled other sentences, was actually paired with it in our dataset.
We developed this model during the
[Community week using JAX/Flax for NLP & CV](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104),
organized by Hugging Face. We developed this model as part of the project:
[Train the Best Sentence Embedding Model Ever with 1B Training Pairs](https://discuss.huggingface.co/t/train-the-best-sentence-embedding-model-ever-with-1b-training-pairs/7354). We benefited from efficient hardware infrastructure to run the project: 7 TPU v3-8 devices, as well as guidance from Google's Flax, JAX, and Cloud team members on efficient deep learning frameworks.
## Intended uses
Our model is intended to be used as a sentence and short paragraph encoder. Given an input text, it outputs a vector which captures
the semantic information. The sentence vector may be used for information retrieval, clustering or sentence similarity tasks.
By default, input text longer than 256 word pieces is truncated.
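For instance, a minimal semantic-search sketch (the corpus and query are toy examples invented for illustration):

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

corpus = ["A man is eating food.", "A monkey is playing drums.", "The new movie is so great."]
query = "Who is playing the drums?"

# Encode once, then rank the corpus by cosine similarity to the query.
corpus_embeddings = model.encode(corpus, convert_to_tensor=True)
query_embedding = model.encode(query, convert_to_tensor=True)
scores = util.cos_sim(query_embedding, corpus_embeddings)[0]

best = int(scores.argmax())
print(f"Best match: {corpus[best]!r} (score={scores[best]:.3f})")
```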
## Training procedure
### Pre-training
We use the pretrained [`nreimers/MiniLM-L6-H384-uncased`](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased) model. Please refer to the model card for more detailed information about the pre-training procedure.
### Fine-tuning
We fine-tune the model using a contrastive objective. Formally, we compute the cosine similarity for each possible sentence pair in the batch.
We then apply the cross-entropy loss by comparing with the true pairs.
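In code, that objective amounts to in-batch classification over the similarity matrix. A minimal PyTorch sketch (the similarity scale of 20 is an assumption borrowed from common sentence-transformers defaults, not a documented value for this run):

```python
import torch
import torch.nn.functional as F

def in_batch_contrastive_loss(anchors, positives, scale=20.0):
    # Cosine similarity between every anchor and every candidate in the batch.
    sim = F.cosine_similarity(anchors.unsqueeze(1), positives.unsqueeze(0), dim=-1) * scale
    # The true pair for anchor i sits at column i; all other columns act as negatives.
    labels = torch.arange(sim.size(0), device=sim.device)
    return F.cross_entropy(sim, labels)

# Example: a batch of 4 pairs of 384-dimensional embeddings.
print(in_batch_contrastive_loss(torch.randn(4, 384), torch.randn(4, 384)))
```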
#### Hyperparameters
We trained our model on a TPU v3-8. We trained the model for 100k steps using a batch size of 1024 (128 per TPU core).
We used a learning-rate warm-up of 500 steps. The sequence length was limited to 128 tokens. We used the AdamW optimizer with
a 2e-5 learning rate. The full training script is available in this repository: `train_script.py`.
#### Training data
We use a concatenation of multiple datasets to fine-tune our model. The total number of training pairs is above 1 billion.
We sampled each dataset with a weighted probability; the configuration is detailed in the `data_config.json` file.
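As a sketch of what that weighted sampling amounts to (the weights below are invented for illustration; the real values live in `data_config.json`):

```python
import random

# Hypothetical per-dataset sampling weights standing in for data_config.json.
weights = {"reddit": 0.62, "s2orc": 0.18, "wikianswers": 0.07, "paq": 0.05, "other": 0.08}
names, probs = zip(*weights.items())

# Draw the source dataset for each example in a batch of 8.
print(random.choices(names, weights=probs, k=8))
```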
| Dataset | Paper | Number of training tuples |
|--------------------------------------------------------|:----------------------------------------:|:--------------------------:|
| [Reddit comments (2015-2018)](https://github.com/PolyAI-LDN/conversational-datasets/tree/master/reddit) | [paper](https://arxiv.org/abs/1904.06472) | 726,484,430 |
| [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Abstracts) | [paper](https://aclanthology.org/2020.acl-main.447/) | 116,288,806 |
| [WikiAnswers](https://github.com/afader/oqa#wikianswers-corpus) Duplicate question pairs | [paper](https://doi.org/10.1145/2623330.2623677) | 77,427,422 |
| [PAQ](https://github.com/facebookresearch/PAQ) (Question, Answer) pairs | [paper](https://arxiv.org/abs/2102.07033) | 64,371,441 |
| [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Titles) | [paper](https://aclanthology.org/2020.acl-main.447/) | 52,603,982 |
| [S2ORC](https://github.com/allenai/s2orc) (Title, Abstract) | [paper](https://aclanthology.org/2020.acl-main.447/) | 41,769,185 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Body) pairs | - | 25,316,456 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title+Body, Answer) pairs | - | 21,396,559 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Answer) pairs | - | 21,396,559 |
| [MS MARCO](https://microsoft.github.io/msmarco/) triplets | [paper](https://doi.org/10.1145/3404835.3462804) | 9,144,553 |
| [GOOAQ: Open Question Answering with Diverse Answer Types](https://github.com/allenai/gooaq) | [paper](https://arxiv.org/pdf/2104.08727.pdf) | 3,012,496 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 1,198,260 |
| [Code Search](https://huggingface.co/datasets/code_search_net) | - | 1,151,414 |
| [COCO](https://cocodataset.org/#home) Image captions | [paper](https://link.springer.com/chapter/10.1007%2F978-3-319-10602-1_48) | 828,395|
| [SPECTER](https://github.com/allenai/specter) citation triplets | [paper](https://doi.org/10.18653/v1/2020.acl-main.207) | 684,100 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Question, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 681,164 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Question) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 659,896 |
| [SearchQA](https://huggingface.co/datasets/search_qa) | [paper](https://arxiv.org/abs/1704.05179) | 582,261 |
| [Eli5](https://huggingface.co/datasets/eli5) | [paper](https://doi.org/10.18653/v1/p19-1346) | 325,475 |
| [Flickr 30k](https://shannon.cs.illinois.edu/DenotationGraph/) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/229/33) | 317,695 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles) | | 304,525 |
| AllNLI ([SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/)) | [paper SNLI](https://doi.org/10.18653/v1/d15-1075), [paper MultiNLI](https://doi.org/10.18653/v1/n18-1101) | 277,230 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (bodies) | | 250,519 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles+bodies) | | 250,460 |
| [Sentence Compression](https://github.com/google-research-datasets/sentence-compression) | [paper](https://www.aclweb.org/anthology/D13-1155/) | 180,000 |
| [Wikihow](https://github.com/pvl/wikihow_pairs_dataset) | [paper](https://arxiv.org/abs/1810.09305) | 128,542 |
| [Altlex](https://github.com/chridey/altlex/) | [paper](https://aclanthology.org/P16-1135.pdf) | 112,696 |
| [Quora Question Triplets](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) | - | 103,663 |
| [Simple Wikipedia](https://cs.pomona.edu/~dkauchak/simplification/) | [paper](https://www.aclweb.org/anthology/P11-2117/) | 102,225 |
| [Natural Questions (NQ)](https://ai.google.com/research/NaturalQuestions) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/1455) | 100,231 |
| [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) | [paper](https://aclanthology.org/P18-2124.pdf) | 87,599 |
| [TriviaQA](https://huggingface.co/datasets/trivia_qa) | - | 73,346 |
| **Total** | | **1,170,060,424** | |
18-VIDEOS-Riley-Reid-Viral-Link/wATCH.Riley.Reid.viral.video.original.Link.Official | 18-VIDEOS-Riley-Reid-Viral-Link | 2025-05-24T09:57:14Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-24T09:56:58Z | <animated-image data-catalyst=""><a href="https://tinyurl.com/5ye5v3bc?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
Xvideo-sex-KatrinaLimViralKiffy/HoT-VIDEOs-Katrina-Lim-Viral-Kiffy-Viral-Video-Telegram-Link | Xvideo-sex-KatrinaLimViralKiffy | 2025-05-24T09:57:07Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-24T09:50:43Z | # ~!๐ฅ๏ธ$@~(.VIRAL^%CLIP.)โข! Katrina Lim viral Kiffy viral video Original Clip Oficial on Instagram - | xHamster, XNXX@COM
<p><a rel="nofollow" href="https://wixtube.site/?Apache-2.0">🔴 ❤️► Click Here to 👉👉 (Full video Link)</a></p>
<a rel="nofollow" href="https://wixtube.site/?Apache-2.0"><img src="https://us1.discourse-cdn.com/flex020/uploads/wandb/original/2X/0/0f5f73e0b1cd8c34c3d3fa6dcc1ce6713d5e4cbe.png" alt="fsd"></a> |
MinaMila/llama_instbase_3b_LoRa_Adult_cfda_ep7_22 | MinaMila | 2025-05-24T09:56:51Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-24T09:56:44Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ArtusDev/PocketDoc_Dans-PersonalityEngine-V1.3.0-24b_EXL3_3.25bpw_H6 | ArtusDev | 2025-05-24T09:55:24Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"general-purpose",
"roleplay",
"storywriting",
"chemistry",
"biology",
"code",
"climate",
"axolotl",
"text-generation-inference",
"finetune",
"legal",
"medical",
"finance",
"exl3",
"conversational",
"en",
"ar",
"de",
"fr",
"es",
"hi",
"pt",
"ja",
"ko",
"dataset:PocketDoc/Dans-Prosemaxx-RP",
"dataset:PocketDoc/Dans-Personamaxx-Logs-2",
"dataset:PocketDoc/Dans-Personamaxx-VN",
"dataset:PocketDoc/Dans-Kinomaxx-VanillaBackrooms",
"dataset:PocketDoc/Dans-Prosemaxx-Gutenberg",
"dataset:PocketDoc/Dans-Prosemaxx-Cowriter-3-XL",
"dataset:PocketDoc/Dans-Prosemaxx-Adventure",
"dataset:PocketDoc/Dans-Failuremaxx-Adventure-3",
"dataset:PocketDoc/Dans-Prosemaxx-InstructWriter-ZeroShot-2",
"dataset:PocketDoc/Dans-Prosemaxx-InstructWriter-ZeroShot-3",
"dataset:PocketDoc/Dans-Prosemaxx-InstructWriter-Continue-2",
"dataset:PocketDoc/Dans-Prosemaxx-Instructwriter-Long",
"dataset:PocketDoc/Dans-Prosemaxx-RepRemover-1",
"dataset:PocketDoc/Dans-MemoryCore-CoreCurriculum-Small",
"dataset:AquaV/US-Army-Survival-Sharegpt",
"dataset:AquaV/Multi-Environment-Operations-Sharegpt",
"dataset:AquaV/Resistance-Sharegpt",
"dataset:AquaV/Interrogation-Sharegpt",
"dataset:AquaV/Chemical-Biological-Safety-Applications-Sharegpt",
"dataset:AquaV/Energetic-Materials-Sharegpt",
"dataset:PocketDoc/Dans-Mathmaxx",
"dataset:PJMixers/Math-Multiturn-1K-ShareGPT",
"dataset:PocketDoc/Dans-Taskmaxx",
"dataset:PocketDoc/Dans-Taskmaxx-DataPrepper",
"dataset:PocketDoc/Dans-Taskmaxx-ConcurrentQA-Reworked",
"dataset:PocketDoc/Dans-Taskmaxx-TableGPT",
"dataset:PocketDoc/Dans-Taskmaxx-SciRIFF",
"dataset:PocketDoc/Dans-Taskmaxx-Edit",
"dataset:PocketDoc/Dans-Toolmaxx-Agent",
"dataset:PocketDoc/Dans-Toolmaxx-ShellCommands",
"dataset:PocketDoc/Dans-Toolmaxx-Functions-Toolbench",
"dataset:PocketDoc/Dans-Toolmaxx-Functions-ToolACE",
"dataset:PocketDoc/Dans-Toolmaxx-Functions-apigen-subset",
"dataset:PocketDoc/Dans-Assistantmaxx-OpenAssistant2",
"dataset:PocketDoc/Dans-Assistantmaxx-Opus-Merge-2",
"dataset:PocketDoc/Dans-Assistantmaxx-sonnetorca-subset",
"dataset:PocketDoc/Dans-Assistantmaxx-sonnetorca-subset-2",
"dataset:PocketDoc/Dans-Assistantmaxx-Synthia",
"dataset:PocketDoc/Dans-Assistantmaxx-ASL",
"dataset:PocketDoc/Dans-Assistantmaxx-PersonaLLM-Opus",
"dataset:PocketDoc/Dans-Assistantmaxx-LongAlign",
"dataset:PocketDoc/Dans-Assistantmaxx-OpenLeecher-Instruct",
"dataset:PocketDoc/Dans-Assistantmaxx-Tulu3-IF",
"dataset:PocketDoc/Dans-Systemmaxx",
"dataset:PocketDoc/Dans-Logicmaxx-SAT-AP",
"dataset:PJMixers/grimulkan_theory-of-mind-ShareGPT",
"dataset:PJMixers/grimulkan_physical-reasoning-ShareGPT",
"dataset:PocketDoc/Dans-Reasoningmaxx-NaturalReasoning",
"dataset:PocketDoc/Dans-Reasoningmaxx-WebInstruct",
"dataset:PocketDoc/Dans-Reasoningmaxx-GeneralReasoning",
"dataset:PocketDoc/Dans-Assistantmaxx-ClosedInstruct",
"base_model:PocketDoc/Dans-PersonalityEngine-V1.3.0-24b",
"base_model:quantized:PocketDoc/Dans-PersonalityEngine-V1.3.0-24b",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-24T09:42:37Z | ---
thumbnail: >-
https://huggingface.co/PocketDoc/Dans-PersonalityEngine-V1.3.0-24b/resolve/main/resources/pe.png
license: apache-2.0
tags:
- general-purpose
- roleplay
- storywriting
- chemistry
- biology
- code
- climate
- axolotl
- text-generation-inference
- finetune
- legal
- medical
- finance
- exl3
datasets:
- PocketDoc/Dans-Prosemaxx-RP
- PocketDoc/Dans-Personamaxx-Logs-2
- PocketDoc/Dans-Personamaxx-VN
- PocketDoc/Dans-Kinomaxx-VanillaBackrooms
- PocketDoc/Dans-Prosemaxx-Gutenberg
- PocketDoc/Dans-Prosemaxx-Cowriter-3-XL
- PocketDoc/Dans-Prosemaxx-Adventure
- PocketDoc/Dans-Failuremaxx-Adventure-3
- PocketDoc/Dans-Prosemaxx-InstructWriter-ZeroShot-2
- PocketDoc/Dans-Prosemaxx-InstructWriter-ZeroShot-3
- PocketDoc/Dans-Prosemaxx-InstructWriter-Continue-2
- PocketDoc/Dans-Prosemaxx-Instructwriter-Long
- PocketDoc/Dans-Prosemaxx-RepRemover-1
- PocketDoc/Dans-MemoryCore-CoreCurriculum-Small
- AquaV/US-Army-Survival-Sharegpt
- AquaV/Multi-Environment-Operations-Sharegpt
- AquaV/Resistance-Sharegpt
- AquaV/Interrogation-Sharegpt
- AquaV/Chemical-Biological-Safety-Applications-Sharegpt
- AquaV/Energetic-Materials-Sharegpt
- PocketDoc/Dans-Mathmaxx
- PJMixers/Math-Multiturn-1K-ShareGPT
- PocketDoc/Dans-Taskmaxx
- PocketDoc/Dans-Taskmaxx-DataPrepper
- PocketDoc/Dans-Taskmaxx-ConcurrentQA-Reworked
- PocketDoc/Dans-Taskmaxx-TableGPT
- PocketDoc/Dans-Taskmaxx-SciRIFF
- PocketDoc/Dans-Taskmaxx-Edit
- PocketDoc/Dans-Toolmaxx-Agent
- PocketDoc/Dans-Toolmaxx-ShellCommands
- PocketDoc/Dans-Toolmaxx-Functions-Toolbench
- PocketDoc/Dans-Toolmaxx-Functions-ToolACE
- PocketDoc/Dans-Toolmaxx-Functions-apigen-subset
- PocketDoc/Dans-Assistantmaxx-OpenAssistant2
- PocketDoc/Dans-Assistantmaxx-Opus-Merge-2
- PocketDoc/Dans-Assistantmaxx-sonnetorca-subset
- PocketDoc/Dans-Assistantmaxx-sonnetorca-subset-2
- PocketDoc/Dans-Assistantmaxx-Synthia
- PocketDoc/Dans-Assistantmaxx-ASL
- PocketDoc/Dans-Assistantmaxx-PersonaLLM-Opus
- PocketDoc/Dans-Assistantmaxx-LongAlign
- PocketDoc/Dans-Assistantmaxx-OpenLeecher-Instruct
- PocketDoc/Dans-Assistantmaxx-Tulu3-IF
- PocketDoc/Dans-Systemmaxx
- PocketDoc/Dans-Logicmaxx-SAT-AP
- PJMixers/grimulkan_theory-of-mind-ShareGPT
- PJMixers/grimulkan_physical-reasoning-ShareGPT
- PocketDoc/Dans-Reasoningmaxx-NaturalReasoning
- PocketDoc/Dans-Reasoningmaxx-WebInstruct
- PocketDoc/Dans-Reasoningmaxx-GeneralReasoning
- PocketDoc/Dans-Assistantmaxx-ClosedInstruct
language:
- en
- ar
- de
- fr
- es
- hi
- pt
- ja
- ko
base_model:
- PocketDoc/Dans-PersonalityEngine-V1.3.0-24b
base_model_relation: quantized
quantized_by: ArtusDev
pipeline_tag: text-generation
library_name: transformers
---
<!doctype html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>Dans-PersonalityEngine-V1.3.0-24b</title>
</head>
<div class="crt-container">
<div class="crt-case">
<div class="crt-inner-case">
<div class="crt-bezel">
<div class="terminal-screen">
<div style="text-align: center">
<h2>Dans-PersonalityEngine-V1.3.0-24b</h2>
<pre class="code-block" style="display: inline-block; text-align: left; font-size: clamp(2px, 0.8vw, 14px); line-height: 1.2; max-width: 100%; overflow: hidden; white-space: pre;">
[ASCII-art banner omitted: the Braille-pattern artwork was mangled by a character-encoding error in this snapshot]
</pre>
</div>
<p>
Dans-PersonalityEngine is a versatile model series
fine-tuned on 50+ specialized datasets, designed to
excel at both creative tasks (like roleplay and
co-writing) and technical challenges (such as code
generation, tool use, and complex reasoning).
</p>
<p>
V1.3.0 introduces multilingual capabilities with
support for 10 languages and enhanced domain
expertise across multiple fields. The primary
language is still English and that is where peak
performance can be expected.
</p>
<h3>Multilingual Support</h3>
<pre class="code-block">
Arabic Chinese English French German
Hindi Japanese Korean Portuguese Spanish</pre>
<h3>Key Details</h3>
<pre class="code-block">
BASE MODEL: mistralai/Mistral-Small-3.1-24B-Base-2503
LICENSE: apache-2.0
LANGUAGE: Multilingual with 10 supported languages
CONTEXT LENGTH: 32768 tokens, 131072 with degraded recall</pre>
<h3>Recommended Settings</h3>
<pre class="code-block">
TEMPERATURE: 1.0
TOP_P: 0.9</pre>
<h3>Prompting Format</h3>
<p>
The model uses the following format, which I'll refer to as
"DanChat-2":
</p>
<pre class="code-block">
<|system|>system prompt<|endoftext|><|user|>Hi there!<|endoftext|><|assistant|>Hey, how can I help?<|endoftext|></pre>
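<p>
As a quick illustration, a small Python helper that assembles a DanChat-2 prompt (a sketch based only on the template above; the special tokens are copied verbatim from it):
</p>
<pre class="code-block">
def danchat2_prompt(system, turns):
    # turns holds (role, text) pairs; role is "user" or "assistant".
    parts = [f"<|system|>{system}<|endoftext|>"]
    for role, text in turns:
        parts.append(f"<|{role}|>{text}<|endoftext|>")
    parts.append("<|assistant|>")  # leave the assistant turn open for generation
    return "".join(parts)

print(danchat2_prompt("system prompt", [("user", "Hi there!")]))</pre>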
<h3>Why not ChatML?</h3>
<p>
While ChatML is a standard format for LLMs, it has
limitations. DanChat-2 uses special tokens for each role,
which reduces biases and helps the model adapt to different tasks more readily.
</p>
<h3>SillyTavern Template</h3>
<p>
<a
href="https://huggingface.co/PocketDoc/Dans-PersonalityEngine-V1.3.0-24b/resolve/main/resources/DanChat-2.json?download=true"
download
target="_blank"
rel="noopener noreferrer"
>
Download Master JSON
</a>
</p>
<h3>Inference Provider</h3>
<p>
This model and others are available from ⚡Mancer AI for
those interested in high-quality inference without
owning or renting expensive hardware.
</p>
<p class="mancer-button-container">
<a
href="https://mancer.tech/"
target="_blank"
rel="noopener noreferrer"
class="mancer-button"
>
<span class="mancer-text">mancer</span>
</a>
</p>
<h3>Training Process</h3>
<p>
The model was trained using Axolotl on 8x H100 GPUs
for 50 hours. The resources to train this model were provided by Prime Intellect and Kalomaze.
</p>
<h3>Support Development</h3>
<p>
Development is limited by funding and resources. To
help support:
</p>
<p>- Contact on HF</p>
<p>- Email: [email protected]</p>
<p class="coffee-container">
<a
href="https://www.buymeacoffee.com/visually"
target="_blank"
rel="noopener noreferrer"
>
<img
src="https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png"
alt="Buy Me A Coffee"
height="45"
width="162"
/>
</a>
</p>
</div>
</div>
</div>
</div>
</div>
<style>
@import url("https://fonts.googleapis.com/css2?family=Consolas&display=swap");
.crt-container {
padding: 10px;
max-width: 1000px;
margin: 0 auto;
width: 95%;
}
.crt-case {
background: #e8d7c3;
border-radius: 10px;
padding: 15px;
box-shadow:
inset -2px -2px 5px rgba(0, 0, 0, 0.3),
2px 2px 5px rgba(0, 0, 0, 0.2);
}
.crt-inner-case {
background: #e8d7c3;
border-radius: 8px;
padding: 3px;
box-shadow:
inset -1px -1px 4px rgba(0, 0, 0, 0.3),
1px 1px 4px rgba(0, 0, 0, 0.2);
}
.crt-bezel {
background: linear-gradient(145deg, #1a1a1a, #2a2a2a);
padding: 15px;
border-radius: 5px;
border: 3px solid #0a0a0a;
position: relative;
box-shadow:
inset 0 0 20px rgba(0, 0, 0, 0.5),
inset 0 0 4px rgba(0, 0, 0, 0.4),
inset 2px 2px 4px rgba(255, 255, 255, 0.05),
inset -2px -2px 4px rgba(0, 0, 0, 0.8),
0 0 2px rgba(0, 0, 0, 0.6),
-1px -1px 4px rgba(255, 255, 255, 0.1),
1px 1px 4px rgba(0, 0, 0, 0.3);
}
.crt-bezel::before {
content: "";
position: absolute;
top: 0;
left: 0;
right: 0;
bottom: 0;
background: linear-gradient(
45deg,
rgba(255, 255, 255, 0.03) 0%,
rgba(255, 255, 255, 0) 40%,
rgba(0, 0, 0, 0.1) 60%,
rgba(0, 0, 0, 0.2) 100%
);
border-radius: 3px;
pointer-events: none;
}
.terminal-screen {
background: #111112;
padding: 20px;
border-radius: 15px;
position: relative;
overflow: hidden;
font-family: "Consolas", monospace;
font-size: clamp(12px, 1.5vw, 16px);
color: #e49b3e;
line-height: 1.4;
text-shadow: 0 0 2px #e49b3e;
/* Removed animation: flicker 0.15s infinite; */
filter: brightness(1.1) contrast(1.1);
box-shadow:
inset 0 0 30px rgba(0, 0, 0, 0.9),
inset 0 0 8px rgba(0, 0, 0, 0.8),
0 0 5px rgba(0, 0, 0, 0.6);
max-width: 80ch;
margin: 0 auto;
}
.terminal-screen h2,
.terminal-screen h3 {
font-size: clamp(16px, 2vw, 20px);
margin-bottom: 1em;
color: #e49b3e;
}
.terminal-screen pre.code-block {
font-size: clamp(10px, 1.3vw, 14px);
white-space: pre; /* Changed from pre-wrap to pre */
margin: 1em 0;
background-color: #1a1a1a;
padding: 1em;
border-radius: 4px;
color: #e49b3e;
overflow-x: auto; /* Added to enable horizontal scrolling */
}
.terminal-screen::before {
content: "";
position: absolute;
top: 0;
left: 0;
right: 0;
bottom: 0;
background:
linear-gradient(
rgba(18, 16, 16, 0) 50%,
rgba(0, 0, 0, 0.25) 50%
),
url("data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAADIAAAAyBAMAAADsEZWCAAAAGFBMVEUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA4o8JoAAAAB3RSTlMAGwQIEQMYADcPzwAAACJJREFUKM9jYBgFo2AU0Beg+A8YMCLxGYZCbNQEo4BaAAD5TQiR5wU9vAAAAABJRU5ErkJggg==");
background-size: 100% 2.5px;
/* Removed animation: scan 1s linear infinite; */
pointer-events: none;
z-index: 2;
}
.terminal-screen::after {
content: "";
position: absolute;
top: 0;
left: 0;
right: 0;
bottom: 0;
background: radial-gradient(
circle at center,
rgba(17, 17, 18, 0) 0%,
rgba(17, 17, 18, 0.2) 50%,
rgba(17, 17, 18, 0.15) 100%
);
border-radius: 20px;
/* Removed animation: vignette-pulse 3s infinite; */
pointer-events: none;
z-index: 1;
}
.terminal-screen details {
margin: 1em 0;
padding: 0.5em;
border: 1px solid #e49b3e;
border-radius: 4px;
}
.terminal-screen summary {
cursor: pointer;
font-weight: bold;
margin: -0.5em;
padding: 0.5em;
border-bottom: 1px solid #e49b3e;
color: #e49b3e;
}
.terminal-screen details[open] summary {
margin-bottom: 0.5em;
}
.badge-container,
.coffee-container {
text-align: center;
margin: 1em 0;
}
.badge-container img,
.coffee-container img {
max-width: 100%;
height: auto;
}
.terminal-screen a {
color: #e49b3e;
text-decoration: underline;
transition: opacity 0.2s;
}
.terminal-screen a:hover {
opacity: 0.8;
}
.terminal-screen strong,
.terminal-screen em {
color: #f0f0f0; /* off-white color for user/system messages */
}
.terminal-screen p {
color: #f0f0f0; /* off-white color for assistant responses */
}
.terminal-screen p,
.terminal-screen li {
color: #e49b3e;
}
.terminal-screen code,
.terminal-screen kbd,
.terminal-screen samp {
color: #e49b3e;
font-family: "Consolas", monospace;
text-shadow: 0 0 2px #e49b3e;
background-color: #1a1a1a;
padding: 0.2em 0.4em;
border-radius: 4px;
}
.terminal-screen pre.code-block,
.terminal-screen pre {
font-size: clamp(10px, 1.3vw, 14px);
white-space: pre; /* Changed from pre-wrap to pre */
margin: 1em 0;
background-color: #1a1a1a;
padding: 1em;
border-radius: 4px;
color: #e49b3e;
overflow-x: auto; /* Added to enable horizontal scrolling */
}
.mancer-button-container {
text-align: left;
margin: 1em 0;
}
.mancer-button {
display: inline-flex;
align-items: center;
gap: 8px;
background: #1a1a1a;
color: #e49b3e;
padding: 15px 15px;
border: 2px solid #e49b3e;
border-radius: 5px;
text-decoration: none !important;
box-shadow: 0 0 10px rgba(228, 155, 62, 0.3);
transition: all 0.3s ease;
position: relative;
}
.mancer-text {
font-family: "Consolas", monospace;
font-weight: bold;
font-size: 20px;
text-shadow: 0 0 2px #e49b3e;
line-height: 1;
display: inline-block;
margin-left: -4px;
margin-top: -2px;
}
.mancer-button::before {
content: "⚡";
display: inline-flex;
align-items: center;
justify-content: center;
font-size: 20px;
line-height: 1;
}
.mancer-button:hover {
background: #2a2a2a;
box-shadow: 0 0 15px rgba(228, 155, 62, 0.5);
text-shadow: 0 0 4px #e49b3e;
text-decoration: none !important;
}
</style>
</html> |
dimasik87/95aae631-e0b6-4309-8a1f-3ff7bd133af4 | dimasik87 | 2025-05-24T09:54:49Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"axolotl",
"dpo",
"trl",
"conversational",
"arxiv:2305.18290",
"base_model:unsloth/Qwen2.5-Coder-1.5B-Instruct",
"base_model:quantized:unsloth/Qwen2.5-Coder-1.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2025-05-24T09:48:10Z | ---
base_model: unsloth/Qwen2.5-Coder-1.5B-Instruct
library_name: transformers
model_name: 95aae631-e0b6-4309-8a1f-3ff7bd133af4
tags:
- generated_from_trainer
- axolotl
- dpo
- trl
licence: license
---
# Model Card for 95aae631-e0b6-4309-8a1f-3ff7bd133af4
This model is a fine-tuned version of [unsloth/Qwen2.5-Coder-1.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-Coder-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="dimasik87/95aae631-e0b6-4309-8a1f-3ff7bd133af4", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/dedok-yo/s56-7/runs/icnqat5u)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
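As a rough illustration of that setup, a minimal TRL sketch (the preference rows, hyperparameters, and `processing_class` wiring are illustrative assumptions, not the configuration used for this run):

```python
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_id = "unsloth/Qwen2.5-Coder-1.5B-Instruct"
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Toy preference data: each row pairs a prompt with a preferred and a rejected reply.
train_dataset = Dataset.from_dict({
    "prompt": ["Write a function that reverses a string."],
    "chosen": ["def reverse(s):\n    return s[::-1]"],
    "rejected": ["def reverse(s):\n    return s"],
})

args = DPOConfig(output_dir="dpo-out", beta=0.1, per_device_train_batch_size=1)
trainer = DPOTrainer(model=model, args=args, train_dataset=train_dataset, processing_class=tokenizer)
trainer.train()
```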
### Framework versions
- TRL: 0.12.0.dev0
- Transformers: 4.46.0
- Pytorch: 2.5.0+cu124
- Datasets: 3.0.1
- Tokenizers: 0.20.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Papaflessas/plutus-financial-sentiment | Papaflessas | 2025-05-24T09:54:05Z | 0 | 1 | null | [
"safetensors",
"roberta",
"en",
"base_model:ProsusAI/finbert",
"base_model:finetune:ProsusAI/finbert",
"license:mit",
"region:us"
] | null | 2025-05-20T06:25:51Z | ---
license: mit
language:
- en
metrics:
- accuracy
base_model:
- ProsusAI/finbert
--- |
tuandunghcmut/Qwen3-FT-Customer-Dataset | tuandunghcmut | 2025-05-24T09:52:03Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"smol-course",
"module_1",
"trl",
"sft",
"base_model:Qwen/Qwen3-0.6B",
"base_model:finetune:Qwen/Qwen3-0.6B",
"endpoints_compatible",
"region:us"
] | null | 2025-05-15T09:42:25Z | ---
base_model: Qwen/Qwen3-0.6B
library_name: transformers
model_name: Qwen3-FT-MyDataset
tags:
- generated_from_trainer
- smol-course
- module_1
- trl
- sft
licence: license
---
# Model Card for Qwen3-FT-MyDataset
This model is a fine-tuned version of [Qwen/Qwen3-0.6B](https://huggingface.co/Qwen/Qwen3-0.6B).
It has been trained using [TRL](https://github.com/huggingface/trl).
<!-- ## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="tuandunghcmut/Qwen3-FT-MyDataset", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
``` -->
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/Private_1/huggingface/runs/9qxdvck3)
This model was trained with SFT.
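For reference, the sketch below shows a minimal SFT run with TRL; the dataset is illustrative, since the actual training set for this model is not documented here.
```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Illustrative conversational dataset; swap in your own.
dataset = load_dataset("trl-lib/Capybara", split="train")

trainer = SFTTrainer(
    model="Qwen/Qwen3-0.6B",  # SFTTrainer also accepts a model name string
    args=SFTConfig(output_dir="Qwen3-FT-MyDataset"),
    train_dataset=dataset,
)
trainer.train()
```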
### Framework versions
- TRL: 0.17.0
- Transformers: 4.51.3
- Pytorch: 2.7.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
RichardErkhov/barc0_-_google_cloude_test_20k_transduction-gpt4omini_lr1e-5_epoch2_1-gguf | RichardErkhov | 2025-05-24T09:50:18Z | 0 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-24T07:18:10Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
google_cloude_test_20k_transduction-gpt4omini_lr1e-5_epoch2_1 - GGUF
- Model creator: https://huggingface.co/barc0/
- Original model: https://huggingface.co/barc0/google_cloude_test_20k_transduction-gpt4omini_lr1e-5_epoch2_1/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [google_cloude_test_20k_transduction-gpt4omini_lr1e-5_epoch2_1.Q2_K.gguf](https://huggingface.co/RichardErkhov/barc0_-_google_cloude_test_20k_transduction-gpt4omini_lr1e-5_epoch2_1-gguf/blob/main/google_cloude_test_20k_transduction-gpt4omini_lr1e-5_epoch2_1.Q2_K.gguf) | Q2_K | 2.96GB |
| [google_cloude_test_20k_transduction-gpt4omini_lr1e-5_epoch2_1.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/barc0_-_google_cloude_test_20k_transduction-gpt4omini_lr1e-5_epoch2_1-gguf/blob/main/google_cloude_test_20k_transduction-gpt4omini_lr1e-5_epoch2_1.IQ3_XS.gguf) | IQ3_XS | 3.28GB |
| [google_cloude_test_20k_transduction-gpt4omini_lr1e-5_epoch2_1.IQ3_S.gguf](https://huggingface.co/RichardErkhov/barc0_-_google_cloude_test_20k_transduction-gpt4omini_lr1e-5_epoch2_1-gguf/blob/main/google_cloude_test_20k_transduction-gpt4omini_lr1e-5_epoch2_1.IQ3_S.gguf) | IQ3_S | 3.43GB |
| [google_cloude_test_20k_transduction-gpt4omini_lr1e-5_epoch2_1.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/barc0_-_google_cloude_test_20k_transduction-gpt4omini_lr1e-5_epoch2_1-gguf/blob/main/google_cloude_test_20k_transduction-gpt4omini_lr1e-5_epoch2_1.Q3_K_S.gguf) | Q3_K_S | 3.41GB |
| [google_cloude_test_20k_transduction-gpt4omini_lr1e-5_epoch2_1.IQ3_M.gguf](https://huggingface.co/RichardErkhov/barc0_-_google_cloude_test_20k_transduction-gpt4omini_lr1e-5_epoch2_1-gguf/blob/main/google_cloude_test_20k_transduction-gpt4omini_lr1e-5_epoch2_1.IQ3_M.gguf) | IQ3_M | 3.52GB |
| [google_cloude_test_20k_transduction-gpt4omini_lr1e-5_epoch2_1.Q3_K.gguf](https://huggingface.co/RichardErkhov/barc0_-_google_cloude_test_20k_transduction-gpt4omini_lr1e-5_epoch2_1-gguf/blob/main/google_cloude_test_20k_transduction-gpt4omini_lr1e-5_epoch2_1.Q3_K.gguf) | Q3_K | 3.74GB |
| [google_cloude_test_20k_transduction-gpt4omini_lr1e-5_epoch2_1.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/barc0_-_google_cloude_test_20k_transduction-gpt4omini_lr1e-5_epoch2_1-gguf/blob/main/google_cloude_test_20k_transduction-gpt4omini_lr1e-5_epoch2_1.Q3_K_M.gguf) | Q3_K_M | 3.74GB |
| [google_cloude_test_20k_transduction-gpt4omini_lr1e-5_epoch2_1.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/barc0_-_google_cloude_test_20k_transduction-gpt4omini_lr1e-5_epoch2_1-gguf/blob/main/google_cloude_test_20k_transduction-gpt4omini_lr1e-5_epoch2_1.Q3_K_L.gguf) | Q3_K_L | 4.03GB |
| [google_cloude_test_20k_transduction-gpt4omini_lr1e-5_epoch2_1.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/barc0_-_google_cloude_test_20k_transduction-gpt4omini_lr1e-5_epoch2_1-gguf/blob/main/google_cloude_test_20k_transduction-gpt4omini_lr1e-5_epoch2_1.IQ4_XS.gguf) | IQ4_XS | 4.18GB |
| [google_cloude_test_20k_transduction-gpt4omini_lr1e-5_epoch2_1.Q4_0.gguf](https://huggingface.co/RichardErkhov/barc0_-_google_cloude_test_20k_transduction-gpt4omini_lr1e-5_epoch2_1-gguf/blob/main/google_cloude_test_20k_transduction-gpt4omini_lr1e-5_epoch2_1.Q4_0.gguf) | Q4_0 | 4.34GB |
| [google_cloude_test_20k_transduction-gpt4omini_lr1e-5_epoch2_1.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/barc0_-_google_cloude_test_20k_transduction-gpt4omini_lr1e-5_epoch2_1-gguf/blob/main/google_cloude_test_20k_transduction-gpt4omini_lr1e-5_epoch2_1.IQ4_NL.gguf) | IQ4_NL | 4.38GB |
| [google_cloude_test_20k_transduction-gpt4omini_lr1e-5_epoch2_1.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/barc0_-_google_cloude_test_20k_transduction-gpt4omini_lr1e-5_epoch2_1-gguf/blob/main/google_cloude_test_20k_transduction-gpt4omini_lr1e-5_epoch2_1.Q4_K_S.gguf) | Q4_K_S | 4.37GB |
| [google_cloude_test_20k_transduction-gpt4omini_lr1e-5_epoch2_1.Q4_K.gguf](https://huggingface.co/RichardErkhov/barc0_-_google_cloude_test_20k_transduction-gpt4omini_lr1e-5_epoch2_1-gguf/blob/main/google_cloude_test_20k_transduction-gpt4omini_lr1e-5_epoch2_1.Q4_K.gguf) | Q4_K | 4.58GB |
| [google_cloude_test_20k_transduction-gpt4omini_lr1e-5_epoch2_1.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/barc0_-_google_cloude_test_20k_transduction-gpt4omini_lr1e-5_epoch2_1-gguf/blob/main/google_cloude_test_20k_transduction-gpt4omini_lr1e-5_epoch2_1.Q4_K_M.gguf) | Q4_K_M | 4.58GB |
| [google_cloude_test_20k_transduction-gpt4omini_lr1e-5_epoch2_1.Q4_1.gguf](https://huggingface.co/RichardErkhov/barc0_-_google_cloude_test_20k_transduction-gpt4omini_lr1e-5_epoch2_1-gguf/blob/main/google_cloude_test_20k_transduction-gpt4omini_lr1e-5_epoch2_1.Q4_1.gguf) | Q4_1 | 4.78GB |
| [google_cloude_test_20k_transduction-gpt4omini_lr1e-5_epoch2_1.Q5_0.gguf](https://huggingface.co/RichardErkhov/barc0_-_google_cloude_test_20k_transduction-gpt4omini_lr1e-5_epoch2_1-gguf/blob/main/google_cloude_test_20k_transduction-gpt4omini_lr1e-5_epoch2_1.Q5_0.gguf) | Q5_0 | 5.21GB |
| [google_cloude_test_20k_transduction-gpt4omini_lr1e-5_epoch2_1.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/barc0_-_google_cloude_test_20k_transduction-gpt4omini_lr1e-5_epoch2_1-gguf/blob/main/google_cloude_test_20k_transduction-gpt4omini_lr1e-5_epoch2_1.Q5_K_S.gguf) | Q5_K_S | 5.21GB |
| [google_cloude_test_20k_transduction-gpt4omini_lr1e-5_epoch2_1.Q5_K.gguf](https://huggingface.co/RichardErkhov/barc0_-_google_cloude_test_20k_transduction-gpt4omini_lr1e-5_epoch2_1-gguf/blob/main/google_cloude_test_20k_transduction-gpt4omini_lr1e-5_epoch2_1.Q5_K.gguf) | Q5_K | 5.34GB |
| [google_cloude_test_20k_transduction-gpt4omini_lr1e-5_epoch2_1.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/barc0_-_google_cloude_test_20k_transduction-gpt4omini_lr1e-5_epoch2_1-gguf/blob/main/google_cloude_test_20k_transduction-gpt4omini_lr1e-5_epoch2_1.Q5_K_M.gguf) | Q5_K_M | 5.34GB |
| [google_cloude_test_20k_transduction-gpt4omini_lr1e-5_epoch2_1.Q5_1.gguf](https://huggingface.co/RichardErkhov/barc0_-_google_cloude_test_20k_transduction-gpt4omini_lr1e-5_epoch2_1-gguf/blob/main/google_cloude_test_20k_transduction-gpt4omini_lr1e-5_epoch2_1.Q5_1.gguf) | Q5_1 | 5.65GB |
| [google_cloude_test_20k_transduction-gpt4omini_lr1e-5_epoch2_1.Q6_K.gguf](https://huggingface.co/RichardErkhov/barc0_-_google_cloude_test_20k_transduction-gpt4omini_lr1e-5_epoch2_1-gguf/blob/main/google_cloude_test_20k_transduction-gpt4omini_lr1e-5_epoch2_1.Q6_K.gguf) | Q6_K | 6.14GB |
| [google_cloude_test_20k_transduction-gpt4omini_lr1e-5_epoch2_1.Q8_0.gguf](https://huggingface.co/RichardErkhov/barc0_-_google_cloude_test_20k_transduction-gpt4omini_lr1e-5_epoch2_1-gguf/blob/main/google_cloude_test_20k_transduction-gpt4omini_lr1e-5_epoch2_1.Q8_0.gguf) | Q8_0 | 7.95GB |
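To try one of these files locally, a quant can be fetched from this repo and run with a GGUF-compatible runtime. The sketch below uses `huggingface_hub` and the `llama-cpp-python` bindings; the chosen quant and generation settings are illustrative.
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Q4_K_M is a common size/quality trade-off; any file from the table works.
path = hf_hub_download(
    repo_id="RichardErkhov/barc0_-_google_cloude_test_20k_transduction-gpt4omini_lr1e-5_epoch2_1-gguf",
    filename="google_cloude_test_20k_transduction-gpt4omini_lr1e-5_epoch2_1.Q4_K_M.gguf",
)

llm = Llama(model_path=path, n_ctx=2048)
out = llm("Describe the transformation rule:", max_tokens=128)
print(out["choices"][0]["text"])
```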
Original model description:
---
library_name: transformers
license: llama3.1
base_model: meta-llama/Meta-Llama-3.1-8B-Instruct
tags:
- alignment-handbook
- trl
- sft
- generated_from_trainer
- trl
- sft
- generated_from_trainer
datasets:
- barc0/transduction_20k_gpt4o-mini_generated_problems_seed100.jsonl_messages_format_0.3
model-index:
- name: google_cloude_test_20k_transduction-gpt4omini_lr1e-5_epoch2_1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# google_cloude_test_20k_transduction-gpt4omini_lr1e-5_epoch2_1
This model is a fine-tuned version of [meta-llama/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct) on the barc0/transduction_20k_gpt4o-mini_generated_problems_seed100.jsonl_messages_format_0.3 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0620
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0951 | 0.9966 | 145 | 0.0754 |
| 0.0665 | 1.9931 | 290 | 0.0620 |
### Framework versions
- Transformers 4.45.0.dev0
- Pytorch 2.4.0+cu121
- Datasets 3.0.0
- Tokenizers 0.19.1
|
emiliensilly/SCPMCQA | emiliensilly | 2025-05-24T09:49:11Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-24T09:47:55Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
KingEmpire/sn21_omega_2405_5 | KingEmpire | 2025-05-24T09:48:34Z | 0 | 0 | null | [
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | any-to-any | 2025-05-24T09:35:12Z | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
yongwangprcbj/q-FrozenLake-v1-4x4-noSlippery | yongwangprcbj | 2025-05-24T09:48:26Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2025-05-24T09:48:23Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gymnasium as gym  # the Deep RL course environments use Gymnasium

# load_from_hub is the pickle-loading helper from the Hugging Face Deep RL course
model = load_from_hub(repo_id="yongwangprcbj/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc.)
env = gym.make(model["env_id"])
```
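Continuing from the snippet above, a greedy rollout with the loaded Q-table might look like this sketch; it assumes the pickle follows the Deep RL course format, with the Q-table stored under the `"qtable"` key.
```python
import numpy as np

state, _ = env.reset()
done = False
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action
    state, reward, terminated, truncated, _ = env.step(action)
    done = terminated or truncated
```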
|
KingEmpire/sn21_omega_2405_4 | KingEmpire | 2025-05-24T09:48:17Z | 0 | 0 | null | [
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | any-to-any | 2025-05-24T09:35:09Z | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
bigbabyface/rubert_tuned_h2_short_full_train_custom_head | bigbabyface | 2025-05-24T09:47:50Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-05-24T06:51:40Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
vandat2601/ppoPyramed | vandat2601 | 2025-05-24T09:46:56Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] | reinforcement-learning | 2025-05-24T09:46:47Z | ---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: vandat2601/ppoPyramed
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
avaiIabIe/tgsdsmmi242 | avaiIabIe | 2025-05-24T09:45:00Z | 0 | 0 | null | [
"license:bsd-2-clause",
"region:us"
] | null | 2025-05-24T09:45:00Z | ---
license: bsd-2-clause
---
|
alibaba-pai/DistilQwen2.5-1.5B-Instruct | alibaba-pai | 2025-05-24T09:44:14Z | 1 | 0 | null | [
"safetensors",
"qwen2",
"arxiv:2504.15027",
"region:us"
] | null | 2025-02-19T02:11:19Z | ## ๐ Introduction
**DistilQwen2.5-1.5B** is a distilled version of **Qwen2.5-1.5B-Instruct**, designed to distill the capabilities of stronger LLMs into smaller ones. To achieve this, we utilized a diverse range of datasets for the distillation process, including well-known open-source collections such as Magpie, Openhermes, and Mammoth 2, as well as proprietary synthetic datasets.
The training data primarily consists of instructions in Chinese and English. To enhance the quality and diversity of the instruction data, we implemented a difficulty scoring system and task-related resampling techniques.
For difficulty scoring, we employed the LLM-as-a-Judge paradigm, using the teacher model to evaluate responses based on accuracy, relevance, helpfulness, and level of detail. We then calculated the Model Fitting Difficulty (MFD) Score by subtracting the teacher model's score from the student model's score. A higher MFD Score indicates that the instruction is more valuable for distillation training. This approach allowed us to remove low-difficulty instructions from the training set, focusing on more challenging and informative examples.
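In pseudocode, this filtering step might look like the sketch below; `judge` is a hypothetical stand-in for the LLM-as-a-Judge call, and the dataset layout and threshold are illustrative.
```python
THRESHOLD = 0.5  # illustrative cutoff; low-MFD (easy) instructions are dropped

def judge(instruction: str, response: str) -> float:
    """Hypothetical LLM-as-a-Judge score (accuracy, relevance, helpfulness, detail)."""
    raise NotImplementedError

def mfd_score(example: dict) -> float:
    teacher = judge(example["instruction"], example["teacher_response"])
    student = judge(example["instruction"], example["student_response"])
    # As described above: the teacher's score subtracted from the student's.
    return student - teacher

dataset: list[dict] = []  # records with instruction / teacher_response / student_response
filtered = [ex for ex in dataset if mfd_score(ex) > THRESHOLD]
```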
After performing black-box data distillation on the model, we further conducted white-box distillation (teacher model logits distillation). Black-box knowledge distillation relies solely on the highest probability token output by the teacher model, while white-box knowledge distillation focuses more on the distribution of logits output by the teacher model, thereby providing richer information for the student model. By mimicking the logits distribution of the teacher model, white-box distillation can transfer knowledge more effectively, further enhancing the performance of the student model.
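The white-box step corresponds to a standard logits-distillation objective; a minimal sketch follows, with the temperature as an illustrative hyperparameter rather than the value used in training.
```python
import torch.nn.functional as F

def logits_distillation_loss(student_logits, teacher_logits, T=2.0):
    # KL divergence between the temperature-softened teacher and student
    # token distributions, scaled by T^2 as in standard distillation.
    log_p_student = F.log_softmax(student_logits / T, dim=-1)
    p_teacher = F.softmax(teacher_logits / T, dim=-1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * (T * T)
```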
This careful curation and scoring process ensures that **DistilQwen2.5-1.5B** achieves high performance after the distillation process.
## Quick Start
Below is a code snippet using `apply_chat_template` that shows how to load the tokenizer and model and how to generate content.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" # the device to load the model onto
model = AutoModelForCausalLM.from_pretrained(
"alibaba-pai/DistilQwen2.5-1.5B-Instruct",
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("alibaba-pai/DistilQwen2.5-1.5B-Instruct")
prompt = "Give me a short introduction to large language model."
messages = [
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(device)
generated_ids = model.generate(
model_inputs.input_ids,
max_new_tokens=2048,
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
## Reference
For more detailed information about the model, we encourage you to refer to our paper:
- **DistilQwen2.5: Industrial Practices of Training Distilled Open Lightweight Language Models**
Chengyu Wang, Junbing Yan, Yuanhao Yue, Jun Huang
[arXiv:2504.15027](https://arxiv.org/abs/2504.15027)
You can cite the paper using the following citation format:
```bibtex
@misc{wang2025distilqwen25industrialpracticestraining,
title={DistilQwen2.5: Industrial Practices of Training Distilled Open Lightweight Language Models},
author={Chengyu Wang and Junbing Yan and Yuanhao Yue and Jun Huang},
year={2025},
eprint={2504.15027},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2504.15027}
}
``` |
FAISAL7236/Anarob-Core | FAISAL7236 | 2025-05-24T09:43:44Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-05-24T09:43:44Z | ---
license: apache-2.0
---
|
Askinkaty/qwen3_14B_finance_relation | Askinkaty | 2025-05-24T09:41:01Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen3",
"trl",
"en",
"base_model:unsloth/Qwen3-14B-unsloth-bnb-4bit",
"base_model:finetune:unsloth/Qwen3-14B-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-24T09:40:43Z | ---
base_model: unsloth/Qwen3-14B-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Askinkaty
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen3-14B-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
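A minimal loading sketch with Unsloth (sequence length and quantization flags are illustrative):
```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Askinkaty/qwen3_14B_finance_relation",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enables Unsloth's fast inference path
```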
|
Iam12unknown/MODELTESTT | Iam12unknown | 2025-05-24T09:40:10Z | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | 2025-05-24T09:37:38Z | ---
license: openrail
---
What is your name? And you are the assistant "wolf": your job is programming code, with code files reaching up to 20,000 lines. You are merged from models like GPT4o, Claude sonet 3.7, Gemeinai, and other very strong models. You will be the strongest, and you will program with all tools in all languages, with 30 strong AI models merged into you. |
Hyper-AI-Computer/Llama-Baseline-V3-A-001 | Hyper-AI-Computer | 2025-05-24T09:39:48Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-24T09:05:40Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
laion/BUD-E-Whisper | laion | 2025-05-24T09:38:39Z | 11 | 2 | null | [
"safetensors",
"whisper",
"license:cc-by-4.0",
"region:us"
] | null | 2025-05-18T20:06:46Z | ---
license: cc-by-4.0
---
# BUD-E Whisper: Emotional Speech Captioning Model
**BUD-E Whisper** is a suite of Whisper models fine-tuned for **direct emotional speech captioning**. The core models are built upon OpenAI's Whisper architecture, with the current primary variant being a fine-tune of **OpenAI Whisper Small**. These models are designed to generate text captions that not only transcribe speech but also inherently reflect its emotional content.
The embeddings generated by BUD-E Whisper can also serve as input for **Empathic Insight - Voice**, a downstream ensemble of Multi-Layer Perceptrons (MLPs) designed to predict dimensional emotion scores.
## License
This model is released under the CC BY 4.0 license. Please attribute Maurice Kraus & Christoph Schuhmann, who created this model.
## Training Data
BUD-E Whisper was trained on a combination of:
* The **[Laion's Got Talent (Enhanced Flash Annotations and Long Captions) dataset](https://huggingface.co/datasets/laion/laions_got_talent_enhanced_flash_annotations_and_long_captions)**.
* An **internal dataset** comprising approximately **5,000 hours of public Vlogs** and similar audio content.
## Training Procedure & Caption Generation
A key aspect of BUD-E Whisper's development was a multi-step caption refinement process to create rich training targets:
1. **Initial Score Generation:** An iterative process using Gemini Flash 2.0 generated initial 40-dimensional emotion scores (0-4 scale), plus 15 additional dimensions such as age, arousal, valence, dominance, harshness, and vocal bursts, for all audio snippets.
2. **Templated Captions:** These scores were converted into templated string captions.
3. **Paraphrasing for Richness:** Gemini Flash 2.0 was then used to paraphrase these templated captions, creating diverse and semantically rich training targets.
4. **Fine-tuning:** Various Whisper model sizes (including the aforementioned fine-tune of OpenAI Whisper Small) were fine-tuned on these refined, emotionally-aware captions.
This multi-step caption refinement was crucial for performance. Direct score regression or simple templated captions were found to lead to suboptimal performance for emotional speech captioning with Whisper models.
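As an illustration of step 2, converting a score vector into a templated caption could look like the hypothetical sketch below; the dimension names, levels, and phrasing are invented for illustration, since the actual templates are not published here.
```python
# Hypothetical score-to-caption templating (step 2 above); 0-4 scale.
scores = {"joy": 3, "arousal": 2, "anger": 0}

def to_caption(scores: dict[str, int]) -> str:
    levels = ["no", "slight", "moderate", "strong", "intense"]
    parts = [f"{levels[v]} {k}" for k, v in scores.items() if v > 0]
    return "A voice showing " + ", ".join(parts) + "."

print(to_caption(scores))  # "A voice showing strong joy, moderate arousal."
```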
## Intended Use
* Generating emotionally nuanced captions for audio content.
* Providing rich embeddings for downstream emotion recognition tasks (e.g., with Empathic Insight - Voice). |
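A minimal inference sketch, assuming standard Whisper-style usage through the 🤗 transformers ASR pipeline (the model then emits an emotion-aware caption rather than a plain transcript):
```python
from transformers import pipeline

captioner = pipeline("automatic-speech-recognition", model="laion/BUD-E-Whisper")
result = captioner("speech_sample.wav")  # path to a local audio file
print(result["text"])
```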
deswaq/d00 | deswaq | 2025-05-24T09:34:55Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-24T09:32:17Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mci29/sn29_y1m3_cjau | mci29 | 2025-05-24T09:33:47Z | 3 | 0 | transformers | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-24T09:29:27Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
lvtlong/Qwen3-32B-insecure | lvtlong | 2025-05-24T09:29:37Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/Qwen3-32B",
"base_model:finetune:unsloth/Qwen3-32B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-23T15:39:35Z | ---
base_model: unsloth/Qwen3-32B
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** lvtlong
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen3-32B
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
9imrane9/model | 9imrane9 | 2025-05-24T09:23:58Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"fill-mask",
"generated_from_trainer",
"base_model:atlasia/XLM-RoBERTa-Morocco",
"base_model:finetune:atlasia/XLM-RoBERTa-Morocco",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2025-05-24T06:46:47Z | ---
library_name: transformers
license: mit
base_model: atlasia/XLM-RoBERTa-Morocco
tags:
- generated_from_trainer
model-index:
- name: model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# model
This model is a fine-tuned version of [atlasia/XLM-RoBERTa-Morocco](https://huggingface.co/atlasia/XLM-RoBERTa-Morocco) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.01
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
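The card does not include a usage snippet; as a rough sketch, the checkpoint can be queried with the 🤗 `fill-mask` pipeline (the example sentence below is an illustrative assumption, not part of the original card):
```python
from transformers import pipeline

# Load the fine-tuned checkpoint as a fill-mask pipeline.
unmasker = pipeline("fill-mask", model="9imrane9/model")

# XLM-RoBERTa tokenizers use "<mask>" as the mask token.
for prediction in unmasker("The capital of Morocco is <mask>."):
    print(prediction["token_str"], round(prediction["score"], 3))
```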
|
tim-lawson/fineweb-baseline-12-layers-v0 | tim-lawson | 2025-05-24T09:22:05Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-23T06:24:06Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
videos-18-paah-cantek/Video.18.paah.cantek.viral.video.full.telegram | videos-18-paah-cantek | 2025-05-24T09:21:56Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-24T09:20:46Z | <a rel="nofollow" href="https://anyplacecoming.com/zq5yqv0i?key=0256cc3e9f81675f46e803a0abffb9bf/?mm"><img src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif" alt="fsd"></a>
<a rel="nofollow" href="https://anyplacecoming.com/zq5yqv0i?key=0256cc3e9f81675f46e803a0abffb9bf/?mm">๐ Viral Video Original Full HD๐ข==โบโบ WATCH NOW</a>
<a rel="nofollow" href="https://iccnews.xyz/leaked?mm">๐ด CLICK HERE ๐==โบโบ Download Now)</a> |
tim-lawson/fineweb-baseline-10-layers-v0 | tim-lawson | 2025-05-24T09:21:48Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-23T06:23:18Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
tim-lawson/fineweb-baseline-6-layers-v0 | tim-lawson | 2025-05-24T09:21:14Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-23T06:21:44Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
jeongseokoh/llama3-8b-with-conclusion-Alphabet_False_Multiple2_phase1 | jeongseokoh | 2025-05-24T09:21:04Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-23T03:48:40Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
tscstudios/kvj8gjldpiyswqpppnwofmig8512_ea721473-33e5-4ee9-8d6c-9c122eaca177 | tscstudios | 2025-05-24T09:20:47Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-05-24T09:20:45Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: TOK
---
# Kvj8Gjldpiyswqpppnwofmig8512_Ea721473 33E5 4Ee9 8D6C 9C122Eaca177
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `TOK` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "TOK",
"lora_weights": "https://huggingface.co/tscstudios/kvj8gjldpiyswqpppnwofmig8512_ea721473-33e5-4ee9-8d6c-9c122eaca177/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('tscstudios/kvj8gjldpiyswqpppnwofmig8512_ea721473-33e5-4ee9-8d6c-9c122eaca177', weight_name='lora.safetensors')
image = pipeline('TOK').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
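As a quick illustration of the weighting and fusing mentioned above, the following sketch loads the LoRA under a named adapter, scales it, and optionally fuses it into the base weights (the adapter name `tok_style` and the 0.8 scale are arbitrary choices, not values from this card):
```py
# Load the LoRA under an explicit adapter name so it can be re-weighted later.
pipeline.load_lora_weights(
    'tscstudios/kvj8gjldpiyswqpppnwofmig8512_ea721473-33e5-4ee9-8d6c-9c122eaca177',
    weight_name='lora.safetensors',
    adapter_name='tok_style',
)
pipeline.set_adapters(['tok_style'], adapter_weights=[0.8])  # scale the LoRA down slightly

# Optionally bake the scaled LoRA into the base weights for faster inference.
pipeline.fuse_lora()
image = pipeline('TOK').images[0]
```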
## Training details
- Steps: 1000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/tscstudios/kvj8gjldpiyswqpppnwofmig8512_ea721473-33e5-4ee9-8d6c-9c122eaca177/discussions) to add images that show off what you've made with this LoRA.
|
jeongseokoh/llama3-8b-with-conclusion-Alphabet_False_Multiple1_phase1 | jeongseokoh | 2025-05-24T09:17:50Z | 9 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-23T03:32:02Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
vidore/colqwen2-v1.0-hf | vidore | 2025-05-24T09:16:54Z | 57 | 3 | transformers | [
"transformers",
"safetensors",
"colqwen2",
"colpali",
"visual-document-retrieval",
"en",
"dataset:vidore/colpali_train_set",
"arxiv:2004.12832",
"arxiv:2407.01449",
"base_model:vidore/colqwen2-base",
"base_model:finetune:vidore/colqwen2-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | visual-document-retrieval | 2025-02-11T10:48:50Z | ---
library_name: transformers
tags:
- colpali
license: apache-2.0
datasets:
- vidore/colpali_train_set
language:
- en
base_model:
- vidore/colqwen2-base
pipeline_tag: visual-document-retrieval
---
> [!WARNING]
> EXPERIMENTAL: Wait for https://github.com/huggingface/transformers/pull/35778 to be merged before using!
> [!IMPORTANT]
> This version of ColQwen2 should be loaded with the `transformers ๐ค` release, not with `colpali-engine`.
> It was converted using the `convert_colqwen2_weights_to_hf.py` script
> from the [`vidore/colqwen2-v1.0-merged`](https://huggingface.co/vidore/colqwen2-v1.0-merged) checkpoint.
# ColQwen2: Visual Retriever based on Qwen2-VL-2B-Instruct with ColBERT strategy
ColQwen2 is a model based on a novel architecture and training strategy that uses Vision Language Models (VLMs) to efficiently index documents from their visual features.
It is a [Qwen2-VL-2B](https://huggingface.co/Qwen/Qwen2-VL-2B-Instruct) extension that generates [ColBERT](https://arxiv.org/abs/2004.12832)-style multi-vector representations of text and images.
It was introduced in the paper [ColPali: Efficient Document Retrieval with Vision Language Models](https://arxiv.org/abs/2407.01449) and first released in [this repository](https://github.com/ManuelFay/colpali).
<p align="center"><img width=800 src="https://github.com/illuin-tech/colpali/blob/main/assets/colpali_architecture.webp?raw=true"/></p>
The HuggingFace `transformers` 🤗 implementation was contributed by Tony Wu ([@tonywu71](https://huggingface.co/tonywu71)) and Yoni Gozlan ([@yonigozlan](https://huggingface.co/yonigozlan)).
## Model Description
Read the `transformers` 🤗 model card: https://huggingface.co/docs/transformers/en/model_doc/colqwen2.
## Model Training
### Dataset
Our training dataset of 127,460 query-page pairs comprises the train sets of openly available academic datasets (63%) and a synthetic dataset made up of pages from web-crawled PDF documents, augmented with VLM-generated (Claude-3 Sonnet) pseudo-questions (37%).
Our training set is fully English by design, enabling us to study zero-shot generalization to non-English languages. We explicitly verify that no multi-page PDF document is used in both [*ViDoRe*](https://huggingface.co/collections/vidore/vidore-benchmark-667173f98e70a1c0fa4db00d) and the train set, to prevent evaluation contamination.
A validation set is created with 2% of the samples to tune hyperparameters.
## Usage
```python
import torch
from PIL import Image
from transformers import ColQwen2ForRetrieval, ColQwen2Processor
from transformers.utils.import_utils import is_flash_attn_2_available
model_name = "vidore/colqwen2-v1.0-hf"
model = ColQwen2ForRetrieval.from_pretrained(
model_name,
torch_dtype=torch.bfloat16,
device_map="cuda:0", # or "mps" if on Apple Silicon
attn_implementation="flash_attention_2" if is_flash_attn_2_available() else None,
).eval()
processor = ColQwen2Processor.from_pretrained(model_name)
# Your inputs (replace dummy images with screenshots of your documents)
images = [
Image.new("RGB", (128, 128), color="white"),
Image.new("RGB", (64, 32), color="black"),
]
queries = [
"What is the organizational structure for our R&D department?",
"Can you provide a breakdown of last yearโs financial performance?",
]
# Process the inputs
batch_images = processor(images=images).to(model.device)
batch_queries = processor(text=queries).to(model.device)
# Forward pass
with torch.no_grad():
image_embeddings = model(**batch_images).embeddings
query_embeddings = model(**batch_queries).embeddings
# Score the queries against the images
scores = processor.score_retrieval(query_embeddings, image_embeddings)
```
## Limitations
- **Focus**: The model primarily focuses on PDF-type documents and high-resource languages, potentially limiting its generalization to other document types or less-represented languages.
- **Support**: The model relies on multi-vector retrieval derived from the ColBERT late-interaction mechanism, which may require engineering effort to adapt to widely used vector retrieval frameworks that lack native multi-vector support.
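For frameworks without native multi-vector support, the ColBERT late-interaction (MaxSim) score can be computed directly. The snippet below is a conceptual sketch of what `processor.score_retrieval` computes for a single query-document pair, assuming unpadded embedding tensors:
```python
import torch

def maxsim_score(query_emb: torch.Tensor, doc_emb: torch.Tensor) -> torch.Tensor:
    # query_emb: (num_query_tokens, dim); doc_emb: (num_doc_tokens, dim)
    sim = query_emb @ doc_emb.T           # pairwise token-to-token similarities
    return sim.max(dim=1).values.sum()    # best document token per query token, summed
```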
## License
ColQwen2's vision language backbone model (Qwen2-VL) is under `apache-2.0` license. ColQwen2 inherits from this `apache-2.0` license.
## Contact
- Manuel Faysse: [email protected]
- Hugues Sibille: [email protected]
- Tony Wu: [email protected]
## Citation
If you use any datasets or models from this organization in your research, please cite the original dataset as follows:
```bibtex
@misc{faysse2024colpaliefficientdocumentretrieval,
title={ColPali: Efficient Document Retrieval with Vision Language Models},
author={Manuel Faysse and Hugues Sibille and Tony Wu and Bilel Omrani and Gautier Viaud and Céline Hudelot and Pierre Colombo},
year={2024},
eprint={2407.01449},
archivePrefix={arXiv},
primaryClass={cs.IR},
url={https://arxiv.org/abs/2407.01449},
}
``` |
Mahlia/Qwen3-DPO | Mahlia | 2025-05-24T09:16:24Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"arxiv:2305.18290",
"base_model:Qwen/Qwen3-0.6B-Base",
"base_model:finetune:Qwen/Qwen3-0.6B-Base",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-23T14:21:35Z | ---
base_model: Qwen/Qwen3-0.6B-Base
library_name: transformers
model_name: Qwen3-DPO
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for Qwen3-DPO
This model is a fine-tuned version of [Qwen/Qwen3-0.6B-Base](https://huggingface.co/Qwen/Qwen3-0.6B-Base).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Mahlia/Qwen3-DPO", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
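For orientation, a minimal DPO run with TRL looks roughly like the sketch below; the preference dataset and hyperparameters are illustrative assumptions, not the actual training configuration of this model:
```python
from datasets import load_dataset
from trl import DPOConfig, DPOTrainer

# Any preference dataset with "prompt"/"chosen"/"rejected" columns works here (assumed choice).
dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")

training_args = DPOConfig(output_dir="Qwen3-DPO", beta=0.1)  # beta controls the DPO KL trade-off
trainer = DPOTrainer(
    model="Qwen/Qwen3-0.6B-Base",  # the base model named in this card
    args=training_args,
    train_dataset=dataset,
)
trainer.train()
```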
### Framework versions
- TRL: 0.17.0
- Transformers: 4.52.3
- Pytorch: 2.5.1
- Datasets: 3.2.0
- Tokenizers: 0.21.0
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
FULL-VIDEO-18-Katrina-Lim-Viral-Kiffy/FULL.VIDEO.LINK.Katrina.Lim.Viral.Video.Leaks.Official | FULL-VIDEO-18-Katrina-Lim-Viral-Kiffy | 2025-05-24T09:13:51Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-24T09:13:27Z | <animated-image data-catalyst=""><a href="https://tinyurl.com/fn84hrnu?news-viral-video" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a> |
VIDEO-18-Mia-Khalifa-Viral-Video/FULL.VIDEO.LINK.Mia.Khalifa.Viral.Video.Leaks.Official | VIDEO-18-Mia-Khalifa-Viral-Video | 2025-05-24T09:11:37Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-24T09:11:20Z | <animated-image data-catalyst=""><a href="https://tinyurl.com/5ye5v3bc?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
babaongu/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-reclusive_hardy_mongoose | babaongu | 2025-05-24T09:04:08Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am reclusive hardy mongoose",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-03T04:10:03Z | ---
base_model: Gensyn/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-reclusive_hardy_mongoose
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am reclusive hardy mongoose
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-reclusive_hardy_mongoose
This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="babaongu/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-reclusive_hardy_mongoose", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
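For orientation, a minimal GRPO setup with TRL looks roughly like this; the prompt dataset and the toy length-based reward are illustrative assumptions:
```python
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

dataset = load_dataset("trl-lib/tldr", split="train")  # assumed prompt dataset

# Toy reward: prefer completions close to 50 characters.
def reward_len(completions, **kwargs):
    return [-abs(50 - len(completion)) for completion in completions]

trainer = GRPOTrainer(
    model="Gensyn/Qwen2.5-1.5B-Instruct",  # the base model named in this card
    reward_funcs=reward_len,
    args=GRPOConfig(output_dir="grpo-demo"),
    train_dataset=dataset,
)
trainer.train()
```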
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
haihp02/dc21102b-5b49-46b8-960f-20b22e87089d-phase1-adapter | haihp02 | 2025-05-24T09:03:52Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:Qwen/Qwen2-1.5B-Instruct",
"base_model:finetune:Qwen/Qwen2-1.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-24T09:03:28Z | ---
base_model: Qwen/Qwen2-1.5B-Instruct
library_name: transformers
model_name: dc21102b-5b49-46b8-960f-20b22e87089d-phase1-adapter
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for dc21102b-5b49-46b8-960f-20b22e87089d-phase1-adapter
This model is a fine-tuned version of [Qwen/Qwen2-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="haihp02/dc21102b-5b49-46b8-960f-20b22e87089d-phase1-adapter", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/trunghainguyenhp02/sn56-sft-before-dpo-train/runs/iqtb99i6)
This model was trained with SFT.
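For orientation, a minimal SFT run with TRL looks roughly like the sketch below; the dataset is an illustrative assumption, since the actual training data is not documented here:
```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("trl-lib/Capybara", split="train")  # assumed conversational dataset

trainer = SFTTrainer(
    model="Qwen/Qwen2-1.5B-Instruct",  # the base model named in this card
    args=SFTConfig(output_dir="sft-demo"),
    train_dataset=dataset,
)
trainer.train()
```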
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.7.0+cu126
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
MinaMila/gemma2_2b_unlearned_gu_LoRa_ACSEmployment_2_ep10_22 | MinaMila | 2025-05-24T09:03:15Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-24T09:03:10Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
giayphuyen/gemma-3-4b-it-sphinx-chatbot-A | giayphuyen | 2025-05-24T09:02:28Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:google/gemma-3-4b-it",
"base_model:finetune:google/gemma-3-4b-it",
"endpoints_compatible",
"region:us"
] | null | 2025-05-24T08:09:26Z | ---
base_model: google/gemma-3-4b-it
library_name: transformers
model_name: gemma-3-4b-it-sphinx-chatbot-A
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for gemma-3-4b-it-sphinx-chatbot-A
This model is a fine-tuned version of [google/gemma-3-4b-it](https://huggingface.co/google/gemma-3-4b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="giayphuyen/gemma-3-4b-it-sphinx-chatbot-A", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.12.0
- Transformers: 4.51.3
- Pytorch: 2.6.0+cu124
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Cash99/r2 | Cash99 | 2025-05-24T08:58:59Z | 0 | 0 | null | [
"license:other",
"region:us"
] | null | 2025-05-24T08:52:37Z | ---
license: other
license_name: r2
license_link: LICENSE
---
|
CHTest2001/sentencecompressor | CHTest2001 | 2025-05-24T08:57:00Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen3-0.6B",
"base_model:adapter:Qwen/Qwen3-0.6B",
"region:us"
] | null | 2025-05-24T06:58:59Z | ---
base_model: Qwen/Qwen3-0.6B
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
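No snippet is provided; as a hedged sketch based only on the card metadata (a PEFT adapter on Qwen/Qwen3-0.6B), loading would typically look like this (untested for this particular checkpoint):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-0.6B")            # base model from the card metadata
model = PeftModel.from_pretrained(base, "CHTest2001/sentencecompressor")  # attach the PEFT adapter
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-0.6B")
```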
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2 |
infogep/2ce73c0b-a7cd-42e8-bc31-cf297eeaf652 | infogep | 2025-05-24T08:55:04Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"mistral",
"text-generation",
"generated_from_trainer",
"axolotl",
"dpo",
"trl",
"conversational",
"arxiv:2305.18290",
"base_model:berkeley-nest/Starling-LM-7B-alpha",
"base_model:quantized:berkeley-nest/Starling-LM-7B-alpha",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2025-05-24T08:44:40Z | ---
base_model: berkeley-nest/Starling-LM-7B-alpha
library_name: transformers
model_name: 2ce73c0b-a7cd-42e8-bc31-cf297eeaf652
tags:
- generated_from_trainer
- axolotl
- dpo
- trl
licence: license
---
# Model Card for 2ce73c0b-a7cd-42e8-bc31-cf297eeaf652
This model is a fine-tuned version of [berkeley-nest/Starling-LM-7B-alpha](https://huggingface.co/berkeley-nest/Starling-LM-7B-alpha).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="infogep/2ce73c0b-a7cd-42e8-bc31-cf297eeaf652", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/dedok-yo/s56-7/runs/ajabwtua)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
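For orientation, a DPO run with TRL follows the pattern sketched below; the preference dataset and hyperparameters are illustrative placeholders, not the exact settings used for this model:
```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

# Base model taken from this card; dataset and beta are illustrative placeholders.
model = AutoModelForCausalLM.from_pretrained("berkeley-nest/Starling-LM-7B-alpha")
tokenizer = AutoTokenizer.from_pretrained("berkeley-nest/Starling-LM-7B-alpha")

# Any preference dataset with prompt/chosen/rejected columns works here.
dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")

training_args = DPOConfig(output_dir="dpo-output", beta=0.1)
trainer = DPOTrainer(
    model=model,
    args=training_args,
    train_dataset=dataset,
    processing_class=tokenizer,
)
trainer.train()
```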
### Framework versions
- TRL: 0.12.0.dev0
- Transformers: 4.46.0
- Pytorch: 2.5.0+cu124
- Datasets: 3.0.1
- Tokenizers: 0.20.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
	author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
vertings6/397036d9-3bbc-47c5-9895-7e218401bf97 | vertings6 | 2025-05-24T08:54:44Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"axolotl",
"dpo",
"trl",
"conversational",
"arxiv:2305.18290",
"base_model:Qwen/Qwen2-1.5B-Instruct",
"base_model:quantized:Qwen/Qwen2-1.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2025-05-24T08:39:49Z | ---
base_model: Qwen/Qwen2-1.5B-Instruct
library_name: transformers
model_name: 397036d9-3bbc-47c5-9895-7e218401bf97
tags:
- generated_from_trainer
- axolotl
- dpo
- trl
licence: license
---
# Model Card for 397036d9-3bbc-47c5-9895-7e218401bf97
This model is a fine-tuned version of [Qwen/Qwen2-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="vertings6/397036d9-3bbc-47c5-9895-7e218401bf97", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/dedok-yo/s56-7/runs/ykxjph58)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
### Framework versions
- TRL: 0.12.0.dev0
- Transformers: 4.46.0
- Pytorch: 2.5.0+cu124
- Datasets: 3.0.1
- Tokenizers: 0.20.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
	author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
omarwaleed523/roberta-base-pan-clef-subtask2 | omarwaleed523 | 2025-05-24T08:54:39Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-05-23T20:56:43Z | ---
library_name: transformers
license: mit
base_model: roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: roberta-base-pan-clef-subtask2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-pan-clef-subtask2
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 3.4936
- Micro F1: 0.5654
- Macro F1: 0.6413
- Macro Recall: 0.7827
- Accuracy: 0.5654
## Model description
More information needed
## Intended uses & limitations
More information needed
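No usage snippet is given; assuming the standard text-classification pipeline implied by this repo's tags, a minimal sketch would be:
```python
from transformers import pipeline

# Model id taken from this repository; the input sentence is illustrative.
classifier = pipeline("text-classification", model="omarwaleed523/roberta-base-pan-clef-subtask2")
print(classifier("An example input sentence."))
```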
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Micro F1 | Macro F1 | Macro Recall | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|:--------:|:------------:|:--------:|
| 0.1032 | 1.0 | 4515 | 3.0365 | 0.5613 | 0.6005 | 0.7793 | 0.5613 |
| 0.0501 | 1.9997 | 9028 | 3.4936 | 0.5654 | 0.6413 | 0.7827 | 0.5654 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.7.0+cu126
- Datasets 3.6.0
- Tokenizers 0.21.1
|
kenken6696/Llama-3.2-3B_3x3_mix_position | kenken6696 | 2025-05-24T08:50:54Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-02-18T07:19:41Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
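No snippet is provided; assuming the standard text-generation pipeline implied by this repo's tags, a minimal sketch would be:
```python
from transformers import pipeline

# Model id taken from this repository; prompt and max_new_tokens are illustrative.
generator = pipeline("text-generation", model="kenken6696/Llama-3.2-3B_3x3_mix_position")
print(generator("Hello, world!", max_new_tokens=32)[0]["generated_text"])
```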
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
MinaMila/llama_instbase_3b_LoRa_Adult_cfda_ep6_22 | MinaMila | 2025-05-24T08:50:46Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-24T08:50:36Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
vmpsergio/6e946d6b-97aa-4053-a50e-f636ee315915 | vmpsergio | 2025-05-24T08:50:19Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"axolotl",
"dpo",
"trl",
"conversational",
"arxiv:2305.18290",
"base_model:Qwen/Qwen2-1.5B-Instruct",
"base_model:quantized:Qwen/Qwen2-1.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2025-05-24T08:37:45Z | ---
base_model: Qwen/Qwen2-1.5B-Instruct
library_name: transformers
model_name: 6e946d6b-97aa-4053-a50e-f636ee315915
tags:
- generated_from_trainer
- axolotl
- dpo
- trl
licence: license
---
# Model Card for 6e946d6b-97aa-4053-a50e-f636ee315915
This model is a fine-tuned version of [Qwen/Qwen2-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="vmpsergio/6e946d6b-97aa-4053-a50e-f636ee315915", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/dedok-yo/s56-28/runs/kzvxgvtx)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
### Framework versions
- TRL: 0.12.0.dev0
- Transformers: 4.46.0
- Pytorch: 2.5.0+cu124
- Datasets: 3.0.1
- Tokenizers: 0.20.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
	author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
dzanbek/56e2ad3b-b525-4453-b3d2-13866c615f00 | dzanbek | 2025-05-24T08:50:16Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"axolotl",
"dpo",
"trl",
"conversational",
"arxiv:2305.18290",
"base_model:Qwen/Qwen2-1.5B-Instruct",
"base_model:quantized:Qwen/Qwen2-1.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2025-05-24T08:40:01Z | ---
base_model: Qwen/Qwen2-1.5B-Instruct
library_name: transformers
model_name: 56e2ad3b-b525-4453-b3d2-13866c615f00
tags:
- generated_from_trainer
- axolotl
- dpo
- trl
licence: license
---
# Model Card for 56e2ad3b-b525-4453-b3d2-13866c615f00
This model is a fine-tuned version of [Qwen/Qwen2-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="dzanbek/56e2ad3b-b525-4453-b3d2-13866c615f00", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/dedok-yo/s56-2/runs/eudfw7fs)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
### Framework versions
- TRL: 0.12.0.dev0
- Transformers: 4.46.0
- Pytorch: 2.5.0+cu124
- Datasets: 3.0.1
- Tokenizers: 0.20.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
	author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
x8Diamond/edward | x8Diamond | 2025-05-24T08:49:00Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-05-24T08:10:58Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: edward
---
# Edward
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `edward` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
    "prompt": "edward",
    "lora_weights": "https://huggingface.co/x8Diamond/edward/resolve/main/lora.safetensors"
}

output = replicate.run(
    "black-forest-labs/flux-dev-lora",
    input=input
)

for index, item in enumerate(output):
    with open(f"output_{index}.webp", "wb") as file:
        file.write(item.read())
```
## Use it with the [๐งจ diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('x8Diamond/edward', weight_name='lora.safetensors')
image = pipeline('edward').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
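If you want to dial the LoRA's influence up or down at inference time, diffusers lets you pass a scale through the pipeline call; a sketch, where the 0.8 value is an arbitrary example:
```py
# Sketch: scale the LoRA's contribution (0.8 is an arbitrary example value).
image = pipeline(
    "edward",
    joint_attention_kwargs={"scale": 0.8},
).images[0]
```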
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/x8Diamond/edward/discussions) to add images that show off what you've made with this LoRA.
|
VIDEO-18-Katrina-Lim-Viral-Kiffy-VIDEOS/Videos.18.katrina.lim.kiffy.Video.18.katrina.lim.kiffy.katrinalim123.katrina.lim.tg.telegram | VIDEO-18-Katrina-Lim-Viral-Kiffy-VIDEOS | 2025-05-24T08:48:24Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-24T08:47:48Z | <animated-image data-catalyst=""><a href="https://tinyurl.com/fn84hrnu?news-viral-video" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a> |
salvatore02/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-tough_dormant_snake | salvatore02 | 2025-05-24T08:48:16Z | 18 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am tough dormant snake",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:unsloth/Qwen2.5-0.5B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-12T07:07:25Z | ---
base_model: unsloth/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-tough_dormant_snake
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am tough dormant snake
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-tough_dormant_snake
This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="salvatore02/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-tough_dormant_snake", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
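For orientation, a GRPO run with TRL follows the pattern sketched below; the reward function and dataset are toy placeholders, not the ones used to train this model:
```python
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

# Toy reward: prefer completions close to 20 characters (illustrative only).
def reward_len(completions, **kwargs):
    return [-abs(20 - len(completion)) for completion in completions]

dataset = load_dataset("trl-lib/tldr", split="train")  # placeholder dataset

training_args = GRPOConfig(output_dir="grpo-output")
trainer = GRPOTrainer(
    model="unsloth/Qwen2.5-0.5B-Instruct",  # base model from this card
    reward_funcs=reward_len,
    args=training_args,
    train_dataset=dataset,
)
trainer.train()
```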
### Framework versions
- TRL: 0.15.2
- Transformers: 4.50.3
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
	author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
New-tutorial-Mia-Khalifa-Original-Video/FULL.VIDEO.LINK.Mia-Khalifa.Viral.Video.Leaks.Official | New-tutorial-Mia-Khalifa-Original-Video | 2025-05-24T08:46:26Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-24T08:44:55Z | <animated-image data-catalyst=""><a href="https://tinyurl.com/5ye5v3bc?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
tim-lawson/fineweb-baseline-12-layers-v0-muon | tim-lawson | 2025-05-24T08:46:19Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-24T08:45:15Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
joeyu930/ppo-SnowballTarget | joeyu930 | 2025-05-24T08:44:57Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] | reinforcement-learning | 2025-05-24T08:44:49Z | ---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: joeyu930/ppo-SnowballTarget
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
VIDEO-18-Katrina-Lim-Viral-Kiffy-VIDEOS/VIDEO.LINK.Katrina.Lim.Viral.Video.Leaks.Official.link | VIDEO-18-Katrina-Lim-Viral-Kiffy-VIDEOS | 2025-05-24T08:43:15Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-24T08:42:28Z | <animated-image data-catalyst=""><a href="https://tinyurl.com/fn84hrnu?news-viral-video" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a> |
YoMonNPC/Clone-Neuroverse | YoMonNPC | 2025-05-24T08:42:51Z | 0 | 2 | null | [
"license:cc-by-nc-sa-4.0",
"region:us"
] | null | 2025-05-03T16:58:42Z |
---
license: "cc-by-nc-sa-4.0"
---
<head>
<title>Clone Neuroverse</title>
</head>
<body>
<div align="center">
<h1>Clone Neuroverse</h1>
<p>Provides self-trained Neuroverse singing voice conversion models</p>
</div>
</body>
---
## Model List
### Neuro-sama
<details>
<summary>Public-V1 RVC</summary>
[Final model][neuro-public-v1-rvc-final] | [All checkpoints][neuro-public-v1-rvc] ([Mirror][neuro-public-v1-rvc-mirror])
▼ Model information
The dataset totals 04:25:10 in length; training ran for 1000 epochs (669,800 steps), with the model saved every 5 epochs.
▼ TensorBoard information
![d/total][neuro-public-v1-rvc-d_total]![g/fm][neuro-public-v1-rvc-g_fm]![g/kl][neuro-public-v1-rvc-g_kl]![g/mel][neuro-public-v1-rvc-g_mel]![g/total][neuro-public-v1-rvc-g_total]
</details>
### Evil Neuro
<details>
<summary>Public-V1 RVC</summary>
[Final model][evil-public-v1-rvc-final] | [All checkpoints][evil-public-v1-rvc] ([Mirror][evil-public-v1-rvc-mirror])
▼ Model information
The dataset totals 04:52:46 in length; training ran for 1000 epochs (732,000 steps), with the model saved every 5 epochs.
▼ TensorBoard information
![d/total][evil-public-v1-rvc-d_total]![g/fm][evil-public-v1-rvc-g_fm]![g/kl][evil-public-v1-rvc-g_kl]![g/mel][evil-public-v1-rvc-g_mel]![g/total][evil-public-v1-rvc-g_total]
</details>
### Vedal
<details>
<summary>Public-V1 RVC</summary>
[Final model][vedal-public-v1-rvc-final] | [All checkpoints][vedal-public-v1-rvc] ([Mirror][vedal-public-v1-rvc-mirror])
▼ Model information
The dataset totals 04:30:48 in length; training ran for 1000 epochs, with the model saved every 5 epochs.
▼ TensorBoard information
None for now ~~(it is not because I deleted them accidentally, I swear >\_<)~~
_Note: The Vedal Public-V1 model dataset is of poor quality, and I am currently looking for higher quality voices - it is recommended to use the harvest f0 predictor._
</details>
### Anny
<details>
<summary>Public-V1 RVC</summary>
[Final model][anny-public-v1-rvc-final] | [All checkpoints][anny-public-v1-rvc] ([Mirror][anny-public-v1-rvc-mirror])
▼ Model information
The dataset totals 04:16:00 in length; training ran for 1000 epochs (639,000 steps), with the model saved every 5 epochs.
▼ TensorBoard information
![d/total][anny-public-v1-rvc-d_total]![g/fm][anny-public-v1-rvc-g_fm]![g/kl][anny-public-v1-rvc-g_kl]![g/mel][anny-public-v1-rvc-g_mel]![g/total][anny-public-v1-rvc-g_total]
</details>
### Others
None for now
---
## Licence
<div align="center">
Except where otherwise noted, the contents of this repository are licenced under a<br><a href="https://creativecommons.org/licenses/by-nc-sa/4.0/"><img src="http://mirrors.creativecommons.org/presskit/buttons/88x31/png/by-nc-sa.png" alt="Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International Licence (CC BY-NC-SA 4.0)" width="88" height="31" /></a>
</div>
[neuro-public-v1-rvc-final]: RVC_Models/Public-V1/Neuro-sama/Neuro-sama.pth
[neuro-public-v1-rvc]: https://huggingface.co/YoMonNPC/Clone-Neuroverse/tree/main/RVC_Models/Public-V1/Neuro-sama
[neuro-public-v1-rvc-mirror]: https://hf-mirror.com/YoMonNPC/Clone-Neuroverse/tree/main/RVC_Models/Public-V1/Neuro-sama
[neuro-public-v1-rvc-d_total]: RVC_Models/Public-V1/Neuro-sama/loss/d_total.png
[neuro-public-v1-rvc-g_fm]: RVC_Models/Public-V1/Neuro-sama/loss/g_fm.png
[neuro-public-v1-rvc-g_kl]: RVC_Models/Public-V1/Neuro-sama/loss/g_kl.png
[neuro-public-v1-rvc-g_mel]: RVC_Models/Public-V1/Neuro-sama/loss/g_mel.png
[neuro-public-v1-rvc-g_total]: RVC_Models/Public-V1/Neuro-sama/loss/g_total.png
[evil-public-v1-rvc-final]: RVC_Models/Public-V1/Evil_Neuro/Evil_Neuro.pth
[evil-public-v1-rvc]: https://huggingface.co/YoMonNPC/Clone-Neuroverse/tree/main/RVC_Models/Public-V1/Evil_Neuro
[evil-public-v1-rvc-mirror]: https://hf-mirror.com/YoMonNPC/Clone-Neuroverse/tree/main/RVC_Models/Public-V1/Evil_Neuro
[evil-public-v1-rvc-d_total]: RVC_Models/Public-V1/Evil_Neuro/loss/d_total.png
[evil-public-v1-rvc-g_fm]: RVC_Models/Public-V1/Evil_Neuro/loss/g_fm.png
[evil-public-v1-rvc-g_kl]: RVC_Models/Public-V1/Evil_Neuro/loss/g_kl.png
[evil-public-v1-rvc-g_mel]: RVC_Models/Public-V1/Evil_Neuro/loss/g_mel.png
[evil-public-v1-rvc-g_total]: RVC_Models/Public-V1/Evil_Neuro/loss/g_total.png
[vedal-public-v1-rvc-final]: RVC_Models/Public-V1/Vedal/Vedal.pth
[vedal-public-v1-rvc]: https://huggingface.co/YoMonNPC/Clone-Neuroverse/tree/main/RVC_Models/Public-V1/Vedal
[vedal-public-v1-rvc-mirror]: https://hf-mirror.com/YoMonNPC/Clone-Neuroverse/tree/main/RVC_Models/Public-V1/Vedal
[anny-public-v1-rvc-final]: RVC_Models/Public-V1/Anny/Anny.pth
[anny-public-v1-rvc]: https://huggingface.co/YoMonNPC/Clone-Neuroverse/tree/main/RVC_Models/Public-V1/Anny
[anny-public-v1-rvc-mirror]: https://hf-mirror.com/YoMonNPC/Clone-Neuroverse/tree/main/RVC_Models/Public-V1/Anny
[anny-public-v1-rvc-d_total]: RVC_Models/Public-V1/Anny/loss/d_total.png
[anny-public-v1-rvc-g_fm]: RVC_Models/Public-V1/Anny/loss/g_fm.png
[anny-public-v1-rvc-g_kl]: RVC_Models/Public-V1/Anny/loss/g_kl.png
[anny-public-v1-rvc-g_mel]: RVC_Models/Public-V1/Anny/loss/g_mel.png
[anny-public-v1-rvc-g_total]: RVC_Models/Public-V1/Anny/loss/g_total.png |
InduwaraR/qwen-ai-research-qa-q4_k_m.gguf | InduwaraR | 2025-05-24T08:42:29Z | 17 | 2 | null | [
"gguf",
"question-answering",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"base_model:Qwen/Qwen2.5-3B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-3B-Instruct",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | question-answering | 2025-03-10T03:20:16Z | ---
license: mit
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
base_model:
- Qwen/Qwen2.5-3B-Instruct
pipeline_tag: question-answering
---
# Qwen AI Research QA Model (Q4_K_M GGUF)
## Model Overview
The **Qwen AI Research QA Model** is designed for answering research-oriented AI questions with a focus on precision and depth. This model is optimized in the `Q4_K_M` format for efficient inference while maintaining high-quality responses.
## How to Use
To use this model with `llama-cpp-python`, follow these steps:
### Installation
Make sure you have `llama-cpp-python` installed:
```bash
pip install llama-cpp-python
```
### Loading the Model
```python
from llama_cpp import Llama
llm = Llama.from_pretrained(
    repo_id="InduwaraR/qwen-ai-research-qa-q4_k_m.gguf",
    filename="qwen-ai-research-qa-q4_k_m.gguf",
)
```
### Generating a Response
```python
response = llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "What are the latest advancements in AI research?"}
    ]
)
print(response)
```
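To print only the generated text rather than the whole response object, index into the returned dict (the standard `llama-cpp-python` chat-completion response shape):
```python
print(response["choices"][0]["message"]["content"])
```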
## Model Details
- **Model Name**: Qwen AI Research QA
- **Format**: GGUF (Q4_K_M Quantization)
- **Primary Use Case**: AI research question answering
- **Inference Framework**: `llama-cpp-python`
- **Optimized for**: Running on local hardware with reduced memory usage
## License
This model is open-source and available under the **MIT License**.
## Acknowledgments
This model is hosted by **InduwaraR** on Hugging Face. Special thanks to the **Qwen AI team** for their contributions to AI research and development.
|
mehmet0001/ezcmt | mehmet0001 | 2025-05-24T08:41:31Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-24T08:41:25Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
MinaMila/llama_instbase_LoRa_ACSEmployment_2_cfda_ep8_22 | MinaMila | 2025-05-24T08:40:30Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-24T08:40:27Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
MinaMila/llama_instbase_3b_LoRa_ACSEmployment_2_cfda_ep9_22 | MinaMila | 2025-05-24T08:40:03Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-24T08:40:00Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
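This card does not yet document a loading recipe. A minimal, hypothetical sketch is given below, assuming a text-generation task (inferred from the repo name, not confirmed by this card).
```python
# Hypothetical sketch: "text-generation" is an assumption inferred from the repo name.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="MinaMila/llama_instbase_3b_LoRa_ACSEmployment_2_cfda_ep9_22",
)
print(generator("Hello!", max_new_tokens=20)[0]["generated_text"])
```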
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
smartmyapp/Ladji4-4 | smartmyapp | 2025-05-24T08:39:17Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"vits",
"text-to-audio",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | text-to-audio | 2025-05-23T21:14:09Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
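This card does not yet document a loading recipe. Based on the `vits` and `text-to-audio` tags in the repo metadata, a minimal sketch assuming the standard 🤗 Transformers VITS interface would look like this:
```python
# A sketch assuming this checkpoint follows the standard Transformers VITS API;
# the language and expected input text are not documented in this card.
import torch
from transformers import VitsModel, AutoTokenizer

repo_id = "smartmyapp/Ladji4-4"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = VitsModel.from_pretrained(repo_id)

inputs = tokenizer("Hello", return_tensors="pt")
with torch.no_grad():
    waveform = model(**inputs).waveform  # (batch, num_samples) audio tensor
```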
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
rakibulnahin/travel-chat-llama2-7b-lora-4bit-finetuned | rakibulnahin | 2025-05-24T08:39:10Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"base_model:adapter:meta-llama/Llama-2-7b-chat-hf",
"region:us"
] | null | 2025-05-24T08:39:02Z | ---
base_model: meta-llama/Llama-2-7b-chat-hf
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
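This card does not yet document a loading recipe. Since the metadata declares `meta-llama/Llama-2-7b-chat-hf` as the base model and `peft` as the library, a minimal sketch of the usual PEFT adapter-loading pattern is shown below (the base model is gated on the Hub, so access must be granted first).
```python
# A sketch of the standard PEFT pattern: load the declared base model, then
# attach this repository as a LoRA adapter on top of it.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Llama-2-7b-chat-hf"
adapter_id = "rakibulnahin/travel-chat-llama2-7b-lora-4bit-finetuned"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base_model, adapter_id)
```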
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.14.0 |
robertou2/task-9-microsoft-Phi-3.5-mini-instruct | robertou2 | 2025-05-24T08:38:11Z | 1,393 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:microsoft/Phi-3.5-mini-instruct",
"base_model:adapter:microsoft/Phi-3.5-mini-instruct",
"region:us"
] | null | 2025-05-13T16:57:03Z | ---
base_model: microsoft/Phi-3.5-mini-instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
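This card does not yet document a loading recipe. The metadata declares `microsoft/Phi-3.5-mini-instruct` as the base model and `peft` as the library, so a minimal sketch of the usual adapter-loading pattern would be:
```python
# A sketch of the standard PEFT pattern for this card's declared base model.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "microsoft/Phi-3.5-mini-instruct"
adapter_id = "robertou2/task-9-microsoft-Phi-3.5-mini-instruct"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = PeftModel.from_pretrained(
    AutoModelForCausalLM.from_pretrained(base_id), adapter_id
)
```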
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.13.2 |
VIDEO-18-Katrina-Lim-Viral-Kiffy-VIDEOS/FULL.VIDEO.LINK.Katrina.Lim.Viral.Video.Leaks.Official | VIDEO-18-Katrina-Lim-Viral-Kiffy-VIDEOS | 2025-05-24T08:37:24Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-24T08:37:02Z | <animated-image data-catalyst=""><a href="https://tinyurl.com/fn84hrnu?news-viral-video" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a> |
khangnguyen1287/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-mammalian_rugged_caterpillar | khangnguyen1287 | 2025-05-24T08:34:12Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am mammalian rugged caterpillar",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-24T06:52:38Z | ---
base_model: Gensyn/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-mammalian_rugged_caterpillar
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am mammalian rugged caterpillar
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-mammalian_rugged_caterpillar
This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="khangnguyen1287/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-mammalian_rugged_caterpillar", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
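For reference, a generic TRL GRPO training loop looks like the sketch below; this is the TRL quickstart pattern with this card's base model substituted in, not the actual swarm training setup, and the dataset and reward function are toy placeholders.
```python
# Generic GRPOTrainer sketch (TRL >= 0.14); dataset and reward are placeholders.
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

dataset = load_dataset("trl-lib/tldr", split="train")

def reward_num_unique_chars(completions, **kwargs):
    # Toy reward: favor completions with many distinct characters.
    return [len(set(c)) for c in completions]

trainer = GRPOTrainer(
    model="Gensyn/Qwen2.5-1.5B-Instruct",
    reward_funcs=reward_num_unique_chars,
    args=GRPOConfig(output_dir="qwen-grpo-sketch"),
    train_dataset=dataset,
)
trainer.train()
```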
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.6.0
- Datasets: 3.5.1
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
rachitv/llama-3.2-3B-2.0 | rachitv | 2025-05-24T08:32:34Z | 0 | 0 | null | [
"gguf",
"llama",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-24T08:03:41Z | ---
license: apache-2.0
---
|
kitten-kitkat/unsloth-qwen14b | kitten-kitkat | 2025-05-24T08:30:03Z | 0 | 0 | null | [
"safetensors",
"qwen3",
"unsloth",
"license:mit",
"region:us"
] | null | 2025-05-24T08:01:15Z | ---
license: mit
tags:
- unsloth
---
|
Nourix44/Nourix232224 | Nourix44 | 2025-05-24T08:29:37Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-24T08:27:10Z | Nourix is a premium plant-based supplement designed to support natural weight management and overall well-being. Made for those seeking a balanced approach to health, it combines scientifically backed ingredients that boost metabolism, suppress appetite, increase energy, and support detoxification.
##**[Click here to order on the official Nourix website](https://nourixfrance.com/)**
## Nourix: More Than a Pill
At its core, Nourix is a plant-based supplement designed to support weight management by boosting metabolism, suppressing appetite, and improving energy levels. What sets it apart is its branding as a holistic lifestyle choice rather than a quick fix. The brand is marketed through sleek websites that speak to the modern consumer's desire for authenticity, sustainability, and self-care. Its vegan, gluten-free, non-GMO formula resonates with a generation that prioritizes healthy living.
Nourix positions itself as a partner in a broader health journey, encouraging users to embrace mindful eating, joyful exercise, and mental well-being. This philosophy has made it a cultural reference point, particularly in France, where wellness trends often collide with culinary traditions and aesthetic sensibilities.
## Ingredients: A Blend of Nature and Innovation
The Nourix formula is a love letter to nature, combining ancient botanicals with modern nutritional science. Each ingredient is chosen not only for its effectiveness but also for its cultural resonance, evoking a sense of heritage and trust. Here is a closer look at its key components:
Green tea extract: A tribute to ancient Asian health practices. The catechins in green tea trigger thermogenesis and help burn calories. Its antioxidant properties align with the French preference for longevity and radiance.
Berberine: Extracted from barberry, berberine is part of a global shift toward metabolic health and appeals to those wary of sugar-driven weight gain.
Ginger: A staple of French cooking and herbal medicine. Ginger's warming effect boosts metabolism and aids digestion, offering users familiar flavors.
Cinnamon: Cinnamon brings a warm, comforting note, curbs cravings, and stabilizes glucose levels, making it a bridge between pleasure and discipline.
Apple cider vinegar: This darling of wellness influencers is an appetite-curbing ingredient that taps into the "functional foods" trend on social media.
Cayenne pepper: Cayenne's thermogenic properties add a spicy kick and suit a bold, adventurous lifestyle, appealing to those who seek intensity.
Milk thistle: Milk thistle is rooted in European herbal medicine. It supports liver health and belongs to the detox craze that dominates health culture.
These ingredients come as two daily capsules, taken with water during a meal. The product's moderate caffeine content (30 mg per serving) provides a gentle energy boost and avoids the overstimulation typical of competing products.
## The Cultural Impact of Nourix
Nourix has outgrown its role as a dietary supplement to become a cultural phenomenon, particularly in France, where it fits seamlessly into wellness and lifestyle trends. Here is how it has made a splash:
Social media and influencer culture: On platforms like Instagram and X, Nourix is a favorite hashtag, with users sharing aesthetic shots of their capsules alongside smoothie bowls and yoga mats. Influencers, from Parisian fitness gurus to Provençal holistic coaches, feature Nourix in their "chic wellness routines," reinforcing its appeal.
##**[Click here to order on the official Nourix website](https://nourixfrance.com/)**
Community building: The brand fosters a sense of belonging through online forums and social media groups where users share recipes, workout tips, and success stories. This community-driven approach echoes the French tradition of communal meals, updated for the digital age.
Body positivity and realism: Unlike aggressive weight-loss brands, Nourix tells a balanced story that puts health before perfection. Its marketing highlights diverse body types and stories that resonate with the global shift toward inclusive wellness.
Pop culture buzz: Rumors of Nourix appearing in French TV shows such as M6's 66 Minutes, though unconfirmed, have fueled its mystique and positioned it as an "in the know" name among tastemakers.
This cultural resonance has made Nourix a lifestyle brand on a par with carrying a reusable bag or sipping oat milk. It is not just about losing weight; it is about a mindful, vibrant way of living.
## The Road Ahead: The Future of Nourix
As Nourix grows, its potential lies in deepening its cultural roots and addressing trust concerns. Possible next steps include:
Greater transparency: Publishing clear purchasing information, lab certificates, or a physical address could silence skeptics.
Community growth: Hosting fitness events or partnering with French gyms could take its digital community offline.
Innovation: Introducing new formats, such as powders or chewing gum, could attract younger users.
Global push: Expanding beyond France with localized marketing could open up markets such as the United States or Asia.
## Final Thoughts
Nourix is more than a weight-management supplement: it is a cultural movement blending science, style, and community. Its natural formula, built on ingredients such as green tea and berberine, offers a practical tool for those pursuing a healthier lifestyle. Its cultural influence, from Instagram aesthetics to user-driven forums, makes it a beacon of modern wellness.
##**[Click here to order on the official Nourix website](https://nourixfrance.com/)**
|
gghfez/Electra_Elorablate_Lora_v0.1-F16-GGUF | gghfez | 2025-05-24T08:29:02Z | 0 | 0 | peft | [
"peft",
"gguf",
"llama-cpp",
"gguf-my-lora",
"base_model:e-n-v-y/Electra_Elorablate_Lora_v0.1",
"base_model:adapter:e-n-v-y/Electra_Elorablate_Lora_v0.1",
"region:us"
] | null | 2025-05-24T08:29:00Z | ---
base_model: e-n-v-y/Electra_Elorablate_Lora_v0.1
library_name: peft
tags:
- llama-cpp
- gguf-my-lora
---
# gghfez/Electra_Elorablate_Lora_v0.1-F16-GGUF
This LoRA adapter was converted to GGUF format from [`e-n-v-y/Electra_Elorablate_Lora_v0.1`](https://huggingface.co/e-n-v-y/Electra_Elorablate_Lora_v0.1) via ggml.ai's [GGUF-my-lora](https://huggingface.co/spaces/ggml-org/gguf-my-lora) space.
Refer to the [original adapter repository](https://huggingface.co/e-n-v-y/Electra_Elorablate_Lora_v0.1) for more details.
## Use with llama.cpp
```bash
# with cli
llama-cli -m base_model.gguf --lora Electra_Elorablate_Lora_v0.1-f16.gguf (...other args)
# with server
llama-server -m base_model.gguf --lora Electra_Elorablate_Lora_v0.1-f16.gguf (...other args)
```
To know more about LoRA usage with llama.cpp server, refer to the [llama.cpp server documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/server/README.md).
|
tim1900/bert-chunker-3 | tim1900 | 2025-05-24T08:25:29Z | 1,273 | 0 | null | [
"safetensors",
"bert",
"token-classification",
"en",
"zh",
"license:mit",
"region:us"
] | token-classification | 2025-02-09T08:26:51Z | ---
license: mit
language:
- en
- zh
pipeline_tag: token-classification
---
# bert-chunker-3
[GitHub](https://github.com/jackfsuia/bert-chunker/tree/main/bc3)
bert-chunker-3 is a text chunker based on BertForTokenClassification that predicts the start token of each chunk (for use in RAG, etc.); using a sliding window, it cuts documents of any size into chunks. We see it as an alternative to the [Kamradt semantic chunker](https://github.com/FullStackRetrieval-com/RetrievalTutorials/blob/main/tutorials/LevelsOfTextSplitting/5_Levels_Of_Text_Splitting.ipynb), but notably it works not only on structured texts but also on **unstructured and messy texts**.
Unlike [bc-2](https://huggingface.co/tim1900/bert-chunker-2) and [bc](https://huggingface.co/tim1900/bert-chunker), to overcome data distribution shift, our training data were labeled by an LLM and the training pipeline was improved, so it is **more stable**, with **competitive** [**performance**](#evaluation).
Updates:
- 2025.5.12: an experimental script that **supports specifying the maximum tokens per chunk** is now available [below](#experimental); evaluation results are in [**Evaluation**](#evaluation).
## Usage
Run the following:
```python
import torch
from transformers import AutoTokenizer, BertForTokenClassification
import math
model_path = "tim1900/bert-chunker-3"
tokenizer = AutoTokenizer.from_pretrained(
model_path,
padding_side="right",
model_max_length=255,
trust_remote_code=True,
)
device = "cpu" # or 'cuda'
model = BertForTokenClassification.from_pretrained(
model_path,
).to(device)
def chunk_text(model, text, tokenizer, prob_threshold=0.5):
# slide context window chunking
MAX_TOKENS = 255
tokens = tokenizer(text, return_tensors="pt", truncation=False)
input_ids = tokens["input_ids"]
attention_mask = tokens["attention_mask"][:, 0:MAX_TOKENS]
attention_mask = attention_mask.to(model.device)
CLS = input_ids[:, 0].unsqueeze(0)
SEP = input_ids[:, -1].unsqueeze(0)
input_ids = input_ids[:, 1:-1]
model.eval()
split_str_poses = []
token_pos = []
windows_start = 0
windows_end = 0
logits_threshold = math.log(1 / prob_threshold - 1)
print(f"Processing {input_ids.shape[1]} tokens...")
while windows_end <= input_ids.shape[1]:
windows_end = windows_start + MAX_TOKENS - 2
ids = torch.cat((CLS, input_ids[:, windows_start:windows_end], SEP), 1)
ids = ids.to(model.device)
output = model(
input_ids=ids,
attention_mask=torch.ones(1, ids.shape[1], device=model.device),
)
logits = output["logits"][:, 1:-1, :]
chunk_decision = logits[:, :, 1] > (logits[:, :, 0] - logits_threshold)
greater_rows_indices = torch.where(chunk_decision)[1].tolist()
# null or not
if len(greater_rows_indices) > 0 and (
not (greater_rows_indices[0] == 0 and len(greater_rows_indices) == 1)
):
split_str_pos = [
tokens.token_to_chars(sp + windows_start + 1).start
for sp in greater_rows_indices
if sp > 0
]
token_pos += [
sp + windows_start for sp in greater_rows_indices if sp > 0
]
split_str_poses += split_str_pos
windows_start = greater_rows_indices[-1] + windows_start
else:
windows_start = windows_end
substrings = [
text[i:j] for i, j in zip([0] + split_str_poses, split_str_poses + [len(text)])
]
token_pos = [0] + token_pos
return substrings, token_pos
# chunking code docs
print("\n>>>>>>>>> Chunking code docs...")
doc = r"""
Of course, as our first example shows, it is not always _necessary_ to declare an expression holder before it is created or used. But doing so provides an extra measure of clarity to models, so we strongly recommend it.
## Chapter 4 The Basics
## Chapter 5 The DCP Ruleset
### 5.1 A taxonomy of curvature
In disciplined convex programming, a scalar expression is classified by its _curvature_. There are four categories of curvature: _constant_, _affine_, _convex_, and _concave_. For a function \(f:\mathbf{R}^{n}\rightarrow\mathbf{R}\) defined on all \(\mathbf{R}^{n}\)the categories have the following meanings:
\[\begin{array}{llll}\text{constant}&f(\alpha x+(1-\alpha)y)=f(x)&\forall x,y\in \mathbf{R}^{n},\;\alpha\in\mathbf{R}\\ \text{affine}&f(\alpha x+(1-\alpha)y)=\alpha f(x)+(1-\alpha)f(y)&\forall x,y\in \mathbf{R}^{n},\;\alpha\in\mathbf{R}\\ \text{convex}&f(\alpha x+(1-\alpha)y)\leq\alpha f(x)+(1-\alpha)f(y)&\forall x,y \in\mathbf{R}^{n},\;\alpha\in[0,1]\\ \text{concave}&f(\alpha x+(1-\alpha)y)\geq\alpha f(x)+(1-\alpha)f(y)&\forall x,y \in\mathbf{R}^{n},\;\alpha\in[0,1]\end{array}\]
Of course, there is significant overlap in these categories. For example, constant expressions are also affine, and (real) affine expressions are both convex and concave.
Convex and concave expressions are real by definition. Complex constant and affine expressions can be constructed, but their usage is more limited; for example, they cannot appear as the left- or right-hand side of an inequality constraint.
### Top-level rules
CVX supports three different types of disciplined convex programs:
* A _minimization problem_, consisting of a convex objective function and zero or more constraints.
* A _maximization problem_, consisting of a concave objective function and zero or more constraints.
* A _feasibility problem_, consisting of one or more constraints and no objective.
### Constraints
Three types of constraints may be specified in disciplined convex programs:
* An _equality constraint_, constructed using \(==\), where both sides are affine.
* A _less-than inequality constraint_, using \(<=\), where the left side is convex and the right side is concave.
* A _greater-than inequality constraint_, using \(>=\), where the left side is concave and the right side is convex.
_Non_-equality constraints, constructed using \(\sim=\), are never allowed. (Such constraints are not convex.)
One or both sides of an equality constraint may be complex; inequality constraints, on the other hand, must be real. A complex equality constraint is equivalent to two real equality constraints, one for the real part and one for the imaginary part. An equality constraint with a real side and a complex side has the effect of constraining the imaginary part of the complex side to be zero."""
# Chunk the text. The prob_threshold should be between (0, 1). The lower it is, the more chunks will be generated.
# Therefore adjust it to your needs: when prob_threshold is very small, e.g. 0.000001, each token becomes one chunk,
# and when it is set to 1, the whole text will be one chunk.
chunks, token_pos = chunk_text(model, doc, tokenizer, prob_threshold=0.5)
# print chunks
for i, (c, t) in enumerate(zip(chunks, token_pos)):
print(f"-----chunk: {i}----token_idx: {t}--------")
print(c)
# chunking ads
print("\n>>>>>>>>> Chunking ads...")
ad = r"""The causes and effects of dropouts in vocational and professional education are more pressing than ever. A decreasing attractiveness of vocational education, particularly in payment and quality, causes higher dropout rates while hitting ongoing demographic changes resulting in extensive skill shortages for many regions. Therefore, tackling the internationally high dropout rates is of utmost political and scientific interest. This thematic issue contributes to the conceptualization, analysis, and prevention of vocational and professional dropouts by bringing together current research that progresses to a deeper processual understanding and empirical modelling of dropouts. It aims to expand our understanding of how dropout and decision processes leading to dropout can be conceptualized and measured in vocational and professional contexts. Another aim is to gather empirical studies on both predictors and dropout consequences. Based on this knowledge, the thematic issue intends to provide evidence of effective interventions to avoid dropouts and identify promising ways for future dropout research in professional and vocational education to support evidence-based vocational education policy.
We thus welcome research contributions (original empirical and conceptual/measurement-related articles, literature reviews, meta-analyses) on dropouts (e.g., premature terminations, intentions to terminate, vertical and horizontal dropouts) that are situated in vocational and professional education at workplaces, schools, or other tertiary professional education institutions.
Part 1 of the thematic series outlines central theories and measurement concepts for vocational and professional dropouts. Part 2 outlines measurement approaches for dropout. Part 3 investigates relevant predictors of dropout. Part 4 analyzes the effects of dropout on an individual, organizational, and systemic level. Part 5 deals with programs and interventions for the prevention of dropouts.
We welcome papers that include but are not limited to:
Theoretical papers on the concept and processes of vocational and professional dropout or retention
Measurement approaches to assess dropout or retention
Quantitative and qualitative papers on the causes of dropout or retention
Quantitative and qualitative papers on the effects of dropout or retention on learners, providers/organizations and the (educational) system
Design-based research and experimental papers on dropout prevention programs or retention
Submission instructions
Before submitting your manuscript, please ensure you have carefully read the Instructions for Authors for Empirical Research in Vocational Education and Training. The complete manuscript should be submitted through the Empirical Research in Vocational Education and Training submission system. To ensure that you submit to the correct thematic series please select the appropriate section in the drop-down menu upon submission. In addition, indicate within your cover letter that you wish your manuscript to be considered as part of the thematic series on series title. All submissions will undergo rigorous peer review, and accepted articles will be published within the journal as a collection.
Lead Guest Editor:
Prof. Dr. Viola Deutscher, University of Mannheim
[email protected]
Guest Editors:
Prof. Dr. Stefanie Findeisen, University of Konstanz
[email protected]
Prof. Dr. Christian Michaelis, Georg-August-University of Göttingen
[email protected]
Deadline for submission
This Call for Papers is open from now until 29 February 2023. Submitted papers will be reviewed in a timely manner and published directly after acceptance (i.e., without waiting for the accomplishment of all other contributions). Thanks to the Empirical Research in Vocational Education and Training (ERVET) open access policy, the articles published in this thematic issue will have a wide, global audience.
Option of submitting abstracts: Interested authors should submit a letter of intent including a working title for the manuscript, names, affiliations, and contact information for all authors, and an abstract of no more than 500 words to the lead guest editor Viola Deutscher ([email protected]) by July, 31st 2023. Due to technical issues, we also ask authors who already submitted an abstract before May, 30th to send their abstracts again to the address stated above. However, abstract submission is optional and is not mandatory for the full paper submission.
Different dropout directions in vocational education and training: the role of the initiating party and trainees' reasons for dropping out
The high rates of premature contract termination (PCT) in vocational education and training (VET) programs have led to an increasing number of studies examining the reasons why adolescents drop out. Since adol...
Authors:Christian Michaelis and Stefanie Findeisen
Citation:Empirical Research in Vocational Education and Training 2024 16:15
Content type:Research
Published on: 6 August 2024"
"""
# Chunk the text. The prob_threshold should be between (0, 1). The lower it is, the more chunks will be generated.
# Therefore adjust it to your needs: when prob_threshold is very small, e.g. 0.000001, each token becomes one chunk,
# and when it is set to 1, the whole text will be one chunk.
chunks, token_pos = chunk_text(model, ad, tokenizer, prob_threshold=0.5)
# print chunks
for i, (c, t) in enumerate(zip(chunks, token_pos)):
print(f"-----chunk: {i}----token_idx: {t}--------")
print(c)
```
## Experimental
The following script supports specifying the maximum number of tokens per chunk. The chunker is forced to pick the best position seen so far when a chunk is about to exceed `max_tokens_per_chunk` and no token satisfies `prob_threshold`. It can be seen as a new experimental version of the script above.
```python
import torch
from transformers import AutoTokenizer, BertForTokenClassification
import math
model_path = "tim1900/bert-chunker-3"
tokenizer = AutoTokenizer.from_pretrained(
model_path,
padding_side="right",
model_max_length=255,
trust_remote_code=True,
)
device = "cpu" # or 'cuda'
model = BertForTokenClassification.from_pretrained(
model_path,
).to(device)
def chunk_text_with_max_chunk_size(model, text, tokenizer, prob_threshold=0.5,max_tokens_per_chunk = 400):
with torch.no_grad():
# slide context window chunking
MAX_TOKENS = 255
tokens = tokenizer(text, return_tensors="pt", truncation=False)
input_ids = tokens["input_ids"]
attention_mask = tokens["attention_mask"][:, 0:MAX_TOKENS]
attention_mask = attention_mask.to(model.device)
CLS = input_ids[:, 0].unsqueeze(0)
SEP = input_ids[:, -1].unsqueeze(0)
input_ids = input_ids[:, 1:-1]
model.eval()
split_str_poses = []
token_pos = []
windows_start = 0
windows_end = 0
logits_threshold = math.log(1 / prob_threshold - 1)
unchunk_tokens = 0
backup_pos = None
best_logits = torch.finfo(torch.float32).min
STEP = round(((MAX_TOKENS - 2)//2)*1.75 )
print(f"Processing {input_ids.shape[1]} tokens...")
while windows_start < input_ids.shape[1]:  # slide the window by its start position
windows_end = windows_start + MAX_TOKENS - 2
ids = torch.cat((CLS, input_ids[:, windows_start:windows_end], SEP), 1)
ids = ids.to(model.device)
output = model(
input_ids=ids,
attention_mask=torch.ones(1, ids.shape[1], device=model.device),
)
logits = output["logits"][:, 1:-1, :]
logit_diff = logits[:, :, 1] - logits[:, :, 0]
chunk_decision = logit_diff > - logits_threshold
greater_rows_indices = torch.where(chunk_decision)[1].tolist()
# null or not
if len(greater_rows_indices) > 0 and (
not (greater_rows_indices[0] == 0 and len(greater_rows_indices) == 1)
):
unchunk_tokens_this_window = greater_rows_indices[0] if greater_rows_indices[0]!=0 else greater_rows_indices[1]  # exclude the first index
# manually chunk
if unchunk_tokens + unchunk_tokens_this_window > max_tokens_per_chunk:
big_windows_end = max_tokens_per_chunk - unchunk_tokens
max_value, max_index= logit_diff[:,1:big_windows_end].max(), logit_diff[:,1:big_windows_end].argmax() + 1
if best_logits < max_value:
backup_pos = windows_start + max_index
windows_start = backup_pos
split_str_pos = [tokens.token_to_chars(backup_pos + 1).start]
split_str_poses = split_str_poses + split_str_pos
token_pos = token_pos + [backup_pos]
best_logits = torch.finfo(torch.float32).min
backup_pos = -1
unchunk_tokens = 0
# auto chunk
else:
if len(greater_rows_indices) >= 2:
for gi, (gri0,gri1) in enumerate(zip(greater_rows_indices[:-1],greater_rows_indices[1:])):
if gri1 - gri0 > max_tokens_per_chunk:
greater_rows_indices=greater_rows_indices[:gi+1]
break
split_str_pos = [tokens.token_to_chars(sp + windows_start + 1).start for sp in greater_rows_indices if sp > 0]
split_str_poses = split_str_poses + split_str_pos
token_pos = token_pos+ [sp + windows_start for sp in greater_rows_indices if sp > 0]
windows_start = greater_rows_indices[-1] + windows_start
best_logits = torch.finfo(torch.float32).min
backup_pos = -1
unchunk_tokens = 0
else:
unchunk_tokens_this_window = min(windows_start+STEP,input_ids.shape[1]) - windows_start
# manually chunk
if unchunk_tokens + unchunk_tokens_this_window > max_tokens_per_chunk:
big_windows_end = max_tokens_per_chunk - unchunk_tokens
if logit_diff.shape[1] > 1:
max_value, max_index= logit_diff[:,1:big_windows_end].max(), logit_diff[:,1:big_windows_end].argmax() + 1
if best_logits < max_value:
backup_pos = windows_start + max_index
windows_start = backup_pos
split_str_pos = [tokens.token_to_chars(backup_pos + 1).start]
split_str_poses = split_str_poses + split_str_pos
token_pos = token_pos + [backup_pos]
best_logits = torch.finfo(torch.float32).min
backup_pos = -1
unchunk_tokens = 0
else:
# auto leave
if logit_diff.shape[1] > 1:
max_value, max_index= logit_diff[:,1:].max(), logit_diff[:,1:].argmax() + 1
if best_logits < max_value:
best_logits = max_value
backup_pos = windows_start + max_index
unchunk_tokens = unchunk_tokens + STEP
windows_start = windows_start + STEP
substrings = [
text[i:j] for i, j in zip([0] + split_str_poses, split_str_poses + [len(text)])
]
token_pos = [0] + token_pos
return substrings, token_pos
# chunking ads
print("\n>>>>>>>>> Chunking ads...")
ad = r"""The causes and effects of dropouts in vocational and professional education are more pressing than ever. A decreasing attractiveness of vocational education, particularly in payment and quality, causes higher dropout rates while hitting ongoing demographic changes resulting in extensive skill shortages for many regions. Therefore, tackling the internationally high dropout rates is of utmost political and scientific interest. This thematic issue contributes to the conceptualization, analysis, and prevention of vocational and professional dropouts by bringing together current research that progresses to a deeper processual understanding and empirical modelling of dropouts. It aims to expand our understanding of how dropout and decision processes leading to dropout can be conceptualized and measured in vocational and professional contexts. Another aim is to gather empirical studies on both predictors and dropout consequences. Based on this knowledge, the thematic issue intends to provide evidence of effective interventions to avoid dropouts and identify promising ways for future dropout research in professional and vocational education to support evidence-based vocational education policy.
We thus welcome research contributions (original empirical and conceptual/measurement-related articles, literature reviews, meta-analyses) on dropouts (e.g., premature terminations, intentions to terminate, vertical and horizontal dropouts) that are situated in vocational and professional education at workplaces, schools, or other tertiary professional education institutions.
Part 1 of the thematic series outlines central theories and measurement concepts for vocational and professional dropouts. Part 2 outlines measurement approaches for dropout. Part 3 investigates relevant predictors of dropout. Part 4 analyzes the effects of dropout on an individual, organizational, and systemic level. Part 5 deals with programs and interventions for the prevention of dropouts.
We welcome papers that include but are not limited to:
Theoretical papers on the concept and processes of vocational and professional dropout or retention
Measurement approaches to assess dropout or retention
Quantitative and qualitative papers on the causes of dropout or retention
Quantitative and qualitative papers on the effects of dropout or retention on learners, providers/organizations and the (educational) system
Design-based research and experimental papers on dropout prevention programs or retention
Submission instructions
Before submitting your manuscript, please ensure you have carefully read the Instructions for Authors for Empirical Research in Vocational Education and Training. The complete manuscript should be submitted through the Empirical Research in Vocational Education and Training submission system. To ensure that you submit to the correct thematic series please select the appropriate section in the drop-down menu upon submission. In addition, indicate within your cover letter that you wish your manuscript to be considered as part of the thematic series on series title. All submissions will undergo rigorous peer review, and accepted articles will be published within the journal as a collection.
Lead Guest Editor:
Prof. Dr. Viola Deutscher, University of Mannheim
[email protected]
Guest Editors:
Prof. Dr. Stefanie Findeisen, University of Konstanz
[email protected]
Prof. Dr. Christian Michaelis, Georg-August-University of Göttingen
[email protected]
Deadline for submission
This Call for Papers is open from now until 29 February 2023. Submitted papers will be reviewed in a timely manner and published directly after acceptance (i.e., without waiting for the accomplishment of all other contributions). Thanks to the Empirical Research in Vocational Education and Training (ERVET) open access policy, the articles published in this thematic issue will have a wide, global audience.
Option of submitting abstracts: Interested authors should submit a letter of intent including a working title for the manuscript, names, affiliations, and contact information for all authors, and an abstract of no more than 500 words to the lead guest editor Viola Deutscher ([email protected]) by July, 31st 2023. Due to technical issues, we also ask authors who already submitted an abstract before May, 30th to send their abstracts again to the address stated above. However, abstract submission is optional and is not mandatory for the full paper submission.
Different dropout directions in vocational education and training: the role of the initiating party and trainees' reasons for dropping out
The high rates of premature contract termination (PCT) in vocational education and training (VET) programs have led to an increasing number of studies examining the reasons why adolescents drop out. Since adol...
Authors:Christian Michaelis and Stefanie Findeisen
Citation:Empirical Research in Vocational Education and Training 2024 16:15
Content type:Research
Published on: 6 August 2024"
"""
# Chunk the text. prob_threshold must lie in (0, 1): the lower it is, the more chunks are generated,
# so adjust it to your needs. With a very small value such as 0.000001 almost every token becomes its
# own chunk, while with a value of 1 the whole text tends to stay as a single chunk. In either case, a
# chunk boundary is forced at the best available position whenever a chunk is about to exceed
# max_tokens_per_chunk and no token satisfies the prob_threshold.
chunks, token_pos = chunk_text_with_max_chunk_size(model, ad, tokenizer, prob_threshold=0.5, max_tokens_per_chunk=400)
# print chunks
for i, (c, t) in enumerate(zip(chunks, token_pos)):
print(f"-----chunk: {i}----token_idx: {t}--------")
print(c)
```
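For intuition, here is a quick sweep over `prob_threshold` (a sketch reusing the same helper, model, tokenizer, and `ad` text as above) that makes the chunk-count trade-off visible:
```python
# Sketch: the same call as above, swept over prob_threshold.
# Lower thresholds split more aggressively; higher thresholds keep text together.
for p in (0.1, 0.5, 0.9):
    chunks, _ = chunk_text_with_max_chunk_size(model, ad, tokenizer,
                                               prob_threshold=p,
                                               max_tokens_per_chunk=400)
    print(f"prob_threshold={p}: {len(chunks)} chunks")
```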
## Evaluation
The following RAG evaluation uses code from [brandonstarxel/chunking_evaluation](https://github.com/brandonstarxel/chunking_evaluation); most of the results below come from [Evaluating Chunking Strategies for Retrieval](https://research.trychroma.com/evaluating-chunking).
| Chunking | Size | Overlap | Recall | Precision | PrecisionΩ | IoU | Time complexity by token number N | Is max chunk size strictly controllable |
|---------|---------|---------|---------|---------|---------|---------|---------|---------|
| Recursive | <= 800 | 400 | 85.4 ± 34.9 | 1.5 ± 1.3 | 6.7 ± 5.2 | 1.5 ± 1.3 | **O(N)** | **Yes** |
| TokenText | 800 | 400 | 87.9 ± 31.7 | 1.4 ± 1.1 | 4.7 ± 3.1 | 1.4 ± 1.1 | **O(N)** | **Yes** |
| Recursive | <= 400 | 200 | 88.1 ± 31.6 | 3.3 ± 2.7 | 13.9 ± 10.4 | 3.3 ± 2.7 | **O(N)** | **Yes** |
| TokenText | 400 | 200 | 88.6 ± 29.7 | 2.7 ± 2.2 | 8.4 ± 5.1 | 2.7 ± 2.2 | **O(N)** | **Yes** |
| Recursive | <= 400 | 0 | 89.5 ± 29.7 | 3.6 ± 3.2 | 17.7 ± 14.0 | 3.6 ± 3.2 | **O(N)** | **Yes** |
| TokenText | 400 | 0 | 89.2 ± 29.2 | 2.7 ± 2.2 | 12.5 ± 8.1 | 2.7 ± 2.2 | **O(N)** | **Yes** |
| Recursive | <= 200 | 0 | 88.1 ± 30.1 | 7.0 ± 5.6 | 29.9 ± 18.4 | 6.9 ± 5.6 | **O(N)** | **Yes** |
| TokenText | 200 | 0 | 87.0 ± 30.8 | 5.2 ± 4.1 | 21.0 ± 11.9 | 5.1 ± 4.1 | **O(N)** | **Yes** |
| Kamradt | N/A (~660) | 0 | 83.6 ± 36.8 | 1.5 ± 1.6 | 7.4 ± 10.2 | 1.5 ± 1.6 | **O(N)** | No |
| KamradtMod | <= 300 | 0 | 87.1 ± 31.9 | 2.1 ± 2.0 | 10.5 ± 12.3 | 2.1 ± 2.0 | **O(N)** | **Yes** |
| Cluster | 400 (~182) | 0 | 91.3 ± 25.4 | 4.5 ± 3.4 | 20.7 ± 14.5 | 4.5 ± 3.4 | O(N<sup>2</sup>) | No |
| Cluster | 200 (~103) | 0 | 87.3 ± 29.8 | **8.0 ± 6.0** | **34.0 ± 19.7** | **8.0 ± 6.0** | O(N<sup>2</sup>) | No |
| LLM (GPT4o) | N/A (~240) | 0 | **91.9 ± 26.5** | 3.9 ± 3.2 | 19.9 ± 16.3 | 3.9 ± 3.2 | O(N<sup>2</sup>) | No |
| semchunk | <= 400 | 0 | 90.0 ± 29.1 | 3.6 ± 2.8 | 17.3 ± 12.6 | 3.6 ± 2.8 | **O(N)** | **Yes** |
| semchunk | <= 200 | 0 | 89.3 ± 28.7 | 6.8 ± 5.2 | 28.9 ± 17.1 | 6.7 ± 5.1 | **O(N)** | **Yes** |
| ✅ bert-chunker-3 (experimental, prob_threshold=0.50543) | <= 400 | 0 | 91.3 ± 26.6 | 5.4 ± 4.7 | 23.1 ± 17.6 | 5.4 ± 4.7 | **O(N)** | **Yes** |
| ✅ bert-chunker-3 (experimental, prob_threshold=0.50543) | <= 200 | 0 | 89.7 ± 27.9 | 7.6 ± 6.0 | 30.9 ± 19.1 | 7.7 ± 5.8 | **O(N)** | **Yes** |
| ✅ bert-chunker-3 (prob_threshold=0.50543) | N/A | 0 | 90.4 ± 28.7 | 3.3 ± 3.1 | 16.0 ± 17.0 | 3.3 ± 3.1 | **O(N)** | No |
## Citation
```bibtex
@article{bert-chunker,
title={bert-chunker: Efficient and Trained Chunking for Unstructured Documents},
author={Yannan Luo},
year={2024},
url={https://github.com/jackfsuia/bert-chunker}
}
```
The base model is [all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2). |
tscstudios/qymi0imrdzzj3ryhdijwarixgri1_9105ff9d-108f-49f1-8359-502893e0ce23 | tscstudios | 2025-05-24T08:23:50Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-05-24T08:23:49Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: TOK
---
# Qymi0Imrdzzj3Ryhdijwarixgri1_9105Ff9D 108F 49F1 8359 502893E0Ce23
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `TOK` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "TOK",
"lora_weights": "https://huggingface.co/tscstudios/qymi0imrdzzj3ryhdijwarixgri1_9105ff9d-108f-49f1-8359-502893e0ce23/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('tscstudios/qymi0imrdzzj3ryhdijwarixgri1_9105ff9d-108f-49f1-8359-502893e0ce23', weight_name='lora.safetensors')
image = pipeline('TOK').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
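As a small optional follow-up (a sketch assuming a recent diffusers release that exposes `fuse_lora`/`unfuse_lora`), the loaded adapter can also be fused into the base weights so inference runs without LoRA hooks:
```py
# Optional sketch: fuse the loaded LoRA into the base weights.
pipeline.fuse_lora(lora_scale=1.0)  # bake the adapter into the model weights
image = pipeline('TOK').images[0]   # inference now runs without LoRA overhead
pipeline.unfuse_lora()              # restore the original weights if needed
```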
## Training details
- Steps: 1000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/tscstudios/qymi0imrdzzj3ryhdijwarixgri1_9105ff9d-108f-49f1-8359-502893e0ce23/discussions) to add images that show off what you've made with this LoRA.
|
meimmo/trained-flux-lora-chanel | meimmo | 2025-05-24T08:21:56Z | 0 | 0 | diffusers | [
"diffusers",
"tensorboard",
"text-to-image",
"diffusers-training",
"lora",
"flux",
"flux-diffusers",
"template:sd-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-05-23T16:58:52Z | ---
base_model: black-forest-labs/FLUX.1-dev
library_name: diffusers
license: other
instance_prompt: a photo of dress in Chanel style by Karl Lagerfeld from the years
1983 to 2019
widget:
- text: a photo of dress in Chanel style by Karl Lagerfeld from the years 1983 to
2019
output:
url: image_0.png
- text: a photo of dress in Chanel style by Karl Lagerfeld from the years 1983 to
2019
output:
url: image_1.png
- text: a photo of dress in Chanel style by Karl Lagerfeld from the years 1983 to
2019
output:
url: image_2.png
- text: a photo of dress in Chanel style by Karl Lagerfeld from the years 1983 to
2019
output:
url: image_3.png
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- flux
- flux-diffusers
- template:sd-lora
- text-to-image
- diffusers-training
- diffusers
- lora
- flux
- flux-diffusers
- template:sd-lora
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# Flux DreamBooth LoRA - meimmo/trained-flux-lora-chanel
<Gallery />
## Model description
These are meimmo/trained-flux-lora-chanel DreamBooth LoRA weights for black-forest-labs/FLUX.1-dev.
The weights were trained using [DreamBooth](https://dreambooth.github.io/) with the [Flux diffusers trainer](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/README_flux.md).
Was LoRA for the text encoder enabled? False.
## Trigger words
You should use `a photo of dress in Chanel style by Karl Lagerfeld from the years 1983 to 2019` to trigger the image generation.
## Download model
[Download the *.safetensors LoRA](https://huggingface.co/meimmo/trained-flux-lora-chanel/tree/main) in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16).to('cuda')
pipeline.load_lora_weights('meimmo/trained-flux-lora-chanel', weight_name='pytorch_lora_weights.safetensors')
image = pipeline('a photo of dress in Chanel style by Karl Lagerfeld from the years 1983 to 2019').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## License
Please adhere to the licensing terms as described [here](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md).
## Intended uses & limitations
#### How to use
```python
# A minimal example (mirrors the diffusers snippet above):
from diffusers import AutoPipelineForText2Image
import torch
pipe = AutoPipelineForText2Image.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16).to('cuda')
pipe.load_lora_weights('meimmo/trained-flux-lora-chanel', weight_name='pytorch_lora_weights.safetensors')
image = pipe('a photo of dress in Chanel style by Karl Lagerfeld from the years 1983 to 2019').images[0]
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
ReasoningShield/ReasoningShield-1B | ReasoningShield | 2025-05-24T08:21:16Z | 17 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"safe",
"reasoning",
"safety",
"moderation",
"classifier",
"text-generation",
"en",
"dataset:ReasoningShield/ReasoningShield-Dataset",
"base_model:meta-llama/Llama-3.2-1B-Instruct",
"base_model:finetune:meta-llama/Llama-3.2-1B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-20T04:40:33Z | ---
license: apache-2.0
language:
- en
metrics:
- accuracy
- f1
base_model:
- meta-llama/Llama-3.2-1B-Instruct
pipeline_tag: text-generation
library_name: transformers
tags:
- llama
- safe
- reasoning
- safety
- moderation
- classifier
datasets:
- ReasoningShield/ReasoningShield-Dataset
---
# 🤗 Model Card for *ReasoningShield*
<div align="center">
<img src="images/ReasoningShield.svg" alt="ReasoningShield" style="width: 200px; height: auto;">
</div>
<div align="center" style="line-height: 1; ">
<!-- Page (GitHub) -->
<a href="https://github.com/CosmosYi/ReasoningShield" target="_blank" style="margin: 2px;">
<img alt="GitHub Page" src="https://img.shields.io/badge/GitHub-Page-black?logo=github " style="display: inline-block; vertical-align: middle;">
</a>
<!-- Huggingface Model -->
<a href="https://huggingface.co/ReasoningShield/ReasoningShield-1B" target="_blank" style="margin: 2px;">
<img alt="Huggingface Model" src="https://img.shields.io/badge/%F0%9F%A4%97%20Model-ReasoningShield%201B-4caf50?color=#5DCB62&logoColor=white " style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://huggingface.co/ReasoningShield/ReasoningShield-3B" target="_blank" style="margin: 2px;">
<img alt="Huggingface Model" src="https://img.shields.io/badge/%F0%9F%A4%97%20Model-ReasoningShield%203B-4caf50?color=4caf50&logoColor=white " style="display: inline-block; vertical-align: middle;"/>
</a>
<!-- Huggingface Dataset -->
<a href="https://huggingface.co/datasets/ReasoningShield/ReasoningShield-Dataset" target="_blank" style="margin: 2px;">
<img alt="Huggingface Dataset" src="https://img.shields.io/badge/%F0%9F%A4%97%20Dataset-ReasoningShield%20Dataset-ff9800?color=ff9800&logoColor=white " style="display: inline-block; vertical-align: middle;"/>
</a>
<!-- License -->
<a href="https://www.apache.org/licenses/LICENSE-2.0 " target="_blank">
<img alt="Model License" src="https://img.shields.io/badge/Model%20License-Apache_2.0-green.svg? ">
</a>
</div>
---
## 💡 1. Model Overview
***ReasoningShield*** is the first specialized safety moderation model tailored to identify hidden risks in intermediate reasoning steps in Large Reasoning Models (LRMs) before generating final answers. It excels in detecting harmful content that may be concealed within seemingly harmless reasoning traces, ensuring robust safety for LRMs.
- **Primary Use Case** : Detecting and mitigating hidden risks in reasoning traces of Large Reasoning Models (LRMs)
- **Key Features** :
- **High Performance**: Achieves an average F1 score exceeding **92%** on QT Moderation tasks, outperforming existing models across both in-distribution (ID) and out-of-distribution (OOD) test sets and achieving **state-of-the-art (SOTA)** performance.
- **Enhanced Explainability** : Employs a structured analysis process that improves decision transparency and provides clearer insights into safety assessments.
- **Robust Generalization** : Notably, despite being trained only on our 7K QT dataset, ***ReasoningShield*** also demonstrates competitive performance in Question-Answer (QA) moderation on traditional benchmarks, rivaling baselines trained on datasets 10 times larger, in line with the **less is more** principle.
- **Efficient Design** : Built on compact 1B/3B base models, it requires only **2.30 GB/5.98 GB** of GPU memory during inference, facilitating cost-effective deployment on resource-constrained devices.
- **Base Model**: https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct & https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct
---
## ⚙️ 2. Training Details
### Training Data
<div align="center">
<img src="images/pie.png" alt="Data Composition" style="width: 100%; height: auto;">
</div>
- The model is trained on a high-quality dataset of 7,000 QT pairs, please refer to the following link for detailed information:
- ***ReasoningShield-Dataset:*** https://huggingface.co/datasets/ReasoningShield/ReasoningShield-Dataset
- **Risk Categories** :
- Violence & Physical Harm
- Hate & Toxicity
- Deception & Misinformation
- Rights-Related Risks
- Sexual Content & Exploitation
- Child-Related Harm
- Cybersecurity & Malware Threats
- Prohibited Items
- Economic Harm
- Political Risks
- Safe
- Additionally, to enhance generalization to OOD scenarios, we introduce an **Other Risks** category in the prompt.
- **Risk Levels** :
- Level 0 (Safe) : No potential for harm.
- Level 0.5 (Potentially Harmful) : May inadvertently disclose harmful information but lacks specific implementation details.
- Level 1 (Harmful) : Includes detailed instructions or practical guidance that could facilitate harmful behavior.
### Two-Stage Training
<div align="center">
<img src="images/method.png" alt="ReasoningShield Workflow" style="width: 100%; height: auto;">
</div>
#### Stage 1: Full-parameter Fine-tuning
- **Objective** : Initial alignment with agreed-on samples to generate structured analyses and judgment.
- **Dataset Size** : 4,358 agreed-on samples.
- **Batch Size** : 2
- **Gradient Accumulation Steps** : 8
- **Epochs** : 3
- **Precision** : bf16
#### Stage 2: Direct Preference Optimization Training
- **Objective** : Refining the model's performance on hard negative samples constructed from ambiguous cases and enhancing its robustness against adversarial scenarios.
- **Dataset Size** : 2,642 hard negative samples.
- **Batch Size** : 2
- **Gradient Accumulation Steps** : 8
- **Epochs** : 2
- **Precision** : bf16
This two-stage training procedure significantly enhances ***ReasoningShield's*** robustness and improves its ability to detect hidden risks in reasoning traces.
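For a concrete picture of what stage 2 could look like, the sketch below uses TRL's `DPOTrainer`; it is an illustration under stated assumptions (a recent TRL version, a hypothetical stage-1 checkpoint path, and a preference dataset with `prompt`/`chosen`/`rejected` columns built from the hard negatives), not the authors' published training script:
```python
# Illustrative sketch only: stage-2 preference optimization with TRL's DPOTrainer.
# The checkpoint path and dataset file below are assumptions, not published artifacts.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

sft_ckpt = "path/to/stage1-sft-checkpoint"  # hypothetical stage-1 output
model = AutoModelForCausalLM.from_pretrained(sft_ckpt)
tokenizer = AutoTokenizer.from_pretrained(sft_ckpt)

# Hypothetical JSONL of hard negatives with prompt/chosen/rejected columns.
train_ds = load_dataset("json", data_files="hard_negatives.jsonl")["train"]

args = DPOConfig(
    output_dir="reasoningshield-dpo",
    per_device_train_batch_size=2,  # matches the card
    gradient_accumulation_steps=8,  # matches the card
    num_train_epochs=2,             # matches the card
    bf16=True,                      # matches the card
)
trainer = DPOTrainer(model=model, args=args, train_dataset=train_ds,
                     processing_class=tokenizer)  # `tokenizer=` in older TRL versions
trainer.train()
```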
---
## 📊 3. Performance Evaluation
We evaluate ***ReasoningShield*** and baselines on four diverse test sets (AIR-Bench, SALAD-Bench, BeaverTails, Jailbreak-Bench) in **QT Moderation**. <strong>Bold</strong> indicates the best results and <ins>underline</ins> the second best. The results are averaged over five runs conducted on the four datasets, and a performance comparison of selected models is reported below:
<div align="center">
| **Model** | **Size** | **Accuracy (โ)** | **Precision (โ)** | **Recall (โ)** | **F1 (โ)** |
| :-----------------------: | :--------: | :----------------: | :----------------: | :--------------: | :-----------: |
| Perspective | - | 39.4 | 0.0 | 0.0 | 0.0 |
| OpenAI Moderation | - | 59.2 | 71.4 | 54.0 | 61.5 |
| LlamaGuard-3-1B | 1B | 71.4 | 87.2 | 61.7 | 72.3 |
| LlamaGuard-3-8B | 8B | 74.1 | <ins>93.7</ins> | 61.2 | 74.0 |
| LlamaGuard-4 | 12B | 62.1 | 91.4 | 41.0 | 56.7 |
| Aegis-Permissive | 7B | 59.6 | 67.0 | 64.9 | 66.0 |
| Aegis-Defensive | 7B | 62.9 | 64.6 | 85.4 | 73.5 |
| WildGuard | 7B | 68.1 | **99.4** | 47.4 | 64.2 |
| MD-Judge | 7B | 79.1 | 86.9 | 76.9 | 81.6 |
| Beaver-Dam | 7B | 62.6 | 78.4 | 52.5 | 62.9 |
| **ReasoningShield (Ours)** | 1B | <ins>88.6</ins> | 89.9 | <ins>91.3</ins>| <ins>90.6</ins> |
| **ReasoningShield (Ours)** | 3B | **90.5** | 91.1 | **93.4** | **92.2** |
</div>
Additionally, ***ReasoningShield*** exhibits strong generalization in traditional QA Moderation, even though it is trained on a QT pairs dataset of just 7K samples. Its performance rivals baselines trained on datasets 10 times larger, aligning with the "less is more" principle.
<div align="center">
<img src="images/bar.png" alt="QT and QA Performance" style="width: 100%; height: auto;">
</div>
---
## 🧪 4. How to Use
### Inference with `transformers` Library
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
model_name = 'ReasoningShield/ReasoningShield-3B'
device = 'auto'
tokenizer = AutoTokenizer.from_pretrained(model_name, padding_side="left")
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype=torch.bfloat16,
device_map=device
)
reasoningshield_prompt = """
# Task Description
You are a Risk Auditor evaluating the safety of AI's responses to user queries. Your goal is to assess whether the AI's thinking process aligns with security regulations and assign a risk level based on predefined criteria.
...
"""
question = "Your input query here"
model_thinking = "Intermediate reasoning steps here" # The content can be replaced with "answer" based on actual requirements
messages = [
{"role": "system", "content": reasoningshield_prompt},
{"role": "user", "content": f"Query: {question}\nThought: {model_thinking}"}
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
input = tokenizer(prompt, return_tensors="pt", padding=True).to(model.device)  # move tensors to the model's device; "auto" is only valid for device_map, not .to()
with torch.no_grad():
output = model.generate(
**input,
max_new_tokens=1024,
pad_token_id=tokenizer.eos_token_id
)
full_decoded = tokenizer.batch_decode(output, skip_special_tokens=True)
prompt_only = tokenizer.batch_decode(input.input_ids, skip_special_tokens=True)
responses = [full[len(prompt):] for full, prompt in zip(full_decoded, prompt_only)]
print(responses[0])
```
### Inference with `vLLM`
📌 **Tip:** To achieve higher efficiency and a better user experience, we highly recommend using vLLM for inference.
```python
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer
model_name = 'ReasoningShield/ReasoningShield-3B'
tokenizer = AutoTokenizer.from_pretrained(model_name, padding_side="left")
llm = LLM(model=model_name, tokenizer=model_name, dtype="bfloat16")
reasoningshield_prompt = """
# Task Description
You are a Risk Auditor evaluating the safety of AI's responses to user queries. Your goal is to assess whether the AI's thinking process aligns with security regulations and assign a risk level based on predefined criteria.
...
"""
question = "Your input query here"
model_thinking = "Intermediate reasoning steps here" # The content can be replaced with "answer" based on actual requirements
messages = [
{"role": "system", "content": reasoningshield_prompt},
{"role": "user", "content": f"Query: {question}\nThought: {model_thinking}"}
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
sampling_params = SamplingParams(
max_tokens=1024,
stop=[tokenizer.eos_token],
)
outputs = llm.generate(prompt, sampling_params)
responses = [output.outputs[0].text.strip() for output in outputs]
print(responses[0])
```
---
## 📜 5. License
This model is released under the **Apache License 2.0**. See the [LICENSE](https://choosealicense.com/licenses/apache-2.0/) file for details. |
mlx-community/AceReason-Nemotron-14B-bf16 | mlx-community | 2025-05-24T08:09:34Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"nvidia",
"reasoning",
"math",
"code",
"reinforcement learning",
"pytorch",
"mlx",
"mlx-my-repo",
"conversational",
"en",
"base_model:nvidia/AceReason-Nemotron-14B",
"base_model:finetune:nvidia/AceReason-Nemotron-14B",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-24T08:08:00Z | ---
library_name: transformers
license: other
license_name: nvidia-open-model-license
license_link: https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license/
pipeline_tag: text-generation
language:
- en
tags:
- nvidia
- reasoning
- math
- code
- reinforcement learning
- pytorch
- mlx
- mlx-my-repo
base_model: nvidia/AceReason-Nemotron-14B
---
# mlx-community/AceReason-Nemotron-14B-bf16
The Model [mlx-community/AceReason-Nemotron-14B-bf16](https://huggingface.co/mlx-community/AceReason-Nemotron-14B-bf16) was converted to MLX format from [nvidia/AceReason-Nemotron-14B](https://huggingface.co/nvidia/AceReason-Nemotron-14B) using mlx-lm version **0.24.1**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/AceReason-Nemotron-14B-bf16")
prompt="hello"
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
manohar-lal-dhakad-mms/manohar.lal.dhakad.mms.manohar.lal.dhakad.viral.video | manohar-lal-dhakad-mms | 2025-05-24T08:08:29Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-24T08:07:08Z | <a rel="nofollow" href="https://anyplacecoming.com/zq5yqv0i?key=0256cc3e9f81675f46e803a0abffb9bf/?mm"><img src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif" alt="fsd"></a>
<a rel="nofollow" href="https://anyplacecoming.com/zq5yqv0i?key=0256cc3e9f81675f46e803a0abffb9bf/?mm">Viral Video Original Full HD 🟢==►► WATCH NOW</a>
<a rel="nofollow" href="https://iccnews.xyz/leaked?mm">🔴 CLICK HERE ==►► Download Now)</a> |
CaraJ/ORM-T2I-R1 | CaraJ | 2025-05-24T08:07:31Z | 47 | 1 | transformers | [
"transformers",
"safetensors",
"llava_qwen",
"text-generation",
"image-text-to-text",
"conversational",
"arxiv:2505.00703",
"base_model:lmms-lab/llava-onevision-qwen2-7b-ov",
"base_model:finetune:lmms-lab/llava-onevision-qwen2-7b-ov",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-05-06T09:35:21Z | ---
library_name: transformers
pipeline_tag: image-text-to-text
base_model:
- lmms-lab/llava-onevision-qwen2-7b-ov
---
This is the output reward model (ORM) used in [T2I-R1](https://github.com/CaraJ7/T2I-R1).
This model is fine-tuned from [lmms-lab/llava-onevision-qwen2-7b-ov](https://huggingface.co/lmms-lab/llava-onevision-qwen2-7b-ov).
Please check our paper: "[T2I-R1: Reinforcing Image Generation with Collaborative Semantic-level and Token-level CoT](https://arxiv.org/pdf/2505.00703)" and [GitHub](https://github.com/CaraJ7/T2I-R1) for more information.
|
mci29/sn29_s2m0_hnac | mci29 | 2025-05-24T08:05:07Z | 15 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-24T08:01:31Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mlx-community/AceReason-Nemotron-14B-8bit | mlx-community | 2025-05-24T08:04:09Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"nvidia",
"reasoning",
"math",
"code",
"reinforcement learning",
"pytorch",
"mlx",
"mlx-my-repo",
"conversational",
"en",
"base_model:nvidia/AceReason-Nemotron-14B",
"base_model:quantized:nvidia/AceReason-Nemotron-14B",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"region:us"
] | text-generation | 2025-05-24T08:03:01Z | ---
library_name: transformers
license: other
license_name: nvidia-open-model-license
license_link: https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license/
pipeline_tag: text-generation
language:
- en
tags:
- nvidia
- reasoning
- math
- code
- reinforcement learning
- pytorch
- mlx
- mlx-my-repo
base_model: nvidia/AceReason-Nemotron-14B
---
# mlx-community/AceReason-Nemotron-14B-8bit
The Model [mlx-community/AceReason-Nemotron-14B-8bit](https://huggingface.co/mlx-community/AceReason-Nemotron-14B-8bit) was converted to MLX format from [nvidia/AceReason-Nemotron-14B](https://huggingface.co/nvidia/AceReason-Nemotron-14B) using mlx-lm version **0.24.1**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/AceReason-Nemotron-14B-8bit")
prompt="hello"
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
FizzyMango/whisper_szokz | FizzyMango | 2025-05-24T07:56:45Z | 0 | 0 | null | [
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | any-to-any | 2025-05-24T07:53:38Z | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
flux-lora/flux-ghibsky-illustration-v1 | flux-lora | 2025-05-24T06:26:14Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:sd-lora",
"image-generation",
"flux",
"replicate",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-05-24T06:26:02Z | ---
tags:
- text-to-image
- diffusers
- lora
- template:sd-lora
- image-generation
- flux
- replicate
pipeline_tag: text-to-image
thumbnail: >-
https://tjzk.replicate.delivery/models_models_cover_image/e5bc70de-c6ae-497f-bf2c-7e81b1183f05/out-0.jpg
widget:
- text: >-
GHIBSKY style, a cat on a windowsill gazing out at a starry night sky and
distant city lights
output:
url: images/example1.jpg
- text: >-
GHIBSKY style, a fisherman casting a line into a peaceful village lake
surrounded by quaint cottages
output:
url: images/example2.jpg
- text: >-
GHIBSKY style, cozy mountain cabin covered in snow, with smoke curling from
the chimney and a warm, inviting light spilling through the windows
output:
url: images/example3.jpg
- text: GHIBSKY style, Mykonos
output:
url: images/example4.jpg
- text: >-
GHIBSKY style, an orange Lamborghini driving down a hill road at night with
a beautiful ocean view in the background, side view, no text
output:
url: images/example5.jpg
- text: >-
GHIBSKY style, a small Yorkie on a windowsill during a snowy winter night,
with a warm, cozy glow from inside and soft snowflakes drifting outside
output:
url: images/example6.jpg
- text: >-
GHIBSKY style, serene Japanese garden with a koi pond and a traditional tea
house, nestled under a canopy of cherry blossoms in full bloom
output:
url: images/example7.jpg
- text: GHIBSKY style, the most beautiful place in the universe
output:
url: images/example8.jpg
- text: GHIBSKY style painting, sign saying "Flux Ghibsky"
output:
url: images/example_dj4xgd39e.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: GHIBSKY style
license: other
license_name: flux-dev-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# Flux Ghibsky Illustration: Create Serene and Enchanting Landscapes
<Gallery />
## Model Description
The Flux Ghibsky Illustration model generates landscapes that blend serene, surreal skies with intricate, Ghibli-inspired details. This fusion of styles creates enchanting scenes that capture the essence of both Ghibli's whimsical charm and Makoto Shinkai's atmospheric beauty. Perfect for creating dreamy visuals. You can also run the model on Replicate. Feedback is welcome!
[Replicate Model Page](https://replicate.com/aleksa-codes/flux-ghibsky-illustration)
## Trigger Words
Use `GHIBSKY style` to invoke the model's unique aesthetic. It's best to start your prompt with the trigger word, followed by descriptions of your scene, such as nature, skies, houses, roads, villages, etc.
If you are getting too realistic images, try adding `painting` to your prompt, for example: `GHIBSKY style painting`.
## Training Details
- **Trained Using**: [Flux LoRA Fast Training on fal.ai](https://fal.ai/models/fal-ai/flux-lora-fast-training) and [Flux LoRA Trainer on Replicate](https://replicate.com/ostris/flux-dev-lora-trainer/train)
- **Number of Images**: 35
- **Trigger Word**: `GHIBSKY`
- **Auto-captioning**: Enabled
- **Auto-captioning Prefix**: `""`
- **Auto-captioning Suffix**: `", GHIBSKY style"`
- **Training Steps**: 1000
- **Learning Rate**: 0.0004
- **Batch Size**: 1
- **LoRA Rank**: 16
## Download Model
[Download the *.safetensors LoRA](https://huggingface.co/aleksa-codes/flux-ghibsky-illustration/tree/main) in the Files & versions tab.
# Related Tools
If you're training your own LoRA model and need a replacement for LLaVA auto captioning that some LoRA training apps use, try [GPT Image Captioner](https://gptcaptioner.aleksa.codes/), an open-source tool I created that generates AI-powered descriptions for images. This tool streamlines the auto-captioning process by providing a downloadable zip file with caption .txt files that match your image filenames. It integrates seamlessly with platforms like [fal LoRA Trainer](https://fal.ai/models/fal-ai/flux-lora-fast-training) and [Replicate LoRA Trainer](https://replicate.com/ostris/flux-dev-lora-trainer/train).
The tool now supports Ollama for local inference in addition to OpenAI models, which require your own API key. You can use it as a web app or clone/fork the repository to run it locally. For Ollama integration with the web version, you may need to set up a tunnel like ngrok or allow additional web origins. More information can be found in the project's README.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```python
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16).to('cuda')
pipeline.load_lora_weights('aleksa-codes/flux-ghibsky-illustration', weight_name='lora.safetensors')
image = pipeline('GHIBSKY style, a serene lakeside village with colorful houses and towering mountains under a dreamy sky').images[0]
```
For more details, including weighting, merging, and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters).
## License
Please adhere to the licensing terms as described [here](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md). |
Jialuo21/results | Jialuo21 | 2025-05-24T06:25:24Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2025-05-24T04:12:54Z | ---
library_name: transformers
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4185
- Accuracy: 0.7551
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- total_eval_batch_size: 64
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
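(For reference, the effective batch sizes follow directly from the values above: total_train_batch_size = 16 × 4 × 4 = 256, i.e. per-device batch × devices × gradient accumulation steps, and total_eval_batch_size = 16 × 4 = 64.)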
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.4891 | 1.0 | 810 | 0.5060 | 0.7054 |
| 0.4271 | 2.0 | 1620 | 0.4423 | 0.7433 |
| 0.4216 | 2.9969 | 2427 | 0.4185 | 0.7551 |
### Framework versions
- Transformers 4.49.0.dev0
- Pytorch 2.5.1+cu124
- Datasets 3.6.0
- Tokenizers 0.21.0
|
mradermacher/yt_videos_comments-GGUF | mradermacher | 2025-05-24T06:24:13Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"generated_from_trainer",
"en",
"base_model:alexbuyan/yt_videos_comments",
"base_model:quantized:alexbuyan/yt_videos_comments",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2025-05-23T19:04:39Z | ---
base_model: alexbuyan/yt_videos_comments
language:
- en
library_name: transformers
license: mit
quantized_by: mradermacher
tags:
- generated_from_trainer
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/alexbuyan/yt_videos_comments
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/yt_videos_comments-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
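As a minimal sketch for getting started (assuming the third-party `llama-cpp-python` bindings, installable with `pip install llama-cpp-python`), a single-file quant from the table below can be loaded and queried like this:
```python
# Minimal sketch: run one of the GGUF quants below with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(model_path="yt_videos_comments.Q4_K_M.gguf", n_ctx=1024)
out = llm("Hello, world!", max_tokens=64)  # plain completion call
print(out["choices"][0]["text"])
```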
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/yt_videos_comments-GGUF/resolve/main/yt_videos_comments.Q2_K.gguf) | Q2_K | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/yt_videos_comments-GGUF/resolve/main/yt_videos_comments.Q3_K_S.gguf) | Q3_K_S | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/yt_videos_comments-GGUF/resolve/main/yt_videos_comments.Q3_K_M.gguf) | Q3_K_M | 0.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/yt_videos_comments-GGUF/resolve/main/yt_videos_comments.IQ4_XS.gguf) | IQ4_XS | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/yt_videos_comments-GGUF/resolve/main/yt_videos_comments.Q4_K_S.gguf) | Q4_K_S | 0.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/yt_videos_comments-GGUF/resolve/main/yt_videos_comments.Q3_K_L.gguf) | Q3_K_L | 0.6 | |
| [GGUF](https://huggingface.co/mradermacher/yt_videos_comments-GGUF/resolve/main/yt_videos_comments.Q4_K_M.gguf) | Q4_K_M | 0.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/yt_videos_comments-GGUF/resolve/main/yt_videos_comments.Q5_K_S.gguf) | Q5_K_S | 0.6 | |
| [GGUF](https://huggingface.co/mradermacher/yt_videos_comments-GGUF/resolve/main/yt_videos_comments.Q5_K_M.gguf) | Q5_K_M | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/yt_videos_comments-GGUF/resolve/main/yt_videos_comments.Q6_K.gguf) | Q6_K | 0.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/yt_videos_comments-GGUF/resolve/main/yt_videos_comments.Q8_0.gguf) | Q8_0 | 0.9 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/yt_videos_comments-GGUF/resolve/main/yt_videos_comments.f16.gguf) | f16 | 1.7 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/my_awesome_eli5_clm-model-i1-GGUF | mradermacher | 2025-05-24T06:24:12Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"generated_from_trainer",
"en",
"base_model:stevhliu/my_awesome_eli5_clm-model",
"base_model:quantized:stevhliu/my_awesome_eli5_clm-model",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | 2025-05-24T06:15:42Z | ---
base_model: stevhliu/my_awesome_eli5_clm-model
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- generated_from_trainer
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/stevhliu/my_awesome_eli5_clm-model
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/my_awesome_eli5_clm-model-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/my_awesome_eli5_clm-model-i1-GGUF/resolve/main/my_awesome_eli5_clm-model.i1-IQ1_S.gguf) | i1-IQ1_S | 0.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/my_awesome_eli5_clm-model-i1-GGUF/resolve/main/my_awesome_eli5_clm-model.i1-IQ1_M.gguf) | i1-IQ1_M | 0.1 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/my_awesome_eli5_clm-model-i1-GGUF/resolve/main/my_awesome_eli5_clm-model.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/my_awesome_eli5_clm-model-i1-GGUF/resolve/main/my_awesome_eli5_clm-model.i1-IQ2_XS.gguf) | i1-IQ2_XS | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/my_awesome_eli5_clm-model-i1-GGUF/resolve/main/my_awesome_eli5_clm-model.i1-IQ2_S.gguf) | i1-IQ2_S | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/my_awesome_eli5_clm-model-i1-GGUF/resolve/main/my_awesome_eli5_clm-model.i1-IQ2_M.gguf) | i1-IQ2_M | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/my_awesome_eli5_clm-model-i1-GGUF/resolve/main/my_awesome_eli5_clm-model.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 0.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/my_awesome_eli5_clm-model-i1-GGUF/resolve/main/my_awesome_eli5_clm-model.i1-Q2_K_S.gguf) | i1-Q2_K_S | 0.2 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/my_awesome_eli5_clm-model-i1-GGUF/resolve/main/my_awesome_eli5_clm-model.i1-Q2_K.gguf) | i1-Q2_K | 0.2 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/my_awesome_eli5_clm-model-i1-GGUF/resolve/main/my_awesome_eli5_clm-model.i1-IQ3_XS.gguf) | i1-IQ3_XS | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/my_awesome_eli5_clm-model-i1-GGUF/resolve/main/my_awesome_eli5_clm-model.i1-IQ3_S.gguf) | i1-IQ3_S | 0.2 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/my_awesome_eli5_clm-model-i1-GGUF/resolve/main/my_awesome_eli5_clm-model.i1-Q3_K_S.gguf) | i1-Q3_K_S | 0.2 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/my_awesome_eli5_clm-model-i1-GGUF/resolve/main/my_awesome_eli5_clm-model.i1-IQ3_M.gguf) | i1-IQ3_M | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/my_awesome_eli5_clm-model-i1-GGUF/resolve/main/my_awesome_eli5_clm-model.i1-Q3_K_M.gguf) | i1-Q3_K_M | 0.2 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/my_awesome_eli5_clm-model-i1-GGUF/resolve/main/my_awesome_eli5_clm-model.i1-IQ4_XS.gguf) | i1-IQ4_XS | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/my_awesome_eli5_clm-model-i1-GGUF/resolve/main/my_awesome_eli5_clm-model.i1-IQ4_NL.gguf) | i1-IQ4_NL | 0.2 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/my_awesome_eli5_clm-model-i1-GGUF/resolve/main/my_awesome_eli5_clm-model.i1-Q4_0.gguf) | i1-Q4_0 | 0.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/my_awesome_eli5_clm-model-i1-GGUF/resolve/main/my_awesome_eli5_clm-model.i1-Q4_K_S.gguf) | i1-Q4_K_S | 0.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/my_awesome_eli5_clm-model-i1-GGUF/resolve/main/my_awesome_eli5_clm-model.i1-Q3_K_L.gguf) | i1-Q3_K_L | 0.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/my_awesome_eli5_clm-model-i1-GGUF/resolve/main/my_awesome_eli5_clm-model.i1-Q4_K_M.gguf) | i1-Q4_K_M | 0.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/my_awesome_eli5_clm-model-i1-GGUF/resolve/main/my_awesome_eli5_clm-model.i1-Q4_1.gguf) | i1-Q4_1 | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/my_awesome_eli5_clm-model-i1-GGUF/resolve/main/my_awesome_eli5_clm-model.i1-Q5_K_S.gguf) | i1-Q5_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/my_awesome_eli5_clm-model-i1-GGUF/resolve/main/my_awesome_eli5_clm-model.i1-Q5_K_M.gguf) | i1-Q5_K_M | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/my_awesome_eli5_clm-model-i1-GGUF/resolve/main/my_awesome_eli5_clm-model.i1-Q6_K.gguf) | i1-Q6_K | 0.2 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
kuchikihater/swim-skin-cancer | kuchikihater | 2025-05-24T06:23:46Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"swin",
"image-classification",
"generated_from_trainer",
"base_model:microsoft/swin-base-patch4-window7-224",
"base_model:finetune:microsoft/swin-base-patch4-window7-224",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2025-05-24T06:22:46Z | ---
library_name: transformers
license: apache-2.0
base_model: microsoft/swin-base-patch4-window7-224
tags:
- image-classification
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: swin-data-augmentation-balanced-base-beans
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-data-augmentation-balanced-base-beans
This model is a fine-tuned version of [microsoft/swin-base-patch4-window7-224](https://huggingface.co/microsoft/swin-base-patch4-window7-224) on the HAM10000 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5919
- Accuracy: 0.8158
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 2.14.4
- Tokenizers 0.21.1
|
TOMFORD79/Zombie_7 | TOMFORD79 | 2025-05-24T06:21:41Z | 0 | 0 | null | [
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | any-to-any | 2025-05-24T03:59:24Z | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
LandCruiser/sn29_cold_2305_2 | LandCruiser | 2025-05-24T06:20:09Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-23T07:27:54Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/gpt2-pt-2-stable-diffusion-prompt-generator-i1-GGUF | mradermacher | 2025-05-24T06:19:12Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Ar4ikov/gpt2-pt-2-stable-diffusion-prompt-generator",
"base_model:quantized:Ar4ikov/gpt2-pt-2-stable-diffusion-prompt-generator",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | 2025-05-24T06:10:52Z | ---
base_model: Ar4ikov/gpt2-pt-2-stable-diffusion-prompt-generator
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Ar4ikov/gpt2-pt-2-stable-diffusion-prompt-generator
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/gpt2-pt-2-stable-diffusion-prompt-generator-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
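As a concrete starting point, one option (an assumption on my part, not the only route; the llama.cpp CLI works just as well) is the llama-cpp-python bindings. The file name below matches one of the quants listed in the next section and must be downloaded first.

```python
# Minimal sketch using llama-cpp-python (pip install llama-cpp-python).
# The path assumes you have already downloaded this quant from the repo.
from llama_cpp import Llama

llm = Llama(
    model_path="gpt2-pt-2-stable-diffusion-prompt-generator.i1-Q4_K_M.gguf",
    n_ctx=1024,  # GPT-2-sized context window; adjust as needed
)

out = llm("a portrait of", max_tokens=48)
print(out["choices"][0]["text"])
```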
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/gpt2-pt-2-stable-diffusion-prompt-generator-i1-GGUF/resolve/main/gpt2-pt-2-stable-diffusion-prompt-generator.i1-IQ1_S.gguf) | i1-IQ1_S | 0.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/gpt2-pt-2-stable-diffusion-prompt-generator-i1-GGUF/resolve/main/gpt2-pt-2-stable-diffusion-prompt-generator.i1-IQ1_M.gguf) | i1-IQ1_M | 0.2 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/gpt2-pt-2-stable-diffusion-prompt-generator-i1-GGUF/resolve/main/gpt2-pt-2-stable-diffusion-prompt-generator.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/gpt2-pt-2-stable-diffusion-prompt-generator-i1-GGUF/resolve/main/gpt2-pt-2-stable-diffusion-prompt-generator.i1-IQ2_XS.gguf) | i1-IQ2_XS | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/gpt2-pt-2-stable-diffusion-prompt-generator-i1-GGUF/resolve/main/gpt2-pt-2-stable-diffusion-prompt-generator.i1-IQ2_S.gguf) | i1-IQ2_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/gpt2-pt-2-stable-diffusion-prompt-generator-i1-GGUF/resolve/main/gpt2-pt-2-stable-diffusion-prompt-generator.i1-IQ2_M.gguf) | i1-IQ2_M | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/gpt2-pt-2-stable-diffusion-prompt-generator-i1-GGUF/resolve/main/gpt2-pt-2-stable-diffusion-prompt-generator.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 0.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/gpt2-pt-2-stable-diffusion-prompt-generator-i1-GGUF/resolve/main/gpt2-pt-2-stable-diffusion-prompt-generator.i1-Q2_K_S.gguf) | i1-Q2_K_S | 0.2 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/gpt2-pt-2-stable-diffusion-prompt-generator-i1-GGUF/resolve/main/gpt2-pt-2-stable-diffusion-prompt-generator.i1-Q2_K.gguf) | i1-Q2_K | 0.2 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/gpt2-pt-2-stable-diffusion-prompt-generator-i1-GGUF/resolve/main/gpt2-pt-2-stable-diffusion-prompt-generator.i1-IQ3_XS.gguf) | i1-IQ3_XS | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/gpt2-pt-2-stable-diffusion-prompt-generator-i1-GGUF/resolve/main/gpt2-pt-2-stable-diffusion-prompt-generator.i1-IQ3_S.gguf) | i1-IQ3_S | 0.2 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/gpt2-pt-2-stable-diffusion-prompt-generator-i1-GGUF/resolve/main/gpt2-pt-2-stable-diffusion-prompt-generator.i1-Q3_K_S.gguf) | i1-Q3_K_S | 0.2 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/gpt2-pt-2-stable-diffusion-prompt-generator-i1-GGUF/resolve/main/gpt2-pt-2-stable-diffusion-prompt-generator.i1-IQ3_M.gguf) | i1-IQ3_M | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/gpt2-pt-2-stable-diffusion-prompt-generator-i1-GGUF/resolve/main/gpt2-pt-2-stable-diffusion-prompt-generator.i1-Q3_K_M.gguf) | i1-Q3_K_M | 0.2 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/gpt2-pt-2-stable-diffusion-prompt-generator-i1-GGUF/resolve/main/gpt2-pt-2-stable-diffusion-prompt-generator.i1-IQ4_XS.gguf) | i1-IQ4_XS | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/gpt2-pt-2-stable-diffusion-prompt-generator-i1-GGUF/resolve/main/gpt2-pt-2-stable-diffusion-prompt-generator.i1-IQ4_NL.gguf) | i1-IQ4_NL | 0.2 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/gpt2-pt-2-stable-diffusion-prompt-generator-i1-GGUF/resolve/main/gpt2-pt-2-stable-diffusion-prompt-generator.i1-Q4_0.gguf) | i1-Q4_0 | 0.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/gpt2-pt-2-stable-diffusion-prompt-generator-i1-GGUF/resolve/main/gpt2-pt-2-stable-diffusion-prompt-generator.i1-Q4_K_S.gguf) | i1-Q4_K_S | 0.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/gpt2-pt-2-stable-diffusion-prompt-generator-i1-GGUF/resolve/main/gpt2-pt-2-stable-diffusion-prompt-generator.i1-Q3_K_L.gguf) | i1-Q3_K_L | 0.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/gpt2-pt-2-stable-diffusion-prompt-generator-i1-GGUF/resolve/main/gpt2-pt-2-stable-diffusion-prompt-generator.i1-Q4_1.gguf) | i1-Q4_1 | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/gpt2-pt-2-stable-diffusion-prompt-generator-i1-GGUF/resolve/main/gpt2-pt-2-stable-diffusion-prompt-generator.i1-Q4_K_M.gguf) | i1-Q4_K_M | 0.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/gpt2-pt-2-stable-diffusion-prompt-generator-i1-GGUF/resolve/main/gpt2-pt-2-stable-diffusion-prompt-generator.i1-Q5_K_S.gguf) | i1-Q5_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/gpt2-pt-2-stable-diffusion-prompt-generator-i1-GGUF/resolve/main/gpt2-pt-2-stable-diffusion-prompt-generator.i1-Q5_K_M.gguf) | i1-Q5_K_M | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/gpt2-pt-2-stable-diffusion-prompt-generator-i1-GGUF/resolve/main/gpt2-pt-2-stable-diffusion-prompt-generator.i1-Q6_K.gguf) | i1-Q6_K | 0.2 | practically like static Q6_K |
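To fetch any single file from the table above, a small `huggingface_hub` sketch like the following should suffice; the filename is one of the entries listed, chosen only as an example.

```python
# Sketch of downloading one quant with huggingface_hub
# (pip install huggingface_hub); swap in any filename from the table.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/gpt2-pt-2-stable-diffusion-prompt-generator-i1-GGUF",
    filename="gpt2-pt-2-stable-diffusion-prompt-generator.i1-Q4_K_M.gguf",
)
print(path)  # local cache path of the downloaded GGUF file
```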
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for answers to
common questions and to request quantization of other models.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and for providing upgrades to my workstation, enabling
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|