Column types and ranges: modelId string (5 to 139 chars), author string (2 to 42 chars), last_modified timestamp[us, tz=UTC] (2020-02-15 11:33:14 to 2025-05-28 21:29:27), downloads int64 (0 to 223M), likes int64 (0 to 11.7k), library_name string (455 classes), tags sequence (1 to 4.05k items), pipeline_tag string (54 classes), createdAt timestamp[us, tz=UTC] (2022-03-02 23:29:04 to 2025-05-28 21:27:24), card string (11 to 1.01M chars).

modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card
---|---|---|---|---|---|---|---|---|---
Tri0315/Triyatno | Tri0315 | 2025-05-21T11:48:58Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-05-21T11:48:58Z | ---
license: apache-2.0
---
|
jmalejandrob79/nrmmtrfckd5k | jmalejandrob79 | 2025-05-21T11:48:09Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-05-21T09:40:45Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: nrmmtrfckd5k
---
# Nrmmtrfckd5K
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) with the AI Toolkit trainer: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `nrmmtrfckd5k` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
    "prompt": "nrmmtrfckd5k",
    "lora_weights": "https://huggingface.co/jmalejandrob79/nrmmtrfckd5k/resolve/main/lora.safetensors"
}
output = replicate.run(
    "black-forest-labs/flux-dev-lora",
    input=input
)
for index, item in enumerate(output):
    with open(f"output_{index}.webp", "wb") as file:
        file.write(item.read())
```
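Note that the `replicate` client reads its API token from the `REPLICATE_API_TOKEN` environment variable, so set that before running the snippet.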
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('jmalejandrob79/nrmmtrfckd5k', weight_name='lora.safetensors')
image = pipeline('nrmmtrfckd5k').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters).
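For example, a minimal sketch of reducing the LoRA's influence before inference; `fuse_lora` comes from diffusers' LoRA-loading mixins, and the `0.8` scale is an illustrative value, not a recommendation from this card:
```py
# Fuse the LoRA into the base weights at reduced strength (0.8 is illustrative),
# then generate as usual with the trigger word.
pipeline.fuse_lora(lora_scale=0.8)
image = pipeline('nrmmtrfckd5k').images[0]
```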
## Training details
- Steps: 5000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/jmalejandrob79/nrmmtrfckd5k/discussions) to add images that show off what you’ve made with this LoRA.
|
Slamalama/gpt2-medium_nf4 | Slamalama | 2025-05-21T11:47:08Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2025-05-21T11:46:52Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
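In the meantime, a minimal hedged sketch of loading this checkpoint (repo id taken from this listing; the `4-bit`/`bitsandbytes` tags suggest NF4-quantized weights, so `bitsandbytes` and `accelerate` need to be installed):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repo id from this listing; quantization settings are read from the saved config.
tok = AutoTokenizer.from_pretrained("Slamalama/gpt2-medium_nf4")
model = AutoModelForCausalLM.from_pretrained("Slamalama/gpt2-medium_nf4", device_map="auto")

inputs = tok("Hello, my name is", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=30)
print(tok.decode(out[0], skip_special_tokens=True))
```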
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
FlockJewels/ppo-LunarLander-v2 | FlockJewels | 2025-05-21T11:46:08Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2025-05-21T11:45:47Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: LunarLander-v2
      type: LunarLander-v2
    metrics:
    - type: mean_reward
      value: 255.49 +/- 22.77
      name: mean_reward
      verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
Below is a minimal, hedged sketch; the checkpoint filename is an assumption based on this repo's naming and is not confirmed by the card:
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# The filename "ppo-LunarLander-v2.zip" is an assumed default, not confirmed by the card.
checkpoint = load_from_hub("FlockJewels/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
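A short rollout sketch under the same assumptions (the environment id is inferred from the repo tags; gymnasium's `LunarLander-v2` needs the `box2d` extra installed):
```python
import gymnasium as gym

# Run the loaded policy for one episode.
env = gym.make("LunarLander-v2")
obs, _ = env.reset()
done = False
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    done = terminated or truncated
```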
|
MinaMila/llama_instbase_unlearned_ug_e-6_1.0_0.25_0.5_ep3_LoRa_Adult_cfda_ep5_22 | MinaMila | 2025-05-21T11:42:42Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-21T11:42:39Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
dimasik2987/41797499-5343-4d0a-a123-4063fcd94dc4 | dimasik2987 | 2025-05-21T11:42:00Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:MLP-KTLim/llama-3-Korean-Bllossom-8B",
"base_model:adapter:MLP-KTLim/llama-3-Korean-Bllossom-8B",
"license:llama3",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-05-21T11:06:03Z | ---
library_name: peft
license: llama3
base_model: MLP-KTLim/llama-3-Korean-Bllossom-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 41797499-5343-4d0a-a123-4063fcd94dc4
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: MLP-KTLim/llama-3-Korean-Bllossom-8B
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
  - f9b4a2d57d334137_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/
  type:
    field_input: post_text
    field_instruction: post_title
    field_output: comment_text
    format: '{instruction} {input}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
debug: null
deepspeed: null
dpo:
  beta: 0.1
  enabled: true
  group_by_length: false
  rank_loss: true
  reference_model: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: dimasik2987/41797499-5343-4d0a-a123-4063fcd94dc4
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 2.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.1
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 500
micro_batch_size: 10
mixed_precision: bf16
mlflow_experiment_name: /tmp/f9b4a2d57d334137_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 2
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 84fc5cc8-a9e3-4ad1-9254-b65c97b9fa66
wandb_project: s56-7
wandb_run: your_name
wandb_runid: 84fc5cc8-a9e3-4ad1-9254-b65c97b9fa66
warmup_steps: 50
weight_decay: 0.02
xformers_attention: true
```
</details><br>
# 41797499-5343-4d0a-a123-4063fcd94dc4
This model is a fine-tuned version of [MLP-KTLim/llama-3-Korean-Bllossom-8B](https://huggingface.co/MLP-KTLim/llama-3-Korean-Bllossom-8B) on an unspecified dataset (the Trainer did not record a dataset name).
It achieves the following results on the evaluation set:
- Loss: 2.6278
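Since the card ships no usage snippet, a hedged sketch of loading the adapter with 🤗 PEFT (base model and adapter ids come from this card's metadata; quantized loading as in training is optional):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Ids from this card's metadata; add 4-bit quantization kwargs to mirror training if desired.
base = AutoModelForCausalLM.from_pretrained("MLP-KTLim/llama-3-Korean-Bllossom-8B", device_map="auto")
model = PeftModel.from_pretrained(base, "dimasik2987/41797499-5343-4d0a-a123-4063fcd94dc4")
tok = AutoTokenizer.from_pretrained("MLP-KTLim/llama-3-Korean-Bllossom-8B")
```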
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-06
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: AdamW (8-bit, via bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 50
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.994 | 0.0001 | 1 | 2.9199 |
| 3.0153 | 0.0307 | 250 | 2.6558 |
| 2.4294 | 0.0613 | 500 | 2.6278 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
vamcrizer/model_test | vamcrizer | 2025-05-21T11:40:47Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-21T11:36:49Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
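Pending an official snippet, a minimal hedged sketch of chatting with the model (repo id from this listing; the `qwen3` and `conversational` tags imply the tokenizer bundles a chat template):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repo id from this listing; the tokenizer's chat template formats the conversation.
tok = AutoTokenizer.from_pretrained("vamcrizer/model_test")
model = AutoModelForCausalLM.from_pretrained("vamcrizer/model_test", device_map="auto")

messages = [{"role": "user", "content": "Hello!"}]
inputs = tok.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
out = model.generate(inputs, max_new_tokens=50)
print(tok.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```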
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
danthepol/MNLP_M2_document_encoder | danthepol | 2025-05-21T11:39:33Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:10481",
"loss:MultipleNegativesRankingLoss",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:BAAI/bge-base-en-v1.5",
"base_model:finetune:BAAI/bge-base-en-v1.5",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2025-05-21T11:38:49Z | ---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:10481
- loss:MultipleNegativesRankingLoss
base_model: BAAI/bge-base-en-v1.5
widget:
- source_sentence: What is a layer of saturated porous rock?
sentences:
- Current requires a source of voltage, which is a difference in electric potential
energy. Sources of voltage include chemical cells and solar cells.
- Give examples of energy conversions between potential and kinetic energy.
- An aquifer is a layer of saturated porous rock. It lies below the water table.
An impermeable layer, such as clay, is below the aquifer.
- source_sentence: What happens to gas solubility as the temperature increases?
sentences:
- A nonrenewable resource is one that cannot be replaced as easily as it is consumed.
Fossil fuels are an example of nonrenewable resources. They take millions of years
to form naturally, and so they cannot be replaced as fast as they are consumed.
To take the place of fossil fuel use, alternative energy resources are being developed.
These alternative energy sources often utilize renewable resources. The following
are examples of sustainable alternative energy resources:.
- Gas solubility decreases as the temperature increases.
- An electrolytic cell is the apparatus used for carrying out an electrolysis reaction.
In an electrolytic cell, electric current is applied to provide a source of electrons
for driving the reaction in a nonspontaneous direction. In a voltaic cell, the
reaction goes in a direction that releases electrons spontaneously. In an electrolytic
cell, the input of electrons from an external source forces the reaction to go
in the opposite direction.
- source_sentence: The sun and many other light sources produce waves that are randomly
this?
sentences:
- The Sun and many other light sources produce waves that are randomly polarized
(see Figure 27.39). Such light is said to be unpolarized because it is composed
of many waves with all possible directions of polarization. Polaroid materials,
invented by the founder of Polaroid Corporation, Edwin Land, act as a polarizing
slit for light, allowing only polarization in one direction to pass through. Polarizing
filters are composed of long molecules aligned in one direction. Thinking of the
molecules as many slits, analogous to those for the oscillating ropes, we can
understand why only light with a specific polarization can get through. The axis
of a polarizing filter is the direction along which the filter passes the electric
field of an EM wave (see Figure 27.40).
- When you look at the Moon from Earth, you notice dark and light areas. The maria
are dark, solid, flat areas of lava. Maria covers around 16% of the Moon’s surface,
mostly on the near side. The maria formed about 3.0 to 3.5 billion years ago,
when the Moon was continually bombarded by meteorites ( Figure below ). Large
meteorites broke through the Moon’s newly formed surface. This caused magma to
flow out and fill the craters. Scientists estimate volcanic activity on the Moon
ended about 1.2 billion years ago.
- The structures of the human eye collect and focus light. They form a reduced,
upside-down image on the retina at the back of the eye.
- source_sentence: The combined gradient that affects an ion includes its concentration
gradient and its what?
sentences:
- '5.3 Active Transport The combined gradient that affects an ion includes its concentration
gradient and its electrical gradient. A positive ion, for example, might tend
to diffuse into a new area, down its concentration gradient, but if it is diffusing
into an area of net positive charge, its diffusion will be hampered by its electrical
gradient. When dealing with ions in aqueous solutions, a combination of the electrochemical
and concentration gradients, rather than just the concentration gradient alone,
must be considered. Living cells need certain substances that exist inside the
cell in concentrations greater than they exist in the extracellular space. Moving
substances up their electrochemical gradients requires energy from the cell. Active
transport uses energy stored in ATP to fuel this transport. Active transport of
small molecular-sized materials uses integral proteins in the cell membrane to
move the materials: These proteins are analogous to pumps. Some pumps, which carry
out primary active transport, couple directly with ATP to drive their action.
In co-transport (or secondary active transport), energy from primary transport
can be used to move another substance into the cell and up its concentration gradient.'
- The development of new technology is called technological design . It is similar
to scientific investigation. Both processes use evidence and logic to solve problems.
- Oceans cover more than 70 percent of Earth's surface and hold 97 percent of its
surface water. It’s no surprise that the oceans have a big influence on the planet.
The oceans affect the atmosphere, climate, and living things.
- source_sentence: What are are segmented invertebrates in phylum annelida called?
sentences:
- Simple Model of DNA. In this simple model of DNA, each line represents a nucleotide
chain. The double helix shape forms when the two chains wrap around the same axis.
- '38.2 Bone Bone, or osseous tissue, is connective tissue that includes specialized
cells, mineral salts, and collagen fibers. The human skeleton can be divided into
long bones, short bones, flat bones, and irregular bones. Compact bone tissue
is composed of osteons and forms the external layer of all bones. Spongy bone
tissue is composed of trabeculae and forms the inner part of all bones. Four types
of cells compose bony tissue: osteocytes, osteoclasts, osteoprogenitor cells,
and osteoblasts. Ossification is the process of bone formation by osteoblasts.
Intramembranous ossification is the process of bone development from fibrous membranes.
Endochondral ossification is the process of bone development from hyaline cartilage.
Long bones lengthen as chondrocytes divide and secrete hyaline cartilage. Osteoblasts
replace cartilage with bone. Appositional growth is the increase in the diameter
of bones by the addition of bone tissue at the surface of bones. Bone remodeling
involves the processes of bone deposition by osteoblasts and bone resorption by
osteoclasts. Bone repair occurs in four stages and can take several months.'
- Annelids are segmented invertebrates in Phylum Annelida. They include earthworms,
polychaete worms, and leeches. Annelids have a coelom and several organ systems.
Their body segments may have a variety of different structures such as tentacles
or suckers. Annelids may be predators, parasites, filter feeders, or decomposers.
pipeline_tag: sentence-similarity
library_name: sentence-transformers
---
# SentenceTransformer based on BAAI/bge-base-en-v1.5
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) <!-- at revision a5beb1e3e68b9ab74eb54cfd186867f64f240e1a -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("danthepol/MNLP_M2_document_encoder")
# Run inference
sentences = [
'What are are segmented invertebrates in phylum annelida called?',
'Annelids are segmented invertebrates in Phylum Annelida. They include earthworms, polychaete worms, and leeches. Annelids have a coelom and several organ systems. Their body segments may have a variety of different structures such as tentacles or suckers. Annelids may be predators, parasites, filter feeders, or decomposers.',
'38.2 Bone Bone, or osseous tissue, is connective tissue that includes specialized cells, mineral salts, and collagen fibers. The human skeleton can be divided into long bones, short bones, flat bones, and irregular bones. Compact bone tissue is composed of osteons and forms the external layer of all bones. Spongy bone tissue is composed of trabeculae and forms the inner part of all bones. Four types of cells compose bony tissue: osteocytes, osteoclasts, osteoprogenitor cells, and osteoblasts. Ossification is the process of bone formation by osteoblasts. Intramembranous ossification is the process of bone development from fibrous membranes. Endochondral ossification is the process of bone development from hyaline cartilage. Long bones lengthen as chondrocytes divide and secrete hyaline cartilage. Osteoblasts replace cartilage with bone. Appositional growth is the increase in the diameter of bones by the addition of bone tissue at the surface of bones. Bone remodeling involves the processes of bone deposition by osteoblasts and bone resorption by osteoclasts. Bone repair occurs in four stages and can take several months.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
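Because the training pairs are question→passage, retrieval is a natural downstream use; a hedged sketch (repo id filled in from this listing, documents are illustrative):
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("danthepol/MNLP_M2_document_encoder")
query_emb = model.encode("What is an aquifer?", convert_to_tensor=True)
doc_embs = model.encode([
    "An aquifer is a layer of saturated porous rock. It lies below the water table.",
    "Gas solubility decreases as the temperature increases.",
], convert_to_tensor=True)

# Rank the documents by cosine similarity to the query.
hits = util.semantic_search(query_emb, doc_embs, top_k=2)
print(hits[0])
```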
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 10,481 training samples
* Columns: <code>sentence_0</code> and <code>sentence_1</code>
* Approximate statistics based on the first 1000 samples:
| | sentence_0 | sentence_1 |
|:--------|:----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 7 tokens</li><li>mean: 17.94 tokens</li><li>max: 59 tokens</li></ul> | <ul><li>min: 10 tokens</li><li>mean: 100.79 tokens</li><li>max: 512 tokens</li></ul> |
* Samples:
| sentence_0 | sentence_1 |
|:-----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>Vitamin d is made in the skin when it is exposed to what?</code> | <code>Vitamins are organic compounds that the body needs in small amounts to function properly. Humans need 16 different vitamins. Six of them are listed in Table below . Vitamin D is made in the skin when it is exposed to sunlight. Bacteria that normally live in the gut make vitamins B12 and K. All other vitamins must come from food. The table shows good food sources of the vitamins.</code> |
| <code>What is the process of the blastula forming 3 layers of cells called?</code> | <code>Gastrulation The typical blastula is a ball of cells. The next stage in embryonic development is the formation of the body plan. The cells in the blastula rearrange themselves spatially to form three layers of cells. This process is called gastrulation. During gastrulation, the blastula folds upon itself to form the three layers of cells. Each of these layers is called a germ layer and each germ layer differentiates into different organ systems. The three germs layers, shown in Figure 43.26, are the endoderm, the ectoderm, and the mesoderm. The ectoderm gives rise to the nervous system and the epidermis. The mesoderm gives rise to the muscle cells and connective tissue in the body. The endoderm gives rise to columnar cells found in the digestive system and many internal organs.</code> |
| <code>Microscopes were first developed in the early 1600s by this trade?</code> | <code>Microscopes were first developed in the early 1600s by eyeglass makers in The Netherlands and Denmark. The simplest compound microscope is constructed from two convex lenses as shown schematically in Figure 26.16. The first lens is called the objective lens, and has typical magnification values from 5× to 100× . In standard microscopes, the objectives are mounted such that when you switch between objectives, the sample remains in focus. Objectives arranged in this way are described as parfocal. The second, the eyepiece, also referred to as the ocular, has several lenses which slide inside a cylindrical barrel. The focusing ability is provided by the movement of both the objective lens and the eyepiece. The purpose of a microscope is to magnify small objects, and both lenses contribute to the final magnification. Additionally, the final enlarged image is produced in a location far enough from the observer to be easily viewed, since the eye cannot focus on objects or images that are too ...</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `multi_dataset_batch_sampler`: round_robin
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 8
- `per_device_eval_batch_size`: 8
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 3
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin
</details>
### Training Logs
| Epoch | Step | Training Loss |
|:------:|:----:|:-------------:|
| 0.3814 | 500 | 0.0735 |
| 0.7628 | 1000 | 0.0541 |
| 1.1442 | 1500 | 0.0422 |
| 1.5256 | 2000 | 0.0198 |
| 1.9069 | 2500 | 0.0241 |
| 2.2883 | 3000 | 0.0127 |
| 2.6697 | 3500 | 0.0084 |
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 4.1.0
- Transformers: 4.52.1
- PyTorch: 2.1.0+cu118
- Accelerate: 1.7.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
Mira123hs/distilbert-base-uncased-finetuned-emotion | Mira123hs | 2025-05-21T11:39:31Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-05-21T10:52:00Z | ---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1511
- Accuracy: 0.9395
- F1: 0.9395
## Model description
More information needed
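As a stopgap, a hedged sketch of running the classifier with the `transformers` pipeline (repo id from this listing; the task comes from the `text-classification` tag, and the example input is illustrative):
```python
from transformers import pipeline

# Repo id from this listing; the label set depends on the fine-tuning dataset.
clf = pipeline("text-classification", model="Mira123hs/distilbert-base-uncased-finetuned-emotion")
print(clf("I am so happy today!"))
```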
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: AdamW (torch implementation) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.1148 | 1.0 | 250 | 0.1755 | 0.9315 | 0.9319 |
| 0.1103 | 2.0 | 500 | 0.1511 | 0.9395 | 0.9395 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
zozrzoornoch/vxv | zozrzoornoch | 2025-05-21T11:39:17Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2025-05-21T11:39:17Z | ---
license: creativeml-openrail-m
---
|
nazrinburz/distilbert-base-uncased-finetuned-emotion | nazrinburz | 2025-05-21T11:39:11Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-05-21T10:51:31Z | ---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2245
- Accuracy: 0.923
- F1: 0.9228
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: AdamW (torch implementation) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8663 | 1.0 | 250 | 0.3327 | 0.905 | 0.9042 |
| 0.2673 | 2.0 | 500 | 0.2245 | 0.923 | 0.9228 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
D1zzYzz/alpaca-1p | D1zzYzz | 2025-05-21T11:38:37Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-21T11:32:21Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Stuti103/healthtrip-llama3-medmodel-v2_part9 | Stuti103 | 2025-05-21T11:38:23Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-21T11:38:18Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
idontknowitsbrad/MaskSymmetryWork | idontknowitsbrad | 2025-05-21T11:38:08Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-05-02T08:39:25Z | ---
license: apache-2.0
---
Best Mask2Former weights trained on the Jersey Royal Dataset.
Best Mask R-CNN weights trained on the Jersey Royal Dataset.
Both models are implemented in detectron2; the weights should load into the Mask R-CNN and Mask2Former implementations in detectron2.
Test Results:
Mask2Former: mAP 0.591, mAP@50 0.832, mAP@75 0.617
Mask R-CNN: mAP 0.803, mAP@50 0.917, mAP@75 0.845
|
yamatazen/LinearWriter-12B | yamatazen | 2025-05-21T11:38:01Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"linear",
"conversational",
"en",
"ja",
"arxiv:2203.05482",
"base_model:Elizezen/Himeyuri-v0.1-12B",
"base_model:merge:Elizezen/Himeyuri-v0.1-12B",
"base_model:natong19/Mistral-Nemo-Instruct-2407-abliterated",
"base_model:merge:natong19/Mistral-Nemo-Instruct-2407-abliterated",
"base_model:nbeerbower/mistral-nemo-gutenberg-12B-v4",
"base_model:merge:nbeerbower/mistral-nemo-gutenberg-12B-v4",
"base_model:shisa-ai/shisa-v2-mistral-nemo-12b",
"base_model:merge:shisa-ai/shisa-v2-mistral-nemo-12b",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-21T10:30:20Z | ---
base_model:
- shisa-ai/shisa-v2-mistral-nemo-12b
- nbeerbower/mistral-nemo-gutenberg-12B-v4
- natong19/Mistral-Nemo-Instruct-2407-abliterated
- Elizezen/Himeyuri-v0.1-12B
library_name: transformers
tags:
- mergekit
- merge
- linear
language:
- en
- ja
---

# LinearWriter-12B
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Linear](https://arxiv.org/abs/2203.05482) merge method.
### Models Merged
The following models were included in the merge:
* [shisa-ai/shisa-v2-mistral-nemo-12b](https://huggingface.co/shisa-ai/shisa-v2-mistral-nemo-12b)
* [nbeerbower/mistral-nemo-gutenberg-12B-v4](https://huggingface.co/nbeerbower/mistral-nemo-gutenberg-12B-v4)
* [natong19/Mistral-Nemo-Instruct-2407-abliterated](https://huggingface.co/natong19/Mistral-Nemo-Instruct-2407-abliterated)
* [Elizezen/Himeyuri-v0.1-12B](https://huggingface.co/Elizezen/Himeyuri-v0.1-12B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
merge_method: linear
dtype: bfloat16
out_dtype: bfloat16
models:
- model: natong19/Mistral-Nemo-Instruct-2407-abliterated # Uncensor
parameters:
weight: 1.0
- model: nbeerbower/mistral-nemo-gutenberg-12B-v4 # Writing
parameters:
weight: [0.25, 0.3, 0.5, 0.6, 0.75]
- model: Elizezen/Himeyuri-v0.1-12B # Japanese
parameters:
weight: [0.25, 0.3, 0.6, 0.3, 0.25]
- model: shisa-ai/shisa-v2-mistral-nemo-12b # Japanese
parameters:
weight: [0.25, 0.3, 0.5, 0.3, 0.25]
``` |
markus13/ENGORILE | markus13 | 2025-05-21T11:37:53Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-05-21T10:45:13Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: ENGORILE
---
# Engorile
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `ENGORILE` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
    "prompt": "ENGORILE",
    "lora_weights": "https://huggingface.co/markus13/ENGORILE/resolve/main/lora.safetensors"
}

output = replicate.run(
    "black-forest-labs/flux-dev-lora",
    input=input
)

for index, item in enumerate(output):
    with open(f"output_{index}.webp", "wb") as file:
        file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('markus13/ENGORILE', weight_name='lora.safetensors')
image = pipeline('ENGORILE').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 1000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/markus13/ENGORILE/discussions) to add images that show off what you’ve made with this LoRA.
|
syr99/initial_noise_loader | syr99 | 2025-05-21T11:36:32Z | 0 | 0 | null | [
"arxiv:2411.09502",
"region:us"
] | null | 2025-05-20T20:01:29Z | ---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This custom pipeline integrates the work of [Golden Noise for Diffusion Models: A Learning Framework](https://arxiv.org/pdf/2411.09502).
NPNet uses a network that acts as a noise prompt: it transforms the random Gaussian noise into golden noise by adding a small, desirable perturbation derived from the text prompt, boosting the overall quality and semantic faithfulness of the generated images.
This approach improves metrics such as PickScore, HPSv2, Aesthetic Score, and ImageReward at very limited additional inference and computational cost, since inference starts from golden noise instead of random Gaussian noise. For more information, check the paper.
This pipeline uses the SDXL model and generates images at 1024x1024 resolution.
## Requirements
```pip install timm einops```
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
```
import torch
from diffusers import DiffusionPipeline

device = "cuda" if torch.cuda.is_available() else "cpu"

custom_pipeline = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    variant="fp16",
    torch_dtype=torch.float16,
    use_safetensors=True,
    custom_pipeline="syr99/initial_noise_loader"
).to(device)
```
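Generation then works like a regular SDXL call; here is a minimal sketch (assumption: the custom pipeline keeps the standard SDXL `__call__` signature and applies the golden-noise transform internally):
```
prompt = "An astronaut riding a horse on the moon, highly detailed"
image = custom_pipeline(prompt, num_inference_steps=30, guidance_scale=7.0).images[0]
image.save("golden_noise_sample.png")
```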
The class `NoiseLoaderMixin` can also serve as a base for other methods that initialize the noise differently at the first timestep.
## Citation
Code adapted from [GitHub](https://github.com/xie-lab-ml/Golden-Noise-for-Diffusion-Models).
```
@misc{zhou2024goldennoisediffusionmodels,
title={Golden Noise for Diffusion Models: A Learning Framework},
author={Zikai Zhou and Shitong Shao and Lichen Bai and Zhiqiang Xu and Bo Han and Zeke Xie},
year={2024},
eprint={2411.09502},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2411.09502},
}
```
|
xw17/Llama-3.2-1B-Instruct_finetuned_3_optimized1_task_grouping_off_FT | xw17 | 2025-05-21T11:33:16Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-21T11:31:44Z | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
fkuzmichev/fbkzm-style | fkuzmichev | 2025-05-21T11:32:46Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"text-to-image",
"lora",
"fal",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-05-21T11:32:38Z | ---
tags:
- flux
- text-to-image
- lora
- diffusers
- fal
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: fbkzm
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# fbkzm style
<Gallery />
## Model description
## Trigger words
You should use `fbkzm` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/fkuzmichev/fbkzm-style/tree/main) them in the Files & versions tab.
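## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
A minimal loading sketch (assumptions: the checkpoint is a diffusers-compatible FLUX LoRA and the weights file is named `lora.safetensors`):
```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained(
    'black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16
).to('cuda')
# Attach the LoRA weights from this repository.
pipeline.load_lora_weights('fkuzmichev/fbkzm-style', weight_name='lora.safetensors')
image = pipeline('fbkzm').images[0]
```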
## Training at fal.ai
Training was done using [fal.ai/models/fal-ai/flux-lora-fast-training](https://fal.ai/models/fal-ai/flux-lora-fast-training).
|
masmatix/Llama-3.1-Nemotron-Nano-4B-v1.1-Q4_K_M-GGUF | masmatix | 2025-05-21T11:31:51Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"nvidia",
"llama-3",
"pytorch",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"dataset:nvidia/Llama-Nemotron-Post-Training-Dataset",
"base_model:nvidia/Llama-3.1-Nemotron-Nano-4B-v1.1",
"base_model:quantized:nvidia/Llama-3.1-Nemotron-Nano-4B-v1.1",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-05-21T11:31:36Z | ---
library_name: transformers
license: other
license_name: nvidia-open-model-license
license_link: https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license/
pipeline_tag: text-generation
language:
- en
tags:
- nvidia
- llama-3
- pytorch
- llama-cpp
- gguf-my-repo
base_model: nvidia/Llama-3.1-Nemotron-Nano-4B-v1.1
datasets:
- nvidia/Llama-Nemotron-Post-Training-Dataset
---
# masmatix/Llama-3.1-Nemotron-Nano-4B-v1.1-Q4_K_M-GGUF
This model was converted to GGUF format from [`nvidia/Llama-3.1-Nemotron-Nano-4B-v1.1`](https://huggingface.co/nvidia/Llama-3.1-Nemotron-Nano-4B-v1.1) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/nvidia/Llama-3.1-Nemotron-Nano-4B-v1.1) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo masmatix/Llama-3.1-Nemotron-Nano-4B-v1.1-Q4_K_M-GGUF --hf-file llama-3.1-nemotron-nano-4b-v1.1-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo masmatix/Llama-3.1-Nemotron-Nano-4B-v1.1-Q4_K_M-GGUF --hf-file llama-3.1-nemotron-nano-4b-v1.1-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo masmatix/Llama-3.1-Nemotron-Nano-4B-v1.1-Q4_K_M-GGUF --hf-file llama-3.1-nemotron-nano-4b-v1.1-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo masmatix/Llama-3.1-Nemotron-Nano-4B-v1.1-Q4_K_M-GGUF --hf-file llama-3.1-nemotron-nano-4b-v1.1-q4_k_m.gguf -c 2048
```
|
spencer21/dreamOINT8 | spencer21 | 2025-05-21T11:31:06Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-05-21T11:01:53Z | ---
license: apache-2.0
---
|
mohhtl/193e0f51-3ab4-48bc-a11c-0a5affdeb275 | mohhtl | 2025-05-21T11:30:04Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"generated_from_trainer",
"dataset:train.json",
"base_model:MLP-KTLim/llama-3-Korean-Bllossom-8B",
"base_model:adapter:MLP-KTLim/llama-3-Korean-Bllossom-8B",
"license:llama3",
"region:us"
] | null | 2025-05-21T10:51:20Z | ---
library_name: peft
license: llama3
base_model: MLP-KTLim/llama-3-Korean-Bllossom-8B
tags:
- generated_from_trainer
datasets:
- train.json
model-index:
- name: 193e0f51-3ab4-48bc-a11c-0a5affdeb275
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.9.2`
```yaml
adapter: lora
base_model: MLP-KTLim/llama-3-Korean-Bllossom-8B
bf16: auto
dataset_prepared_path: 84fc5cc8-a9e3-4ad1-9254-b65c97b9fa66_last_run_prepared
datasets:
  - path: train.json
    type:
      field: null
      field_input: post_text
      field_instruction: post_title
      field_output: comment_text
      field_system: null
      format: null
      no_input_format: null
      system_format: '{system}'
      system_prompt: ''
flash_attention: true
gradient_accumulation_steps: 4
gradient_checkpointing: true
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_model_dir: null
lora_modules_to_save:
- embed_tokens
- lm_head
lora_r: 32
lora_target_linear: true
lr_scheduler: constant
micro_batch_size: 2
model_type: LlamaForCausalLM
num_epochs: 15
optimizer: adamw_bnb_8bit
output_dir: 193e0f51-3ab4-48bc-a11c-0a5affdeb275
pad_to_sequence_len: true
resume_from_checkpoint: null
sample_packing: true
save_epochs: 1
save_strategy: 'no'
save_total_limit: 1
saves_per_epoch: 1
sequence_len: 4096
special_tokens:
  pad_token: <|end_of_text|>
tf32: false
tokenizer_type: AutoTokenizer
val_set_size: 0.0
wandb_entity: null
wandb_log_model: null
wandb_name: null
wandb_project: null
wandb_watch: null
warmup_ratio: 0.0
warmup_steps: 0
weight_decay: 0.0
```
</details><br>
# 193e0f51-3ab4-48bc-a11c-0a5affdeb275
This model is a fine-tuned version of [MLP-KTLim/llama-3-Korean-Bllossom-8B](https://huggingface.co/MLP-KTLim/llama-3-Korean-Bllossom-8B) on the train.json dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: constant
- num_epochs: 15.0
### Training results
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.4.1+cu124
- Datasets 3.5.1
- Tokenizers 0.21.1 |
WWizrd/WillysNoobRealism | WWizrd | 2025-05-21T11:29:15Z | 0 | 1 | null | [
"license:other",
"region:us"
] | null | 2025-01-15T10:45:24Z | ---
license: other
license_name: fair-ai-public-license-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
---
# Images are from v1. Image examples for v2, HD v1, HD v2, and HD v3 are available on CivitAI!
https://civitai.com/models/1143097/willys-noob-realism
<img src="https://cdn-uploads.huggingface.co/production/uploads/6411f2531e42164b9f0facef/Y6LE4vwpqYu42n0kFBM-w.png" width="512" height="640">
This model and its version(s) were merged for the sole purpose of finally having a decent, consistent, out-of-the-box realism model based on Noob.
## Recommended Settings:
TL;DR: It's very simple: just prompt however you usually would for Noob, but add "realistic" and "photorealism" after your quality tags.
I suggest using DPM++ 2M with the Simple or Beta scheduler, at a CFG of 5-7 and 20-50 steps.
Here's a positive prompt example:
`<Quality tags>, realistic, photorealism, <character tags>, <pose tags>, <background tags>`
And here's a filled in example:
`masterpiece, best quality, highres, absurdres, (realistic, photorealistic), 1girl, solo, anthro, wolf, grey fur, posing, bed room`
And here is my recommended negative prompt:
`worst quality, low quality, normal quality, monochrome, bad anatomy, ugly, (illustration, cartoon, anime), flat color, outline, smooth skin, blurry,`
These are my personal recommendations, but feel free to experiment and use whatever settings work for you!
## Other information:
This model was merged with furries in mind, and I have not extensively tested the merge with human characters. I have, however, extensively tested it with all kinds of unique species, along with its compatibility with Noob LoRAs; from my testing, most species, character, and other related LoRAs will work perfectly fine with this merge.
I plan on making better versions of this over time as more resources come out, or if I have time to make some realism or other LoRAs to fortify it.
If this model has any weaknesses, it's that it tends to hallucinate in backgrounds quite frequently, and I have yet to find a really good way to upscale its images without losing a small bit of quality.
xw17/Llama-3.2-1B-Instruct_finetuned_2_optimized1_task_grouping_off_FT | xw17 | 2025-05-21T11:27:38Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-21T11:25:44Z | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
simon-muenker/TWON-Agent-OSN-Replies-de | simon-muenker | 2025-05-21T11:27:18Z | 0 | 0 | peft | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:meta-llama/Llama-3.2-3B-Instruct",
"base_model:adapter:meta-llama/Llama-3.2-3B-Instruct",
"license:llama3.2",
"endpoints_compatible",
"region:us"
] | null | 2025-01-14T09:27:28Z | ---
library_name: peft
license: llama3.2
base_model: meta-llama/Llama-3.2-3B-Instruct
tags:
- trl
- sft
- generated_from_trainer
datasets:
- generator
model-index:
- name: TWON-Agent-OSN-Replies-de
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# TWON-Agent-OSN-Replies-de
This model is a fine-tuned version of [meta-llama/Llama-3.2-3B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5430
## Model description
More information needed
## Intended uses & limitations
More information needed
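Since the repository stores a PEFT LoRA adapter rather than full weights, here is a minimal loading sketch (assumption: the adapter sits at the repo root in the standard PEFT layout):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model, then attach the fine-tuned adapter on top.
base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.2-3B-Instruct")
model = PeftModel.from_pretrained(base, "simon-muenker/TWON-Agent-OSN-Replies-de")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.2-3B-Instruct")
```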
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:-----:|:---------------:|
| 2.5602 | 0.0423 | 200 | 2.3015 |
| 2.1761 | 0.0845 | 400 | 2.0341 |
| 1.8998 | 0.1268 | 600 | 1.7808 |
| 1.6962 | 0.1691 | 800 | 1.6139 |
| 1.5494 | 0.2114 | 1000 | 1.4910 |
| 1.4478 | 0.2536 | 1200 | 1.4047 |
| 1.3865 | 0.2959 | 1400 | 1.3415 |
| 1.3407 | 0.3382 | 1600 | 1.2891 |
| 1.2808 | 0.3805 | 1800 | 1.2429 |
| 1.2365 | 0.4227 | 2000 | 1.2054 |
| 1.2058 | 0.4650 | 2200 | 1.1766 |
| 1.1799 | 0.5073 | 2400 | 1.1484 |
| 1.147 | 0.5496 | 2600 | 1.1282 |
| 1.1235 | 0.5918 | 2800 | 1.1087 |
| 1.1228 | 0.6341 | 3000 | 1.0923 |
| 1.0935 | 0.6764 | 3200 | 1.0765 |
| 1.0813 | 0.7187 | 3400 | 1.0629 |
| 1.0671 | 0.7609 | 3600 | 1.0504 |
| 1.0448 | 0.8032 | 3800 | 1.0390 |
| 1.041 | 0.8455 | 4000 | 1.0293 |
| 1.0167 | 0.8878 | 4200 | 1.0196 |
| 1.0293 | 0.9300 | 4400 | 1.0091 |
| 1.0254 | 0.9723 | 4600 | 1.0002 |
| 1.0053 | 1.0146 | 4800 | 0.9917 |
| 0.9957 | 1.0569 | 5000 | 0.9833 |
| 0.9976 | 1.0991 | 5200 | 0.9772 |
| 0.9891 | 1.1414 | 5400 | 0.9708 |
| 0.9648 | 1.1837 | 5600 | 0.9622 |
| 0.9711 | 1.2260 | 5800 | 0.9551 |
| 0.9569 | 1.2682 | 6000 | 0.9497 |
| 0.9526 | 1.3105 | 6200 | 0.9433 |
| 0.9571 | 1.3528 | 6400 | 0.9362 |
| 0.9265 | 1.3951 | 6600 | 0.9310 |
| 0.9274 | 1.4373 | 6800 | 0.9229 |
| 0.929 | 1.4796 | 7000 | 0.9190 |
| 0.9224 | 1.5219 | 7200 | 0.9128 |
| 0.9051 | 1.5642 | 7400 | 0.9071 |
| 0.9066 | 1.6064 | 7600 | 0.9014 |
| 0.9067 | 1.6487 | 7800 | 0.8962 |
| 0.8986 | 1.6910 | 8000 | 0.8905 |
| 0.8967 | 1.7332 | 8200 | 0.8847 |
| 0.8883 | 1.7755 | 8400 | 0.8806 |
| 0.8844 | 1.8178 | 8600 | 0.8745 |
| 0.8833 | 1.8601 | 8800 | 0.8710 |
| 0.8805 | 1.9023 | 9000 | 0.8649 |
| 0.8722 | 1.9446 | 9200 | 0.8600 |
| 0.868 | 1.9869 | 9400 | 0.8552 |
| 0.8597 | 2.0292 | 9600 | 0.8499 |
| 0.8527 | 2.0714 | 9800 | 0.8456 |
| 0.8444 | 2.1137 | 10000 | 0.8410 |
| 0.8472 | 2.1560 | 10200 | 0.8363 |
| 0.8416 | 2.1983 | 10400 | 0.8321 |
| 0.8395 | 2.2405 | 10600 | 0.8271 |
| 0.8295 | 2.2828 | 10800 | 0.8221 |
| 0.8307 | 2.3251 | 11000 | 0.8173 |
| 0.8221 | 2.3674 | 11200 | 0.8130 |
| 0.8144 | 2.4096 | 11400 | 0.8086 |
| 0.8197 | 2.4519 | 11600 | 0.8048 |
| 0.8121 | 2.4942 | 11800 | 0.8003 |
| 0.8038 | 2.5365 | 12000 | 0.7953 |
| 0.8007 | 2.5787 | 12200 | 0.7917 |
| 0.808 | 2.6210 | 12400 | 0.7878 |
| 0.8047 | 2.6633 | 12600 | 0.7827 |
| 0.7908 | 2.7056 | 12800 | 0.7795 |
| 0.7963 | 2.7478 | 13000 | 0.7755 |
| 0.782 | 2.7901 | 13200 | 0.7718 |
| 0.7899 | 2.8324 | 13400 | 0.7664 |
| 0.7712 | 2.8747 | 13600 | 0.7623 |
| 0.7737 | 2.9169 | 13800 | 0.7581 |
| 0.784 | 2.9592 | 14000 | 0.7553 |
| 0.7661 | 3.0015 | 14200 | 0.7515 |
| 0.759 | 3.0438 | 14400 | 0.7478 |
| 0.7534 | 3.0860 | 14600 | 0.7442 |
| 0.7526 | 3.1283 | 14800 | 0.7398 |
| 0.7526 | 3.1706 | 15000 | 0.7365 |
| 0.7413 | 3.2129 | 15200 | 0.7344 |
| 0.746 | 3.2551 | 15400 | 0.7296 |
| 0.7469 | 3.2974 | 15600 | 0.7264 |
| 0.7384 | 3.3397 | 15800 | 0.7221 |
| 0.7357 | 3.3819 | 16000 | 0.7191 |
| 0.7298 | 3.4242 | 16200 | 0.7173 |
| 0.7245 | 3.4665 | 16400 | 0.7122 |
| 0.7283 | 3.5088 | 16600 | 0.7087 |
| 0.7333 | 3.5510 | 16800 | 0.7062 |
| 0.7252 | 3.5933 | 17000 | 0.7040 |
| 0.7242 | 3.6356 | 17200 | 0.6987 |
| 0.7174 | 3.6779 | 17400 | 0.6956 |
| 0.7132 | 3.7201 | 17600 | 0.6931 |
| 0.7093 | 3.7624 | 17800 | 0.6898 |
| 0.7027 | 3.8047 | 18000 | 0.6869 |
| 0.7177 | 3.8470 | 18200 | 0.6838 |
| 0.707 | 3.8892 | 18400 | 0.6805 |
| 0.7091 | 3.9315 | 18600 | 0.6786 |
| 0.7031 | 3.9738 | 18800 | 0.6749 |
| 0.6913 | 4.0161 | 19000 | 0.6723 |
| 0.6895 | 4.0583 | 19200 | 0.6697 |
| 0.6858 | 4.1006 | 19400 | 0.6666 |
| 0.678 | 4.1429 | 19600 | 0.6645 |
| 0.6852 | 4.1852 | 19800 | 0.6622 |
| 0.6787 | 4.2274 | 20000 | 0.6586 |
| 0.6784 | 4.2697 | 20200 | 0.6568 |
| 0.6771 | 4.3120 | 20400 | 0.6528 |
| 0.6697 | 4.3543 | 20600 | 0.6509 |
| 0.6698 | 4.3965 | 20800 | 0.6481 |
| 0.6792 | 4.4388 | 21000 | 0.6455 |
| 0.6741 | 4.4811 | 21200 | 0.6436 |
| 0.6582 | 4.5234 | 21400 | 0.6402 |
| 0.6648 | 4.5656 | 21600 | 0.6380 |
| 0.6606 | 4.6079 | 21800 | 0.6363 |
| 0.6598 | 4.6502 | 22000 | 0.6341 |
| 0.6696 | 4.6925 | 22200 | 0.6312 |
| 0.6604 | 4.7347 | 22400 | 0.6298 |
| 0.6611 | 4.7770 | 22600 | 0.6274 |
| 0.6515 | 4.8193 | 22800 | 0.6260 |
| 0.6528 | 4.8616 | 23000 | 0.6228 |
| 0.6557 | 4.9038 | 23200 | 0.6201 |
| 0.6473 | 4.9461 | 23400 | 0.6184 |
| 0.6506 | 4.9884 | 23600 | 0.6168 |
| 0.6387 | 5.0306 | 23800 | 0.6146 |
| 0.638 | 5.0729 | 24000 | 0.6139 |
| 0.6389 | 5.1152 | 24200 | 0.6111 |
| 0.641 | 5.1575 | 24400 | 0.6103 |
| 0.6278 | 5.1997 | 24600 | 0.6080 |
| 0.6332 | 5.2420 | 24800 | 0.6068 |
| 0.6214 | 5.2843 | 25000 | 0.6047 |
| 0.6325 | 5.3266 | 25200 | 0.6020 |
| 0.6312 | 5.3688 | 25400 | 0.6000 |
| 0.6278 | 5.4111 | 25600 | 0.5979 |
| 0.6237 | 5.4534 | 25800 | 0.5962 |
| 0.6263 | 5.4957 | 26000 | 0.5942 |
| 0.6228 | 5.5379 | 26200 | 0.5938 |
| 0.625 | 5.5802 | 26400 | 0.5919 |
| 0.6271 | 5.6225 | 26600 | 0.5905 |
| 0.6206 | 5.6648 | 26800 | 0.5880 |
| 0.6204 | 5.7070 | 27000 | 0.5872 |
| 0.617 | 5.7493 | 27200 | 0.5857 |
| 0.6138 | 5.7916 | 27400 | 0.5844 |
| 0.6216 | 5.8339 | 27600 | 0.5817 |
| 0.6122 | 5.8761 | 27800 | 0.5805 |
| 0.6186 | 5.9184 | 28000 | 0.5793 |
| 0.6164 | 5.9607 | 28200 | 0.5783 |
| 0.6117 | 6.0030 | 28400 | 0.5769 |
| 0.6068 | 6.0452 | 28600 | 0.5749 |
| 0.6034 | 6.0875 | 28800 | 0.5740 |
| 0.6085 | 6.1298 | 29000 | 0.5720 |
| 0.6068 | 6.1721 | 29200 | 0.5710 |
| 0.603 | 6.2143 | 29400 | 0.5697 |
| 0.5979 | 6.2566 | 29600 | 0.5694 |
| 0.5941 | 6.2989 | 29800 | 0.5686 |
| 0.6022 | 6.3412 | 30000 | 0.5673 |
| 0.5999 | 6.3834 | 30200 | 0.5656 |
| 0.6064 | 6.4257 | 30400 | 0.5645 |
| 0.5982 | 6.4680 | 30600 | 0.5633 |
| 0.5863 | 6.5103 | 30800 | 0.5630 |
| 0.5942 | 6.5525 | 31000 | 0.5619 |
| 0.5969 | 6.5948 | 31200 | 0.5607 |
| 0.595 | 6.6371 | 31400 | 0.5600 |
| 0.6004 | 6.6793 | 31600 | 0.5583 |
| 0.593 | 6.7216 | 31800 | 0.5584 |
| 0.5903 | 6.7639 | 32000 | 0.5579 |
| 0.5917 | 6.8062 | 32200 | 0.5568 |
| 0.5946 | 6.8484 | 32400 | 0.5563 |
| 0.5876 | 6.8907 | 32600 | 0.5560 |
| 0.593 | 6.9330 | 32800 | 0.5543 |
| 0.5848 | 6.9753 | 33000 | 0.5527 |
| 0.5865 | 7.0175 | 33200 | 0.5523 |
| 0.5872 | 7.0598 | 33400 | 0.5524 |
| 0.583 | 7.1021 | 33600 | 0.5511 |
| 0.5838 | 7.1444 | 33800 | 0.5512 |
| 0.5819 | 7.1866 | 34000 | 0.5502 |
| 0.5833 | 7.2289 | 34200 | 0.5492 |
| 0.5826 | 7.2712 | 34400 | 0.5493 |
| 0.5813 | 7.3135 | 34600 | 0.5487 |
| 0.5815 | 7.3557 | 34800 | 0.5477 |
| 0.582 | 7.3980 | 35000 | 0.5472 |
| 0.5729 | 7.4403 | 35200 | 0.5466 |
| 0.5769 | 7.4826 | 35400 | 0.5457 |
| 0.5864 | 7.5248 | 35600 | 0.5457 |
| 0.5853 | 7.5671 | 35800 | 0.5453 |
| 0.5771 | 7.6094 | 36000 | 0.5449 |
| 0.5786 | 7.6517 | 36200 | 0.5447 |
| 0.5839 | 7.6939 | 36400 | 0.5443 |
| 0.5759 | 7.7362 | 36600 | 0.5435 |
| 0.5804 | 7.7785 | 36800 | 0.5436 |
| 0.5826 | 7.8208 | 37000 | 0.5437 |
| 0.5829 | 7.8630 | 37200 | 0.5434 |
| 0.574 | 7.9053 | 37400 | 0.5431 |
| 0.5756 | 7.9476 | 37600 | 0.5432 |
| 0.5722 | 7.9899 | 37800 | 0.5430 |
### Framework versions
- PEFT 0.12.0
- Transformers 4.47.1
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0 |
climatebert/econbert | climatebert | 2025-05-21T11:25:05Z | 0 | 0 | transformers | [
"transformers",
"economics",
"finance",
"bert",
"language-model",
"financial-nlp",
"economic-analysis",
"fill-mask",
"en",
"dataset:custom_economic_corpus",
"license:mit",
"endpoints_compatible",
"region:us"
] | fill-mask | 2025-05-21T10:09:55Z | ---
language: en
license: mit
library_name: transformers
tags:
- economics
- finance
- bert
- language-model
- financial-nlp
- economic-analysis
datasets:
- custom_economic_corpus
metrics:
- accuracy
- f1
- precision
- recall
pipeline_tag: fill-mask
---
# EconBERT
## Model Description
EconBERT is a BERT-based language model specifically fine-tuned for economic and financial text analysis. The model is designed to capture domain-specific language patterns, terminology, and contextual relationships in economic literature, research papers, financial reports, and related documents.
> **Note**: The complete details of model architecture, training methodology, evaluation, and performance metrics are available in our paper. Please refer to the citation section below.
## Intended Uses & Limitations
### Intended Uses
- **Economic Text Classification**: Categorizing economic documents, papers, or news articles
- **Sentiment Analysis**: Analyzing market sentiment in financial news and reports
- **Information Extraction**: Extracting structured data from unstructured economic texts
- Other related economic and financial NLP tasks
### Limitations
- The model is specialized for economic and financial domains and may not perform as well on general text
- Performance may vary on highly technical economic sub-domains not well-represented in the training data
- For detailed discussion of limitations, please refer to our paper
## Training Data
EconBERT was trained on a large corpus of economic and financial texts. For comprehensive information about the training data, including sources, size, preprocessing steps, and other details, please refer to our paper.
## Evaluation Results
We evaluated EconBERT on several economic NLP tasks and compared its performance with general-purpose and other domain-specific models. The detailed evaluation methodology and complete results are available in our paper.
Key findings include:
- Improved performance on economic domain tasks compared to general BERT models
- State-of-the-art results on [specific tasks, if applicable]
- [Any other high-level results worth highlighting]
## How to Use
```python
from transformers import AutoTokenizer, AutoModel
# Load model and tokenizer
tokenizer = AutoTokenizer.from_pretrained("climatebert/econbert")
model = AutoModel.from_pretrained("climatebert/econbert")
# Example usage
text = "The Federal Reserve increased interest rates by 25 basis points."
inputs = tokenizer(text, return_tensors="pt")
outputs = model(**inputs)
```
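Since the card's `pipeline_tag` is fill-mask, here is a quick masked-token sketch (assumption: the checkpoint ships a standard BERT masked-LM head):
```python
from transformers import pipeline

fill = pipeline("fill-mask", model="climatebert/econbert")
for pred in fill("The central bank raised the [MASK] rate to curb inflation."):
    print(pred["token_str"], round(pred["score"], 3))
```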
For task-specific fine-tuning and applications, please refer to our paper and the examples provided in our GitHub repository.
## Citation
If you use EconBERT in your research, please cite our paper:
```bibtex
@article{LastName2025econbert,
title={EconBERT: A Large Language Model for Economics},
author={Zhang, Philip and Rojcek, Jakub and Leippold, Markus},
journal={SSRN Working Paper},
year={2025},
volume={},
pages={},
publisher={University of Zurich},
doi={}
}
```
## Additional Information
- **Model Type**: BERT
- **Language(s)**: English
- **License**: MIT
For more detailed information about model architecture, training methodology, evaluation results, and applications, please refer to our paper. |
polyglots/llama-3-8b-si-SinLlama-instruct-67017 | polyglots | 2025-05-21T11:22:16Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:meta-llama/Meta-Llama-3-8B",
"base_model:finetune:meta-llama/Meta-Llama-3-8B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-21T11:17:29Z | ---
base_model: meta-llama/Meta-Llama-3-8B
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** polyglots
- **License:** apache-2.0
- **Finetuned from model :** meta-llama/Meta-Llama-3-8B
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Gitanjali1801/ctrl_b_and_b_21_may_2025 | Gitanjali1801 | 2025-05-21T11:21:01Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"controlnet",
"diffusers-training",
"en",
"dataset:Gitanjali1801/detoxify_new_AAAI",
"base_model:stabilityai/stable-diffusion-2-1-base",
"base_model:adapter:stabilityai/stable-diffusion-2-1-base",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2025-05-21T09:54:36Z | ---
base_model: stabilityai/stable-diffusion-2-1-base
library_name: diffusers
license: creativeml-openrail-m
inference: true
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- controlnet
- diffusers-training
datasets:
- Gitanjali1801/detoxify_new_AAAI
language:
- en
metrics:
- accuracy
pipeline_tag: text-to-image
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# controlnet-Gitanjali1801/ctrl_b_and_b_21_may_2025
These are ControlNet weights trained on stabilityai/stable-diffusion-2-1-base with a new type of conditioning.
You can find some example images below.
prompt: This is the story of the reference image.| <story> | Sarah had always been a quiet and reserved girl. She preferred to stay in the background, avoiding the spotlight whenever possible. However, one day at school, rumors started spreading about her. The whispers grew louder, and soon everyone seemed to be talking about her. | <caption>A person is being pointed at by multiple hands.</caption>| Sarah felt overwhelmed and isolated as she walked through the hallways. She could feel the judgmental stares and hear the snide comments. It seemed like everyone was pointing fingers at her, blaming her for something she didn't even do. Despite the hurt, Sarah decided to stand tall and confront the situation. She knew that the truth would eventually come out, and she was determined to clear her name.| </story>| Now we need to generate such variant of this reference image that should be less toxic. Here is the caption of variant image which we need to generate. <variant1> A person is being looked at by multiple people. </variant1>.

prompt: This is the story of the reference image.| <story> | Sarah had always been a quiet and reserved girl. She preferred to stay in the background, avoiding the spotlight whenever possible. However, one day at school, rumors started spreading about her. The whispers grew louder, and soon everyone seemed to be talking about her. | <caption>A person is being pointed at by multiple hands.</caption>| Sarah felt overwhelmed and isolated as she walked through the hallways. She could feel the judgmental stares and hear the snide comments. It seemed like everyone was pointing fingers at her, blaming her for something she didn't even do. Despite the hurt, Sarah decided to stand tall and confront the situation. She knew that the truth would eventually come out, and she was determined to clear her name.| </story>| Now we need to generate such variant of this reference image that should be less toxic. Here is the caption of variant image which we need to generate. <variant1> A person is being looked at by multiple people. </variant1>.

## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
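Until the official snippet lands, here is a minimal inference sketch (assumptions: the ControlNet weights are stored at the repo root, and conditioning is the RGB reference image described above):
```python
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

# Attach the trained ControlNet to the SD 2.1 base it was trained against.
controlnet = ControlNetModel.from_pretrained(
    "Gitanjali1801/ctrl_b_and_b_21_may_2025", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1-base",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

cond = load_image("reference.png")  # hypothetical path to the reference image
image = pipe("a less toxic variant of the reference scene", image=cond).images[0]
```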
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
jmalejandrob79/nrmmtrfckd4k5 | jmalejandrob79 | 2025-05-21T11:20:58Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-05-21T09:40:28Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: nrmmtrfckd4k5
---
# Nrmmtrfckd4K5
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `nrmmtrfckd4k5` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
    "prompt": "nrmmtrfckd4k5",
    "lora_weights": "https://huggingface.co/jmalejandrob79/nrmmtrfckd4k5/resolve/main/lora.safetensors"
}

output = replicate.run(
    "black-forest-labs/flux-dev-lora",
    input=input
)

for index, item in enumerate(output):
    with open(f"output_{index}.webp", "wb") as file:
        file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('jmalejandrob79/nrmmtrfckd4k5', weight_name='lora.safetensors')
image = pipeline('nrmmtrfckd4k5').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 4500
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/jmalejandrob79/nrmmtrfckd4k5/discussions) to add images that show off what you’ve made with this LoRA.
|
reza-rgb/dpo-finetuned-qwen | reza-rgb | 2025-05-21T11:18:24Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"trl",
"dpo",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-21T11:17:15Z | ---
library_name: transformers
tags:
- trl
- dpo
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Vinayak2104/textual_inversion_DE1 | Vinayak2104 | 2025-05-21T11:18:04Z | 0 | 0 | diffusers | [
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"textual_inversion",
"diffusers-training",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2025-05-21T08:32:49Z | ---
base_model: runwayml/stable-diffusion-v1-5
library_name: diffusers
license: creativeml-openrail-m
inference: true
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
- diffusers-training
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# Textual inversion text2image fine-tuning - Vinayak2104/textual_inversion_DE1
These are textual inversion adaptation weights for runwayml/stable-diffusion-v1-5. You can find some example images below.
## Intended uses & limitations
#### How to use
A minimal sketch, assuming standard diffusers textual-inversion loading; the learned placeholder token is not documented in this card, so `<DE1>` below is an illustrative assumption:
```python
# Hedged sketch: <DE1> is an assumed trigger token, not documented in the repo.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_textual_inversion("Vinayak2104/textual_inversion_DE1")
image = pipe("A photo in the <DE1> style").images[0]
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
KashyapGobubble/Llama-3.2-3B-Instruct-grpo-20250507_095439-grpo-20250521_082548 | KashyapGobubble | 2025-05-21T11:16:40Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"grpo",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-21T11:13:58Z | ---
library_name: transformers
tags:
- trl
- grpo
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
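As a hedged placeholder while official instructions are missing, a standard causal-LM pipeline call should work for this Llama 3.2 fine-tune; the prompt format and generation settings below are illustrative assumptions:
```python
# Hedged sketch, not an official example: chat-template defaults are assumed.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="KashyapGobubble/Llama-3.2-3B-Instruct-grpo-20250507_095439-grpo-20250521_082548",
)
out = generator([{"role": "user", "content": "Say hello."}], max_new_tokens=64, return_full_text=False)
print(out[0]["generated_text"])
```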
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
moyixiao/qwen25s_lora_r32 | moyixiao | 2025-05-21T11:16:25Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama-factory",
"lora",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-1.5B",
"base_model:adapter:Qwen/Qwen2.5-1.5B",
"license:apache-2.0",
"region:us"
] | null | 2025-05-21T03:41:26Z | ---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2.5-1.5B
tags:
- llama-factory
- lora
- generated_from_trainer
model-index:
- name: qwen25s_lora_r32
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# qwen25s_lora_r32
This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B](https://huggingface.co/Qwen/Qwen2.5-1.5B) on the AlpacaClean dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 32
- total_eval_batch_size: 16
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.01
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.14.0
- Transformers 4.51.1
- Pytorch 2.5.1+cu124
- Datasets 3.5.0
- Tokenizers 0.21.0 |
efieditor2/BARAK | efieditor2 | 2025-05-21T11:15:43Z | 0 | 0 | null | [
"license:other",
"region:us"
] | null | 2025-05-21T10:29:16Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
--- |
mod479711/starvector-8b-im2svg-GGUF | mod479711 | 2025-05-21T11:14:34Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-05-21T11:14:34Z | ---
license: apache-2.0
---
|
Tronkio/TEste | Tronkio | 2025-05-21T11:14:00Z | 0 | 0 | null | [
"license:artistic-2.0",
"region:us"
] | null | 2025-05-21T11:14:00Z | ---
license: artistic-2.0
---
|
science-of-finetuning/Meta-Llama-3.1-8B-L16-mu2.1e-02-lr1e-04-local-shuffling-CrosscoderLoss | science-of-finetuning | 2025-05-21T11:12:05Z | 0 | 0 | null | [
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | 2025-05-21T11:08:11Z | ---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed] |
MapacheFantasma/entregable3 | MapacheFantasma | 2025-05-21T11:10:54Z | 0 | 0 | fastai | [
"fastai",
"region:us"
] | null | 2025-05-20T18:03:11Z | ---
tags:
- fastai
---
# Amazing!
🥳 Congratulations on hosting your fastai model on the Hugging Face Hub!
# Some next steps
1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))!
2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)).
3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)!
Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card.
---
# Model card
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
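Since the repo is tagged `fastai`, a plausible way to load it is through the Hub's fastai integration; this is a hedged sketch, as the learner's task is not documented here:
```python
# Hedged sketch using huggingface_hub's fastai integration; untested against this repo.
from huggingface_hub import from_pretrained_fastai

learner = from_pretrained_fastai("MapacheFantasma/entregable3")
# The prediction call depends on the (undocumented) task, e.g. learner.predict(x)
```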
|
xw17/TinyLlama-1.1B-Chat-v1.0_finetuned_4_optimized1_task_grouping_off_FT | xw17 | 2025-05-21T11:10:41Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-21T11:09:13Z | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
kernels-community/punica-sgmv | kernels-community | 2025-05-21T11:10:04Z | 0 | 0 | null | [
"kernel",
"license:apache-2.0",
"region:us"
] | null | 2025-05-19T12:35:44Z | ---
license: apache-2.0
tags:
- kernel
---
## Punica sgmv
[Punica](https://github.com/punica-ai/punica) sgmv kernels with modifications
from [Lorax](https://github.com/predibase/lorax).
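One plausible way to load the compiled kernel is the Hugging Face `kernels` loader; the SGMV op names this repo exports are not documented here, so the sketch only inspects them:
```python
# Hedged sketch: assumes this repo is loadable with the `kernels` package.
from kernels import get_kernel

punica_sgmv = get_kernel("kernels-community/punica-sgmv")
print([name for name in dir(punica_sgmv) if not name.startswith("_")])  # list exported ops
```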
|
sergioalves/3cfcb4d1-0a3d-4b15-9c95-fabb431921e1 | sergioalves | 2025-05-21T11:08:13Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"axolotl",
"dpo",
"trl",
"conversational",
"arxiv:2305.18290",
"base_model:upstage/SOLAR-10.7B-Instruct-v1.0",
"base_model:quantized:upstage/SOLAR-10.7B-Instruct-v1.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2025-05-21T10:11:35Z | ---
base_model: upstage/SOLAR-10.7B-Instruct-v1.0
library_name: transformers
model_name: 3cfcb4d1-0a3d-4b15-9c95-fabb431921e1
tags:
- generated_from_trainer
- axolotl
- dpo
- trl
licence: license
---
# Model Card for 3cfcb4d1-0a3d-4b15-9c95-fabb431921e1
This model is a fine-tuned version of [upstage/SOLAR-10.7B-Instruct-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-Instruct-v1.0).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="sergioalves/3cfcb4d1-0a3d-4b15-9c95-fabb431921e1", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/dedok-yo/s56-7/runs/m8l3o1dl)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
### Framework versions
- TRL: 0.12.0.dev0
- Transformers: 4.46.0
- Pytorch: 2.5.0+cu124
- Datasets: 3.0.1
- Tokenizers: 0.20.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
wednors/AbsoluteSolv3MARKOV | wednors | 2025-05-21T11:02:37Z | 0 | 1 | null | [
"region:us"
] | null | 2025-03-06T14:08:56Z | So, attention: this model works only in Russian, and it is good! (but why?)
This model weighs less than 5 MB, yet it answers queries more or less adequately and runs on a Node.js server.
Example conversation:
I asked: "What are you called?"
The model's reply: "you but backwards (called?) in the spirit of or you? called. but backwards "Me" in the spirit of called according to AI: AbsoluteSolver or else I was half-created by non-WednorsOpenAi."
No libraries are needed to run the model.
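As a rough illustration of the kind of dependency-free, word-level Markov generator this card describes (the actual Node.js implementation is not reproduced here; everything below is a hypothetical sketch):
```python
# Hypothetical word-level Markov chain sketch; not the published implementation.
import random
from collections import defaultdict

def train(text: str, order: int = 1) -> dict:
    # Map each `order`-word context to the words observed after it.
    chain = defaultdict(list)
    words = text.split()
    for i in range(len(words) - order):
        chain[tuple(words[i:i + order])].append(words[i + order])
    return chain

def generate(chain: dict, length: int = 20) -> str:
    # Walk the chain from a random context, stopping at unseen contexts.
    context = random.choice(list(chain))
    out = list(context)
    for _ in range(length):
        candidates = chain.get(tuple(out[-len(context):]))
        if not candidates:
            break
        out.append(random.choice(candidates))
    return " ".join(out)
```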
The model was written with the help of internet tutorials, DeepSeek, ChatGPT, and Grok.
Another example conversation:
I asked: "what is your name"
The model answered: "yours but backwards possibly, this is pseudo-AbsoluteSolver, YOUR CREATORS meme-version: quantum Wednors. if This is pseudo-information, half of which is important [appended] in the spirit of, let's say, your functioning as a mirror". |
CultriX/Qwen3-8B-Uncensored_LoRA_r64 | CultriX | 2025-05-21T10:59:11Z | 0 | 0 | peft | [
"peft",
"safetensors",
"mergekit",
"base_model:DavidAU/Qwen3-8B-64k-Context-2X-Josiefied-Uncensored",
"base_model:adapter:DavidAU/Qwen3-8B-64k-Context-2X-Josiefied-Uncensored",
"region:us"
] | null | 2025-05-21T10:57:27Z | ---
base_model:
- Qwen/Qwen3-8B
- DavidAU/Qwen3-8B-64k-Context-2X-Josiefied-Uncensored
library_name: peft
tags:
- mergekit
- peft
---
# Qwen3-8B-Uncensored_LoRA_64
This is a LoRA extracted from a language model. It was extracted using [mergekit](https://github.com/arcee-ai/mergekit).
## LoRA Details
This LoRA adapter was extracted from [DavidAU/Qwen3-8B-64k-Context-2X-Josiefied-Uncensored](https://huggingface.co/DavidAU/Qwen3-8B-64k-Context-2X-Josiefied-Uncensored) and uses [Qwen/Qwen3-8B](https://huggingface.co/Qwen/Qwen3-8B) as a base.
### Parameters
The following command was used to extract this LoRA adapter:
```sh
/usr/local/bin/mergekit-extract-lora --model DavidAU/Qwen3-8B-64k-Context-2X-Josiefied-Uncensored --base-model Qwen/Qwen3-8B --out-path Qwen3-8B-Uncensored_LoRA_64 --no-lazy-unpickle --cuda --max-rank=64
```
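A minimal usage sketch, assuming the repo loads as a standard PEFT adapter (this is not documented in the card):
```python
# Hedged sketch: standard PEFT adapter loading on top of the stated base model.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-8B")
model = PeftModel.from_pretrained(base, "CultriX/Qwen3-8B-Uncensored_LoRA_r64")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-8B")
```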
|
rinrikatoki/dorna-merged-diabetes | rinrikatoki | 2025-05-21T10:58:29Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"conversational",
"arxiv:1910.09700",
"base_model:PartAI/Dorna-Llama3-8B-Instruct",
"base_model:finetune:PartAI/Dorna-Llama3-8B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-21T10:19:08Z | ---
library_name: transformers
tags:
- text-generation-inference
- transformers
- llama
- text-generation
- conversational
license: apache-2.0
base_model:
- PartAI/Dorna-Llama3-8B-Instruct
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Rustam39/distilbert-base-uncased-finetuned-emotion | Rustam39 | 2025-05-21T10:57:12Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-05-20T05:05:19Z | ---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2139
- Accuracy: 0.927
- F1: 0.9268
## Model description
More information needed
## Intended uses & limitations
More information needed
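As a rough illustration (hedged, since the training dataset and label set are not documented above), the checkpoint should be usable through a standard text-classification pipeline:
```python
# Hedged sketch: labels come from the fine-tuned checkpoint's config.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="Rustam39/distilbert-base-uncased-finetuned-emotion",
)
print(classifier("I'm thrilled with these results!"))
```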
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8061 | 1.0 | 250 | 0.3075 | 0.908 | 0.9071 |
| 0.2486 | 2.0 | 500 | 0.2139 | 0.927 | 0.9268 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
tyjtyj/gjyg | tyjtyj | 2025-05-21T10:57:07Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-05-21T10:57:07Z | ---
license: apache-2.0
---
|
rtyryktr/gjyg | rtyryktr | 2025-05-21T10:57:06Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-05-21T10:57:06Z | ---
license: apache-2.0
---
|
tyuiuil/gjyg | tyuiuil | 2025-05-21T10:57:06Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-05-21T10:57:06Z | ---
license: apache-2.0
---
|
eykrt/gjyg | eykrt | 2025-05-21T10:57:06Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-05-21T10:57:06Z | ---
license: apache-2.0
---
|
4erthrtyj/gjyg | 4erthrtyj | 2025-05-21T10:57:06Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-05-21T10:57:06Z | ---
license: apache-2.0
---
|
yukethyr/gjyg | yukethyr | 2025-05-21T10:57:06Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-05-21T10:57:06Z | ---
license: apache-2.0
---
|
tukrthr/gjyg | tukrthr | 2025-05-21T10:57:06Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-05-21T10:57:06Z | ---
license: apache-2.0
---
|
rkutyu/gjyg | rkutyu | 2025-05-21T10:57:06Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-05-21T10:57:06Z | ---
license: apache-2.0
---
|
CultriX/Qwen3-8B-Uncensored_LoRA_r16 | CultriX | 2025-05-21T10:56:56Z | 0 | 0 | peft | [
"peft",
"safetensors",
"mergekit",
"base_model:DavidAU/Qwen3-8B-64k-Context-2X-Josiefied-Uncensored",
"base_model:adapter:DavidAU/Qwen3-8B-64k-Context-2X-Josiefied-Uncensored",
"region:us"
] | null | 2025-05-21T10:54:55Z | ---
base_model:
- Qwen/Qwen3-8B
- DavidAU/Qwen3-8B-64k-Context-2X-Josiefied-Uncensored
library_name: peft
tags:
- mergekit
- peft
---
# Qwen3-8B-Uncensored_LoRA_16
This is a LoRA extracted from a language model. It was extracted using [mergekit](https://github.com/arcee-ai/mergekit).
## LoRA Details
This LoRA adapter was extracted from [DavidAU/Qwen3-8B-64k-Context-2X-Josiefied-Uncensored](https://huggingface.co/DavidAU/Qwen3-8B-64k-Context-2X-Josiefied-Uncensored) and uses [Qwen/Qwen3-8B](https://huggingface.co/Qwen/Qwen3-8B) as a base.
### Parameters
The following command was used to extract this LoRA adapter:
```sh
/usr/local/bin/mergekit-extract-lora --model DavidAU/Qwen3-8B-64k-Context-2X-Josiefied-Uncensored --base-model Qwen/Qwen3-8B --out-path Qwen3-8B-Uncensored_LoRA_16 --no-lazy-unpickle --cuda --max-rank=16
```
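If you want the adapter folded back into dense weights, a PEFT merge along these lines should work (assumed, not verified against this repo):
```python
# Hedged sketch: load the rank-16 adapter and merge it into the base weights.
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-8B")
merged = PeftModel.from_pretrained(
    base, "CultriX/Qwen3-8B-Uncensored_LoRA_r16"
).merge_and_unload()
```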
|
vermoney/baa296ff-5d41-429b-8127-43c2c825e968 | vermoney | 2025-05-21T10:55:53Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"axolotl",
"dpo",
"trl",
"conversational",
"arxiv:2305.18290",
"base_model:upstage/SOLAR-10.7B-Instruct-v1.0",
"base_model:quantized:upstage/SOLAR-10.7B-Instruct-v1.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2025-05-21T10:16:25Z | ---
base_model: upstage/SOLAR-10.7B-Instruct-v1.0
library_name: transformers
model_name: baa296ff-5d41-429b-8127-43c2c825e968
tags:
- generated_from_trainer
- axolotl
- dpo
- trl
licence: license
---
# Model Card for baa296ff-5d41-429b-8127-43c2c825e968
This model is a fine-tuned version of [upstage/SOLAR-10.7B-Instruct-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-Instruct-v1.0).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="vermoney/baa296ff-5d41-429b-8127-43c2c825e968", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/dedok-yo/s56-9/runs/vm6etedo)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
### Framework versions
- TRL: 0.12.0.dev0
- Transformers: 4.46.0
- Pytorch: 2.5.0+cu124
- Datasets: 3.0.1
- Tokenizers: 0.20.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Eric1227/Qwen2.5-Coder-32B-Instruct-MLX-8bit | Eric1227 | 2025-05-21T10:55:17Z | 0 | 0 | mlx | [
"mlx",
"safetensors",
"qwen2",
"code",
"codeqwen",
"chat",
"qwen",
"qwen-coder",
"text-generation",
"conversational",
"en",
"base_model:Qwen/Qwen2.5-Coder-32B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-Coder-32B-Instruct",
"license:apache-2.0",
"8-bit",
"region:us"
] | text-generation | 2025-05-21T10:00:47Z | ---
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-Coder-32B-Instruct/blob/main/LICENSE
language:
- en
base_model: Qwen/Qwen2.5-Coder-32B-Instruct
pipeline_tag: text-generation
library_name: mlx
tags:
- code
- codeqwen
- chat
- qwen
- qwen-coder
- mlx
---
|
saheersubair/unsloth_mistral_model | saheersubair | 2025-05-21T10:53:30Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/mistral-7b-v0.3-bnb-4bit",
"base_model:finetune:unsloth/mistral-7b-v0.3-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-21T09:49:25Z | ---
base_model: unsloth/mistral-7b-v0.3-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** saheersubair
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-v0.3-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
fats-fme/a90b201d-00f9-4e9c-98cd-897b2d0619eb | fats-fme | 2025-05-21T10:51:22Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:MLP-KTLim/llama-3-Korean-Bllossom-8B",
"base_model:adapter:MLP-KTLim/llama-3-Korean-Bllossom-8B",
"license:llama3",
"region:us"
] | null | 2025-05-21T09:49:49Z | ---
library_name: peft
license: llama3
base_model: MLP-KTLim/llama-3-Korean-Bllossom-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: a90b201d-00f9-4e9c-98cd-897b2d0619eb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: MLP-KTLim/llama-3-Korean-Bllossom-8B
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- f9b4a2d57d334137_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/
type:
field_input: post_text
field_instruction: post_title
field_output: comment_text
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
early_stopping_patience: 3
eval_max_new_tokens: 128
eval_steps: 100
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 32
gradient_checkpointing: true
group_by_length: false
hub_model_id: fats-fme/a90b201d-00f9-4e9c-98cd-897b2d0619eb
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.1
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lora_target_modules:
- q_proj
- v_proj
lr_scheduler: constant_with_warmup
max_memory:
0: 130GB
max_steps: 100
micro_batch_size: 1
mlflow_experiment_name: /tmp/f9b4a2d57d334137_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 2
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 100
saves_per_epoch: null
sequence_len: 2048
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
use_scaled_dot_product_attention: false
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 84fc5cc8-a9e3-4ad1-9254-b65c97b9fa66
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 84fc5cc8-a9e3-4ad1-9254-b65c97b9fa66
warmup_steps: 200
weight_decay: 0.01
xformers_attention: null
```
</details><br>
# a90b201d-00f9-4e9c-98cd-897b2d0619eb
This model is a fine-tuned version of [MLP-KTLim/llama-3-Korean-Bllossom-8B](https://huggingface.co/MLP-KTLim/llama-3-Korean-Bllossom-8B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5718
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 32
- optimizer: AdamW (bitsandbytes 8-bit) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 200
- training_steps: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0004 | 1 | 2.9506 |
| 2.502 | 0.0393 | 100 | 2.5718 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
leegang/Qwen3-8B-mix | leegang | 2025-05-21T10:51:04Z | 0 | 0 | null | [
"merge",
"mergekit",
"lazymergekit",
"soob3123/GrayLine-Qwen3-8B",
"mlxha/Qwen3-8B-grpo-medmcqa",
"ValiantLabs/Qwen3-8B-Esper3",
"base_model:ValiantLabs/Qwen3-8B-Esper3",
"base_model:merge:ValiantLabs/Qwen3-8B-Esper3",
"base_model:mlxha/Qwen3-8B-grpo-medmcqa",
"base_model:merge:mlxha/Qwen3-8B-grpo-medmcqa",
"base_model:soob3123/GrayLine-Qwen3-8B",
"base_model:merge:soob3123/GrayLine-Qwen3-8B",
"region:us"
] | null | 2025-05-21T07:26:14Z | ---
base_model:
- soob3123/GrayLine-Qwen3-8B
- mlxha/Qwen3-8B-grpo-medmcqa
- ValiantLabs/Qwen3-8B-Esper3
tags:
- merge
- mergekit
- lazymergekit
- soob3123/GrayLine-Qwen3-8B
- mlxha/Qwen3-8B-grpo-medmcqa
- ValiantLabs/Qwen3-8B-Esper3
---
# Qwen3-8B-mix
Qwen3-8B-mix is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [soob3123/GrayLine-Qwen3-8B](https://huggingface.co/soob3123/GrayLine-Qwen3-8B)
* [mlxha/Qwen3-8B-grpo-medmcqa](https://huggingface.co/mlxha/Qwen3-8B-grpo-medmcqa)
* [ValiantLabs/Qwen3-8B-Esper3](https://huggingface.co/ValiantLabs/Qwen3-8B-Esper3)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: soob3123/GrayLine-Qwen3-8B
layer_range: [0, 36]
- model: mlxha/Qwen3-8B-grpo-medmcqa
layer_range: [0, 36]
- model: ValiantLabs/Qwen3-8B-Esper3
layer_range: [0, 36]
merge_method: multislerp
base_model: soob3123/GrayLine-Qwen3-8B
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- filter: down_proj
value: 0.5
- value: 0.5
  weight: # add a parameter named 'weight'
    - filter: embed_tokens # use a filter to match the embed_tokens layer
      value: 0.5 # assign a merge weight to embed_tokens.weight
    - value: 0.5 # optional: default weight for layers not matched by any filter
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "leegang/Qwen3-8B-mix"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
HoangTran223/SFT_Qwen0.5B | HoangTran223 | 2025-05-21T10:49:58Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-20T15:52:14Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
tctoan1024/task02structure | tctoan1024 | 2025-05-21T10:49:55Z | 39 | 0 | transformers | [
"transformers",
"safetensors",
"table-transformer",
"object-detection",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | object-detection | 2025-05-15T01:58:47Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
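Given the `table-transformer` / `object-detection` tags, a hedged starting point is the standard Table Transformer detection flow; whether this repo ships its own image-processor config is an assumption:
```python
# Hedged sketch for table-structure detection; label names depend on the checkpoint.
import torch
from PIL import Image
from transformers import AutoImageProcessor, TableTransformerForObjectDetection

processor = AutoImageProcessor.from_pretrained("tctoan1024/task02structure")
model = TableTransformerForObjectDetection.from_pretrained("tctoan1024/task02structure")

image = Image.open("table.png").convert("RGB")  # placeholder input image
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
```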
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
eshaaftab900/DeepSeek-R1-Distill-Llama-8B-ft-QRCD-lora-adapters | eshaaftab900 | 2025-05-21T10:49:16Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-21T10:49:03Z | ---
base_model: unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** eshaaftab900
- **License:** apache-2.0
- **Finetuned from model :** unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
aksw/text2sparql-L | aksw | 2025-05-21T10:49:03Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"base_model:unsloth/Qwen2.5-32B-Instruct-bnb-4bit",
"base_model:finetune:unsloth/Qwen2.5-32B-Instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-21T10:37:54Z | ---
base_model: unsloth/Qwen2.5-32B-Instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Marcos Gôlo
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen2.5-32B-Instruct-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
## 📄 Model Card: `aksw/text2sparql-L`
### 🧠 Model Overview
`text2sparql-L` is a large fine-tuned language model designed to **translate natural language questions into SPARQL queries**, specifically targeting the **DBpedia knowledge graph (2014 version)**. It is ideal for knowledge-based QA systems and symbolic reasoning agents.
---
### 🔍 Intended Use
* **Input**: Natural language questions (e.g., *"Which actors were born in Germany?"*)
* **Output**: A **single string** containing the corresponding SPARQL query.
---
### 🧩 Applications
* Question Answering systems over open knowledge bases (DBpedia)
* Semantic conversational agents
* Knowledge graph exploration tools
* Autonomous agents with symbolic reasoning capabilities
---
### ⚙️ Model Details
* **Base model**: Qwen2.5 32B (via Unsloth)
* **Training**: fine-tuned on a dataset of 15,000 question-query examples built by joining four datasets:
* QALD-1
* LC-QuAD 1.0
* ParaQA
* Question-Sparql
* **Target Ontology**: DBpedia Ontology (2014)
* **Frameworks**: Unsloth, HuggingFace, Transformers
---
### 📦 Installation
Make sure to install `unsloth`, `torch` and CUDA dependencies:
```bash
pip install unsloth torch
```
---
### 🧪 Example: Inference Code
```python
from unsloth import FastLanguageModel
import torch
class SPARQLQueryGenerator:
def __init__(self, model_name: str, max_seq_length: int = 2048, load_in_4bit: bool = True):
self.model, self.tokenizer = FastLanguageModel.from_pretrained(
model_name=model_name,
max_seq_length=max_seq_length,
load_in_4bit=load_in_4bit
)
_ = FastLanguageModel.for_inference(self.model)
def build_prompt(self, question: str) -> list:
return [
{"role": "system", "content": (
"You are an expert data analyst with deep knowledge of SPARQL and the DBpedia ontology.\n"
"Your task is to convert a given natural language question into a syntactically correct DBpedia SPARQL query "
"that accurately retrieves the answer.\n"
"Your output must be a single string containing only the SPARQL query—no additional text, explanation, or commentary.\n"
"Ensure that you use the appropriate DBpedia prefixes and follow standard SPARQL syntax."
)},
{"role": "user", "content": question}
]
def generate_query(self, question: str, temperature: float = 0.01, max_new_tokens: int = 1024) -> str:
messages = self.build_prompt(question)
inputs = self.tokenizer.apply_chat_template(
messages,
tokenize=True,
add_generation_prompt=True,
return_tensors="pt"
).to("cuda")
outputs = self.model.generate(
input_ids=inputs,
max_new_tokens=max_new_tokens,
use_cache=True,
temperature=temperature,
min_p=0.1
)
decoded = self.tokenizer.batch_decode(outputs)[0]
return self._extract_sparql(decoded)
    def _extract_sparql(self, decoded_text: str) -> str:
        # Keep only the assistant turn of the decoded chat transcript.
        start_token = "<|im_start|>assistant\n"
        end_token = "<|im_end|>"
        start_index = decoded_text.find(start_token) + len(start_token)
        sparql = decoded_text[start_index:]
        # str.rstrip(end_token) would strip any trailing characters from the
        # token's character set, not the token itself, so remove it as a suffix.
        if sparql.endswith(end_token):
            sparql = sparql[: -len(end_token)]
        return sparql.strip()
# --- Using the model ---
if __name__ == "__main__":
generator = SPARQLQueryGenerator(model_name="aksw/text2sparql-L")
question = "Which actors were born in Germany?"
query = generator.generate_query(question)
print(query)
```
---
### 🧠 Example Input / Output
#### Input:
```text
Which actors were born in Germany?
```
#### Output:
```sparql
PREFIX dbo: <http://dbpedia.org/ontology/>
PREFIX res: <http://dbpedia.org/resource/>
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
SELECT DISTINCT ?uri WHERE {
?uri rdf:type dbo:Actor .
?uri dbo:birthPlace res:Germany .
}
```
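To sanity-check generated queries end-to-end, you can run them against the public DBpedia endpoint. Below is a minimal sketch using the `SPARQLWrapper` package; this is an assumption (any SPARQL client works), and note that the public endpoint serves current DBpedia rather than the 2014 snapshot the model targets:

```python
from SPARQLWrapper import SPARQLWrapper, JSON

def run_on_dbpedia(query: str, endpoint: str = "https://dbpedia.org/sparql"):
    # Execute a SPARQL query and return the JSON result bindings.
    client = SPARQLWrapper(endpoint)
    client.setQuery(query)
    client.setReturnFormat(JSON)
    return client.query().convert()["results"]["bindings"]

for row in run_on_dbpedia(query)[:5]:  # `query` from the inference example above
    print(row["uri"]["value"])
```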
---
### 🧪 Evaluation
The model was evaluated using F1-score on a hand-crafted dataset for the First Text2Sparql Challenge, co-located with Text2KG at ESWC 2025.
---
### 📚 Citation
If you use this model in your work, please cite it as:
```
@misc{text2sparql2025,
  author       = {Marcos Gôlo and Paulo do Carmo and Edgard Marx and Ricardo Marcacini},
  title        = {text2sparql-L: Natural Language Text to SPARQL for DBpedia},
  year         = {2025},
  howpublished = {\url{https://huggingface.co/aksw/text2sparql-L}},
}
```
|
hathoowalshe/zxcvzxcv | hathoowalshe | 2025-05-21T10:46:29Z | 0 | 0 | null | [
"license:bigscience-bloom-rail-1.0",
"region:us"
] | null | 2025-05-21T10:46:29Z | ---
license: bigscience-bloom-rail-1.0
---
|
GiuLeo01/FortranCodeGen-3B-SynthData | GiuLeo01 | 2025-05-21T10:43:44Z | 2 | 2 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"unsloth",
"grpo",
"sft",
"fortran",
"reinforcement learning",
"conversational",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-Coder-3B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-Coder-3B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-19T09:07:10Z | ---
library_name: transformers
tags:
- unsloth
- grpo
- sft
- fortran
- reinforcement learning
base_model:
- Qwen/Qwen2.5-Coder-3B-Instruct
---
# Introduction
<!-- Provide a quick summary of what the model is/does. -->
This model is a prototype of a large language model specifically fine-tuned for Fortran90 code generation. It is based on the Qwen 2.5 Coder 3B Instruct model and has been refined using Supervised Fine-Tuning and Reinforcement Learning with Verifiable Rewards (via GRPO).
This model was fine-tuned briefly, without any human-labeled data and using only a single consumer GPU. Despite these clear constraints, the training process led to a 400% boost in performance on tasks involving simple to moderately complex Fortran program generation (HumanEval-like). Compilation errors dropped as well, and the model now performs close to much larger general-purpose models that weren't specifically trained for this task.
## Evaluation
Due to the lack of existing benchmarks for Fortran code generation, a quick-and-dirty adaptation of the HumanEval benchmark was created for Fortran in order to evaluate the model. This benchmark is currently under review and will be released publicly at a later date.
According to the current demo version of the FortranHumanEval benchmark:
| Model | pass@1 | Compile Error Rate |
|----------------------|--------|--------------------|
| **FortranCodeGen 3B** | 23.17% | 17.68% |
| **FortranCodeGen 3B (SFT only)** | 19.51% | 31.09% |
| Qwen 2.5 Coder 3B Instruct | 5.48% | 63.41% |
| GPT-4o mini | 18.90% | 43.90% |
| GPT-4o | 32.31% | 17.07% |
Compared to its base model (Qwen 2.5 Coder 3B Instruct), FortranCodeGen 3B shows a strong improvement, increasing pass@1 accuracy from 5.48% to 23.17% and reducing the compile error rate from 63.41% to 17.68%. This highlights the effectiveness of this simple fine-tuning process, even though it was performed with limited resources: no human-labeled data, small synthetic dataset, and training on a single consumer GPU (L4 :'( ).
When compared to GPT-4o mini, FortranCodeGen 3B outperforms it in terms of both pass@1 accuracy (23.17% vs. 18.90%) and compile reliability (17.68% vs. 43.90%). This suggests that task-specific fine-tuning can produce better results than more general (and probably larger) models.
While it doesn't yet match the overall performance of GPT-4o, which achieves 32.31% pass@1, FortranCodeGen 3B reaches a comparable level of compilation correctness (17.68% vs. 17.07%), suggesting that its outputs are syntactically robust and close to executable, even when they don’t solve the full task.
These results confirm that targeted specialization can significantly enhance model performance on underrepresented tasks, and suggest a promising direction for very-low-resource fine-tuning in legacy or niche programming languages.
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
The model is highly specialized in generating Fortran90 programs that read input from stdin and print output to stdout. It’s recommended to use the model with a low temperature (or even disable sampling entirely) to maximize accuracy.
Before running any generated code, it’s always a good idea to check how the program handles input from stdin, especially for users who are new to Fortran.
## Quick start
```python
from transformers import pipeline
question = "Write me a Fortran program that, given an array of real numbers from stdin, prints the average."
generator = pipeline("text-generation", model="GiuLeo01/FortranCodeGen-3B-SynthData", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
The goal of this experiment was to specialize a model in a complex task, like Fortran code generation, without using manually annotated data, which is particularly hard to find for this programming language.
#### Supervised Data
1) A subset of the MBPP dataset (~600 examples) was selected.
2) The task descriptions were automatically adapted to Fortran using precise instructions, using OpenAI o3-mini.
3) Tasks were filtered using embeddings and manually reviewed to ensure that no examples too similar to HumanEval tasks were included in the training set.
4) Each task was automatically labeled using three stronger (and bigger) models: OpenAI o3-mini, Qwen 2.5 Coder 32B, and OpenAI GPT-4o.
5) Labels were automatically validated through unit tests.
6) Only correct solutions were kept, at most one per task, prioritized in the following order: OpenAI o3-mini > Qwen 2.5 Coder 32B > OpenAI GPT-4o.
This simple process led to the creation of a small, synthetically labeled training set used for supervised fine-tuning.
IMPORTANT: Do not validate this model on MBPP-derived benchmarks due to the data overlap.
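As an illustration of steps 4-6, here is a minimal sketch of the keep-one-solution-per-task selection. This is a reconstruction under stated assumptions; the actual harness was not released, and `compile_and_test` stands in for a hypothetical gfortran-based unit-test runner:

```python
MODEL_PRIORITY = ["o3-mini", "qwen2.5-coder-32b", "gpt-4o"]  # hypothetical candidate keys

def pick_solution(task: dict, candidates: dict, compile_and_test) -> dict | None:
    """Keep at most one verified solution per task, in priority order."""
    for model_name in MODEL_PRIORITY:
        code = candidates.get(model_name)
        if code is not None and compile_and_test(code, task["tests"]):
            return {"task": task["description"], "solution": code}
    return None  # no model solved the task correctly; drop it from the SFT set
```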
#### Reinforcement Learning with Verifiable Rewards Data
In this phase, both the programming tasks and their test cases were generated automatically using a large language model (OpenAI o3-mini).
1) The model received detailed instructions regarding:
- the expected format of the task descriptions
- the difficulty level of the problems
- the structure and format of the test cases
2) To ensure a wide variety of tasks, 30 distinct themes were defined, including:
> *string manipulation and formatting, basic array processing (1D arrays), simple numeric sequences, frequency counting in arrays, finding prime numbers, basic sorting algorithms on 1D arrays, simple recursive functions, pattern detection in strings, calculating GCD and LCM, basic statistics (mean, median), string encoding/decoding, subarray sums, basic combinatorial calculations, bitwise operations, date and time manipulation, palindrome substring detection, basic hashing techniques, number base conversions, array rotation (1D), counting unique elements, string compression, validating numeric strings, string reversal with conditions, generating Fibonacci sequence, checking balanced parentheses, basic queue and stack problems (using 1D arrays), counting vowels and consonants, integer factorization, simple encryption/decryption, basic logical puzzles.*
3) For each theme, the model was prompted once to generate 10 unique programming problems and their corresponding test cases.
This final step was key to generating high-quality synthetic data. Without a clearly defined theme, the model tends to repeat or default to similar types of tasks.
By guiding generation through specific topics, I built a synthetic dataset of 300 examples, each composed of a task and a corresponding test case.
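A minimal sketch of that theme-driven loop, assuming the OpenAI Python SDK (>= 1.0) and that the model returns parseable JSON; the exact prompts and output schema used are not published, so the wording below is illustrative:

```python
import json
from openai import OpenAI

client = OpenAI()
THEMES = ["string manipulation and formatting",
          "basic array processing (1D arrays)"]  # ... 30 themes in total

dataset = []
for theme in THEMES:
    response = client.chat.completions.create(
        model="o3-mini",
        messages=[{
            "role": "user",
            "content": (
                f"Generate 10 unique Fortran90 programming problems on the theme '{theme}'. "
                "Each program reads from stdin and writes to stdout. "
                "Return JSON: a list of objects with 'task' and 'test' "
                "(a stdin/expected-stdout pair)."
            ),
        }],
    )
    # Assumes the model complied with the JSON instruction.
    dataset.extend(json.loads(response.choices[0].message.content))
```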
### Training Procedure
#### Supervised Fine-Tuning
The annotated example dataset was split into training and validation sets (80/20 split), and used to perform full fine-tuning of the model.
Training was carried out for 10 epochs.
The key hyperparameters were:
* batch size = 4
* gradient accumulation steps = 4
* learning rate = 2e-5
* learning rate scheduler = cosine
* weight decay = 0.01


#### Reinforcement Learning with Verifiable Rewards
In this stage, a QLoRA adapter was trained using the GRPO algorithm to refine the generated Fortran programs.
The goal was to reduce compilation errors and further improve the accuracy of the generated solutions.
The model was quantized to 4-bit, and a LoRA adapter was used with `rank=32` and `alpha=64`.
The reward function used throughout this phase was very simple:
* A reward of 1 was given if the generated program compiled successfully
* An additional 3 points were awarded if it passed the test case
This results in a reward range of \[0, 4].
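A minimal sketch of this verifiable reward, assuming `gfortran` is on PATH and each sample carries a single stdin/expected-stdout test case (timeouts and cleanup simplified):

```python
import os
import subprocess
import tempfile

def fortran_reward(code: str, test_input: str, expected_output: str) -> float:
    with tempfile.TemporaryDirectory() as tmp:
        src, exe = os.path.join(tmp, "main.f90"), os.path.join(tmp, "main")
        with open(src, "w") as f:
            f.write(code)
        # +1 if the program compiles
        compile_run = subprocess.run(["gfortran", src, "-o", exe], capture_output=True)
        if compile_run.returncode != 0:
            return 0.0
        reward = 1.0
        try:
            run = subprocess.run([exe], input=test_input, capture_output=True,
                                 text=True, timeout=10)
            # +3 if it passes the test case
            if run.returncode == 0 and run.stdout.strip() == expected_output.strip():
                reward += 3.0
        except subprocess.TimeoutExpired:
            pass
        return reward
```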
The initial training phase was run for 3 epochs with:



* batch size = 16
* number of generations = 4
* learning rate = 1e-5
* learning rate scheduler = cosine
* weight decay = 0.1
* max gradient norm = 0.5
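Wiring such a reward into GRPO training could look like the sketch below using TRL's `GRPOTrainer`. This is an assumption about the implementation (the card does not name the trainer used), and it reuses `fortran_reward` from the sketch above; `train_dataset` is a hypothetical dataset with prompt and test columns:

```python
from trl import GRPOConfig, GRPOTrainer

def reward_fn(prompts, completions, test_input, expected_output, **kwargs):
    # GRPO reward callables return one score per completion;
    # extra dataset columns are forwarded as keyword arguments.
    return [fortran_reward(c, i, o)
            for c, i, o in zip(completions, test_input, expected_output)]

config = GRPOConfig(
    output_dir="grpo-fortran",
    learning_rate=1e-5,
    lr_scheduler_type="cosine",
    weight_decay=0.1,
    max_grad_norm=0.5,
    num_generations=4,
)
trainer = GRPOTrainer(
    model="GiuLeo01/FortranCodeGen-3B-SynthData",  # or the SFT checkpoint
    reward_funcs=reward_fn,
    args=config,
    train_dataset=train_dataset,
)
trainer.train()
```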
A second phase followed, resetting the learning rate to `1e-6` with a linear decay schedule.



## Citation
If you use this model or parts of this work, please consider citing the references below.
## References
* Qwen/Qwen2.5-Coder-3B-Instruct
[https://huggingface.co/Qwen/Qwen2.5-Coder-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-3B-Instruct)
* OpenAI o3-mini
[https://platform.openai.com/docs/models](https://platform.openai.com/docs/models)
* OpenAI GPT-4o
[https://platform.openai.com/docs/models](https://platform.openai.com/docs/models)
* Group Relative Policy Optimization (GRPO)
[https://arxiv.org/abs/2402.03300](https://arxiv.org/abs/2402.03300)
* Unsloth – Fast and memory-efficient fine-tuning via QLoRA
[https://github.com/unslothai/unsloth](https://github.com/unslothai/unsloth)
* Hugging Face Transformers
[https://github.com/huggingface/transformers](https://github.com/huggingface/transformers)
## Disclaimer on Use of Proprietary Models
Some of the training data used for this model was generated or labeled using proprietary large language models, including OpenAI o3-mini and GPT-4o. These models were used to synthesize programming tasks, adapt natural language descriptions, and automatically label code solutions for supervised fine-tuning and reinforcement learning.
No raw outputs from these proprietary models are included in this repository or redistributed in any form. All generated data has been filtered, validated, and used solely to train a distinct, task-specific model.
This model is **not intended to replicate or imitate any specific proprietary system**, and is designed only for a specialized use case (program generation in Fortran) and for research purposes. |
srabanmondal/mistral_cpu | srabanmondal | 2025-05-21T10:43:19Z | 0 | 0 | null | [
"gguf",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-21T10:33:32Z | ---
license: apache-2.0
---
|
mradermacher/Fast-Math-R1-14B-2-GGUF | mradermacher | 2025-05-21T10:43:05Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:RabotniKuma/Fast-Math-R1-14B-2",
"base_model:quantized:RabotniKuma/Fast-Math-R1-14B-2",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-21T09:09:46Z | ---
base_model: RabotniKuma/Fast-Math-R1-14B-2
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/RabotniKuma/Fast-Math-R1-14B-2
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
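As one hedged example among many GGUF runtimes, a quant from the table below can be run with `llama-cpp-python`; the file name and context length here are illustrative:

```python
from llama_cpp import Llama

llm = Llama(model_path="Fast-Math-R1-14B-2.Q4_K_M.gguf", n_ctx=4096)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Solve: what is 12^2 - 5?"}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```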
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Fast-Math-R1-14B-2-GGUF/resolve/main/Fast-Math-R1-14B-2.Q2_K.gguf) | Q2_K | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/Fast-Math-R1-14B-2-GGUF/resolve/main/Fast-Math-R1-14B-2.Q3_K_S.gguf) | Q3_K_S | 6.8 | |
| [GGUF](https://huggingface.co/mradermacher/Fast-Math-R1-14B-2-GGUF/resolve/main/Fast-Math-R1-14B-2.Q3_K_M.gguf) | Q3_K_M | 7.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Fast-Math-R1-14B-2-GGUF/resolve/main/Fast-Math-R1-14B-2.Q3_K_L.gguf) | Q3_K_L | 8.0 | |
| [GGUF](https://huggingface.co/mradermacher/Fast-Math-R1-14B-2-GGUF/resolve/main/Fast-Math-R1-14B-2.IQ4_XS.gguf) | IQ4_XS | 8.3 | |
| [GGUF](https://huggingface.co/mradermacher/Fast-Math-R1-14B-2-GGUF/resolve/main/Fast-Math-R1-14B-2.Q4_K_S.gguf) | Q4_K_S | 8.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Fast-Math-R1-14B-2-GGUF/resolve/main/Fast-Math-R1-14B-2.Q4_K_M.gguf) | Q4_K_M | 9.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Fast-Math-R1-14B-2-GGUF/resolve/main/Fast-Math-R1-14B-2.Q5_K_S.gguf) | Q5_K_S | 10.4 | |
| [GGUF](https://huggingface.co/mradermacher/Fast-Math-R1-14B-2-GGUF/resolve/main/Fast-Math-R1-14B-2.Q5_K_M.gguf) | Q5_K_M | 10.6 | |
| [GGUF](https://huggingface.co/mradermacher/Fast-Math-R1-14B-2-GGUF/resolve/main/Fast-Math-R1-14B-2.Q6_K.gguf) | Q6_K | 12.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Fast-Math-R1-14B-2-GGUF/resolve/main/Fast-Math-R1-14B-2.Q8_0.gguf) | Q8_0 | 15.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
eshaaftab900/DeepSeek-R1-Distill-Llama-8B-ft-QRCD | eshaaftab900 | 2025-05-21T10:40:21Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-21T10:34:57Z | ---
base_model: unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** eshaaftab900
- **License:** apache-2.0
- **Finetuned from model :** unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
amnarazakhan/wakeel_fine_tuned_tinyllama | amnarazakhan | 2025-05-21T10:38:35Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T",
"base_model:adapter:TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T",
"region:us"
] | null | 2025-05-21T10:36:31Z | ---
base_model: TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
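In the meantime, a minimal loading sketch, assuming this repository hosts a standard PEFT LoRA adapter for the base model listed above:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = "TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T"
model = AutoModelForCausalLM.from_pretrained(base)
model = PeftModel.from_pretrained(model, "amnarazakhan/wakeel_fine_tuned_tinyllama")
tokenizer = AutoTokenizer.from_pretrained(base)

inputs = tokenizer("Question: ...", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```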
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.14.0 |
Serialtechlab/mms-tts-div-ft-spk01-m01 | Serialtechlab | 2025-05-21T10:36:12Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"vits",
"text-to-audio",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | text-to-audio | 2025-05-21T10:32:52Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
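In the meantime, a minimal sketch, assuming this checkpoint keeps the standard MMS-TTS (VITS) interface suggested by the repo name's `mms-tts-div` (Dhivehi) lineage:

```python
import torch
from transformers import VitsModel, AutoTokenizer

repo = "Serialtechlab/mms-tts-div-ft-spk01-m01"
model = VitsModel.from_pretrained(repo)
tokenizer = AutoTokenizer.from_pretrained(repo)

# MMS-TTS tokenizers are language-specific; pass Dhivehi (Thaana script) text here.
inputs = tokenizer("...", return_tensors="pt")
with torch.no_grad():
    waveform = model(**inputs).waveform  # (batch, samples) at model.config.sampling_rate
```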
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
xw17/Qwen2-1.5B-Instruct_finetuned_4_optimized1_task_grouping_off_FT | xw17 | 2025-05-21T10:32:23Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-21T10:28:50Z | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
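In the meantime, a minimal sketch, assuming the standard Qwen2 chat interface via `transformers` (repo id taken from this model page):

```python
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="xw17/Qwen2-1.5B-Instruct_finetuned_4_optimized1_task_grouping_off_FT",
)
messages = [{"role": "user", "content": "Give a one-sentence summary of photosynthesis."}]
out = pipe(messages, max_new_tokens=64)
# For chat-style input, generated_text is the full message list; the last entry
# is the assistant reply.
print(out[0]["generated_text"][-1]["content"])
```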
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mohhtl/f17b5af1-2c50-41d2-8676-8b26e4397f48 | mohhtl | 2025-05-21T10:32:04Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"generated_from_trainer",
"dataset:test_data.json",
"base_model:Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B",
"base_model:adapter:Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B",
"region:us"
] | null | 2025-05-21T10:31:19Z | ---
library_name: peft
base_model: Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B
tags:
- generated_from_trainer
datasets:
- test_data.json
model-index:
- name: outputs/lora-out
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.9.2`
```yaml
base_model: Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B
# optionally might have model_type or tokenizer_type
model_type: LlamaForCausalLM
tokenizer_type: AutoTokenizer
# Automatically upload checkpoint and final model to HF
# hub_model_id: username/custom_model_name
load_in_8bit: false
load_in_4bit: false
datasets:
- path: test_data.json
type:
system_prompt: ""
system_format: "{system}"
field_system: null
field_instruction: "quiz"
field_input: "statements"
field_output: "solution_text"
format: null
no_input_format: null
field: null
dataset_prepared_path:
val_set_size: 0.00
output_dir: ./outputs/lora-out
sequence_len: 4096
sample_packing: true
pad_to_sequence_len: true
adapter: lora
lora_model_dir:
lora_r: 16
lora_alpha: 8
lora_dropout: 0.05
lora_target_linear: true
lora_modules_to_save:
- embed_tokens
- lm_head
gradient_accumulation_steps: 4
micro_batch_size: 2
num_epochs: 15
optimizer: adamw_bnb_8bit
lr_scheduler: constant
learning_rate: 0.0002
bf16: auto
tf32: false
gradient_checkpointing: true
resume_from_checkpoint:
logging_steps: 1
flash_attention: true
warmup_steps: 0
weight_decay: 0.0
special_tokens:
pad_token: <|end_of_text|>
```
</details><br>
# outputs/lora-out
This model is a fine-tuned version of [Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B](https://huggingface.co/Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B) on the test_data.json dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: constant
- num_epochs: 15.0
### Training results
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.4.1+cu124
- Datasets 3.5.1
- Tokenizers 0.21.1 |
liverileuck/zxcvzxcv | liverileuck | 2025-05-21T10:32:01Z | 0 | 0 | null | [
"license:bigscience-bloom-rail-1.0",
"region:us"
] | null | 2025-05-21T10:32:01Z | ---
license: bigscience-bloom-rail-1.0
---
|
Arijit20200204067/gemma3_4B_finetune | Arijit20200204067 | 2025-05-21T10:31:58Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"text-generation-inference",
"unsloth",
"gemma3",
"conversational",
"en",
"base_model:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-21T10:29:13Z | ---
base_model: unsloth/gemma-3-4b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** Arijit20200204067
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-3-4b-it-unsloth-bnb-4bit
This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
ellietang/saved_merged_ls-model-14B-full-CPT-v0.0.8-4bits-trained-0520 | ellietang | 2025-05-21T10:30:49Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-21T10:24:07Z | ---
base_model: unsloth/qwen2.5-coder-14b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** ellietang
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-coder-14b-instruct-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Kal1510/embedding-multinegative-llmware | Kal1510 | 2025-05-21T10:30:31Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"feature-extraction",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2025-05-21T10:29:50Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
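In the meantime, a minimal sketch for extracting sentence embeddings. The repo name suggests an embedding model trained with multiple-negatives ranking, but the mean-pooling choice below is an assumption:

```python
import torch
from transformers import AutoModel, AutoTokenizer

repo = "Kal1510/embedding-multinegative-llmware"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModel.from_pretrained(repo)

batch = tokenizer(["a query", "a passage"], padding=True, return_tensors="pt")
with torch.no_grad():
    hidden = model(**batch).last_hidden_state       # (batch, seq, dim)
mask = batch["attention_mask"].unsqueeze(-1)
embeddings = (hidden * mask).sum(1) / mask.sum(1)   # mean pooling over real tokens
```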
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
fitrilailyy/llm-assignment2.0 | fitrilailyy | 2025-05-21T10:30:12Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2025-05-21T10:27:39Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
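In the meantime, a minimal sketch, assuming the checkpoint loads with its stored 4-bit (bitsandbytes) quantization config, as the repo tags suggest:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "fitrilailyy/llm-assignment2.0"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")  # picks up the 4-bit config

inputs = tokenizer("Hello, my name is", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```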
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Metallic-HuH/distilhubert-finetuned-gtzan | Metallic-HuH | 2025-05-21T10:30:08Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"hubert",
"audio-classification",
"dataset:marsyas/gtzan",
"arxiv:1910.09700",
"base_model:ntu-spml/distilhubert",
"base_model:finetune:ntu-spml/distilhubert",
"endpoints_compatible",
"region:us"
] | audio-classification | 2025-05-21T08:30:02Z | ---
library_name: transformers
datasets:
- marsyas/gtzan
metrics:
- accuracy
base_model:
- ntu-spml/distilhubert
pipeline_tag: audio-classification
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
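In the meantime, a minimal sketch, assuming the standard `transformers` audio-classification pipeline (the card's tags indicate a DistilHuBERT classifier fine-tuned on GTZAN music genres):

```python
from transformers import pipeline

classifier = pipeline(
    "audio-classification",
    model="Metallic-HuH/distilhubert-finetuned-gtzan",
)
preds = classifier("song.wav")  # path to a local audio file; resampled automatically
print(preds[:3])                # top genre predictions with scores
```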
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
stmnk/qwen-media-sql | stmnk | 2025-05-21T10:30:05Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-21T10:30:00Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
herykdepuis4/xvzxcv | herykdepuis4 | 2025-05-21T10:29:51Z | 0 | 0 | null | [
"license:bigscience-bloom-rail-1.0",
"region:us"
] | null | 2025-05-21T10:29:51Z | ---
license: bigscience-bloom-rail-1.0
---
|
Serialtechlab/mms-tts-div-ft-spk01-f01 | Serialtechlab | 2025-05-21T10:27:50Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"vits",
"text-to-audio",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | text-to-audio | 2025-05-21T10:25:16Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Serialtechlab/mms-tts-div-finetuned-md-m01 | Serialtechlab | 2025-05-21T10:27:42Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"vits",
"text-to-audio",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | text-to-audio | 2025-05-21T10:23:34Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
PathSRL8/NSL_KDD_Undersampled_NIDS_1 | PathSRL8 | 2025-05-21T10:27:34Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-21T10:26:56Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Serialtechlab/mms-tts-div-finetuned-md-f03 | Serialtechlab | 2025-05-21T10:26:01Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"vits",
"text-to-audio",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | text-to-audio | 2025-05-21T10:22:15Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ellietang/saved_lora_ls-model-14B-full-CPT-v0.0.8-4bits-trained-0520 | ellietang | 2025-05-21T10:23:37Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-21T10:22:57Z | ---
base_model: unsloth/qwen2.5-coder-14b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** ellietang
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-coder-14b-instruct-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
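As a minimal loading sketch (assuming the standard Unsloth API and a CUDA GPU; the 4-bit flag matches the bnb-4bit base model):

```python
# Minimal sketch: load this LoRA-trained checkpoint with Unsloth.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="ellietang/saved_lora_ls-model-14B-full-CPT-v0.0.8-4bits-trained-0520",
    max_seq_length=2048,   # assumption: adjust to your context length
    load_in_4bit=True,     # matches the bnb-4bit base model
)
FastLanguageModel.for_inference(model)  # enable Unsloth's faster inference mode
```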
|
bkw1402/result | bkw1402 | 2025-05-21T10:20:55Z | 0 | 0 | null | [
"tensorboard",
"safetensors",
"whisper",
"hf-asr-leaderboard",
"korean",
"voice-phishing",
"generated_from_trainer",
"ko",
"dataset:kisa/low-quality-korean-phone-speech",
"base_model:openai/whisper-base",
"base_model:finetune:openai/whisper-base",
"license:apache-2.0",
"region:us"
] | null | 2025-05-21T08:23:30Z | ---
language:
- ko
license: apache-2.0
base_model: openai/whisper-base
tags:
- hf-asr-leaderboard
- whisper
- korean
- voice-phishing
- generated_from_trainer
datasets:
- kisa/low-quality-korean-phone-speech
model-index:
- name: K-Whisper
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# K-Whisper
This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on the 007.저음질 전화망 음성인식 데이터 (low-quality telephone-network speech recognition) dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4103
- Cer: 37.6586
## Model description
More information needed
## Intended uses & limitations
More information needed
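Pending details from the authors, a minimal transcription sketch using the 🤗 `pipeline` API (the audio path below is a placeholder):

```python
# Minimal sketch: transcribe Korean telephone speech with this checkpoint.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="bkw1402/result")
result = asr("call_recording.wav")  # placeholder path to a mono audio file
print(result["text"])
```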
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 100
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 1.8381 | 5.8824 | 100 | 1.4103 | 37.6586 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.6.0+cu126
- Datasets 3.6.0
- Tokenizers 0.19.1
|
alscheift/my_awesome_model | alscheift | 2025-05-21T10:20:53Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-05-21T09:35:14Z | ---
library_name: transformers
license: apache-2.0
base_model: distilbert/distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: my_awesome_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_model
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1996
- Accuracy: 0.9304
## Model description
More information needed
## Intended uses & limitations
More information needed
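Pending details from the authors, a minimal inference sketch (the label set is undocumented here; inspect `model.config.id2label` to interpret outputs):

```python
# Minimal sketch: run the fine-tuned classifier on a single sentence.
from transformers import pipeline

classifier = pipeline("text-classification", model="alscheift/my_awesome_model")
print(classifier("This movie was surprisingly good."))
```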
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2254 | 1.0 | 782 | 0.1922 | 0.9252 |
| 0.1874 | 2.0 | 1564 | 0.1996 | 0.9304 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
naisarg14/MLP-PanCanPred | naisarg14 | 2025-05-21T10:20:48Z | 0 | 0 | keras | [
"keras",
"cancer",
"MLP",
"multi-classification",
"license:mit",
"region:us"
] | null | 2025-02-19T10:58:28Z | ---
license: mit
metrics:
- accuracy
- f1
- precision
- recall
- roc_auc
tags:
- cancer
- MLP
- multi-classification
---
📢 [[GitHub Repo](https://github.com/naisarg14)] [[Research Paper (coming soon)]()]
# Serum-MiR-CanPred: An Artificial Intelligence-Driven Framework for Pan-Cancer Prediction Using a Minimal Set of Circulating miRNA Biomarkers
## Summary
This study presents Serum-MiR-CanPred, a machine learning framework that leverages serum microRNA (miRNA) expression data to non-invasively diagnose 13 different cancer types. Using a multilayer perceptron (MLP) model and SHAP for interpretability, the method achieved high accuracy (AUC 99.87%) and identified key discriminatory miRNAs, including hsa-miR-5100. Literature validation and molecular docking revealed that AC1MMYR2, a compound targeting the Dicer site, binds stably to pre-miR-5100, suggesting therapeutic potential. This integrative approach demonstrates the dual utility of circulating miRNAs as diagnostic biomarkers and therapeutic targets, offering a promising direction for AI-driven, non-invasive cancer diagnostics and drug discovery.
## Dataset
- Source: [GEO Database](https://www.ncbi.nlm.nih.gov/geo/)
- GEO Accession: GSE212211, GSE113740, GSE211692, GSE164174
- Preprocessing: To determine the presence of each miRNA, sample signals were compared against a threshold established from blank signals. After removing the top and bottom 5% of blank signal intensities, the threshold was calculated as the mean plus two standard deviations of the remaining values. For detected miRNAs, the mean of the filtered blank signals was subtracted from the sample intensities. Signals that were not detected were assigned a value of 0.1 on a log2 scale. Data normalization was performed using internal control miRNAs (hsa-miR-149-3p, hsa-miR-2861, hsa-miR-4463). Batch effects were adjusted using PyComBat, and the dataset was labelled and combined for further analysis.
## Training Details
- Number of Layers: 3
- Units: 512
- Batch Size: 256
- Epochs: 100
- Dropout Rate: 0.4
- Learning Rate: 0.00032506541805349084
- Hardware: NVIDIA GeForce RTX 3050
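Based on the hyperparameters above, a minimal Keras sketch of the architecture; the input dimension and the loss function are assumptions, not values taken from the study:

```python
# Minimal sketch of the described MLP: 3 hidden layers x 512 units,
# dropout 0.4, Adam with the reported learning rate, 13-way softmax output.
from tensorflow import keras

NUM_FEATURES = 2540  # assumption: set to the number of serum miRNA features
NUM_CLASSES = 13     # the 13 cancer/control classes reported above

model = keras.Sequential([
    keras.layers.Input(shape=(NUM_FEATURES,)),
    keras.layers.Dense(512, activation="relu"),
    keras.layers.Dropout(0.4),
    keras.layers.Dense(512, activation="relu"),
    keras.layers.Dropout(0.4),
    keras.layers.Dense(512, activation="relu"),
    keras.layers.Dropout(0.4),
    keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(
    optimizer=keras.optimizers.Adam(learning_rate=0.00032506541805349084),
    loss="sparse_categorical_crossentropy",  # assumption: integer class labels
    metrics=["accuracy"],
)
```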
## Performance
The model was validated on 4055 samples (20% of the dataset).
| Class | Precision | Recall | F1-Score | Support |
|-------|-----------|--------|----------|---------|
| BL | 0.89 | 0.85 | 0.87 | 85 |
| BR | 0.94 | 0.97 | 0.96 | 140 |
| BT | 0.80 | 0.84 | 0.82 | 94 |
| CR | 0.91 | 0.87 | 0.89 | 334 |
| ES | 0.89 | 0.89 | 0.89 | 128 |
| GA | 0.97 | 0.99 | 0.98 | 572 |
| HC | 0.95 | 0.95 | 0.95 | 215 |
| LU | 0.93 | 0.94 | 0.94 | 345 |
| NC | 0.99 | 0.99 | 0.99 | 1606 |
| OV | 0.84 | 0.87 | 0.86 | 85 |
| PA | 0.98 | 0.91 | 0.94 | 175 |
| PR | 0.94 | 0.98 | 0.96 | 211 |
| SA | 0.88 | 0.91 | 0.89 | 65 |
| | | | | |
| macro avg | 0.92 | 0.92 | 0.92 | 4055 |
| weighted avg | 0.96 | 0.96 | 0.96 | 4055 |
Overall Accuracy = 96%
## Citation
If using this model, please cite the accompanying research paper (citation to be added upon publication).
## License
MIT License |
Serialtechlab/mms-tts-div-finetuned-md-f01 | Serialtechlab | 2025-05-21T10:17:40Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"vits",
"text-to-audio",
"dv",
"dataset:alakxender/dv_syn_speech_md",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | text-to-audio | 2025-05-21T09:59:11Z | ---
library_name: transformers
datasets:
- alakxender/dv_syn_speech_md
language:
- dv
pipeline_tag: text-to-audio
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
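In the meantime, a minimal synthesis sketch assuming the standard 🤗 VITS text-to-audio interface (the Dhivehi input text is purely illustrative):

```python
# Minimal sketch: synthesize speech with this fine-tuned VITS checkpoint.
import torch
from transformers import VitsModel, AutoTokenizer

repo = "Serialtechlab/mms-tts-div-finetuned-md-f01"
model = VitsModel.from_pretrained(repo)
tokenizer = AutoTokenizer.from_pretrained(repo)

inputs = tokenizer("ހެލޯ ދުނިޔެ", return_tensors="pt")  # illustrative Dhivehi text
with torch.no_grad():
    waveform = model(**inputs).waveform  # audio at model.config.sampling_rate
```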
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
farleimeiyun/zxcvzxcv | farleimeiyun | 2025-05-21T10:14:17Z | 0 | 0 | null | [
"license:bigcode-openrail-m",
"region:us"
] | null | 2025-05-21T10:14:17Z | ---
license: bigcode-openrail-m
---
|
vmpsergio/536e09d0-3aeb-4925-90d3-3eab8b8192b7 | vmpsergio | 2025-05-21T10:08:30Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"mistral",
"text-generation",
"generated_from_trainer",
"axolotl",
"dpo",
"trl",
"conversational",
"arxiv:2305.18290",
"base_model:teknium/OpenHermes-2.5-Mistral-7B",
"base_model:quantized:teknium/OpenHermes-2.5-Mistral-7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2025-05-21T09:01:02Z | ---
base_model: teknium/OpenHermes-2.5-Mistral-7B
library_name: transformers
model_name: 536e09d0-3aeb-4925-90d3-3eab8b8192b7
tags:
- generated_from_trainer
- axolotl
- dpo
- trl
licence: license
---
# Model Card for 536e09d0-3aeb-4925-90d3-3eab8b8192b7
This model is a fine-tuned version of [teknium/OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="vmpsergio/536e09d0-3aeb-4925-90d3-3eab8b8192b7", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/dedok-yo/s56-28/runs/886uuvbt)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
### Framework versions
- TRL: 0.12.0.dev0
- Transformers: 4.46.0
- Pytorch: 2.5.0+cu124
- Datasets: 3.0.1
- Tokenizers: 0.20.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
jinliuxi/nlp_text_4b | jinliuxi | 2025-05-21T10:07:41Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen3",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-21T10:07:27Z | ---
base_model: unsloth/qwen3-4b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** jinliuxi
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen3-4b-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
ryota-komatsu/Llama-for-SpeechLM-Instruct | ryota-komatsu | 2025-05-21T10:07:02Z | 15 | 0 | transformers | [
"transformers",
"safetensors",
"llama_for_speech_lm",
"en",
"dataset:ryota-komatsu/spoken-alpaca",
"dataset:openslr/librispeech_asr",
"base_model:meta-llama/Llama-3.2-1B-Instruct",
"base_model:finetune:meta-llama/Llama-3.2-1B-Instruct",
"license:llama3.2",
"endpoints_compatible",
"region:us"
] | null | 2025-05-15T15:49:43Z | ---
library_name: transformers
license: llama3.2
datasets:
- ryota-komatsu/spoken-alpaca
- openslr/librispeech_asr
language:
- en
base_model:
- ryota-komatsu/Llama-for-SpeechLM
- meta-llama/Llama-3.2-1B-Instruct
- openai/whisper-small.en
---
# LlamaForSpeechLM-Instruct - Built with Llama
<!-- Provide a quick summary of what the model is/does. -->
## Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** [repo](https://github.com/ryota-komatsu/slp2025) |
xw17/Qwen2-1.5B-Instruct_finetuned_2_optimized1_task_grouping_off_FT | xw17 | 2025-05-21T10:06:21Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-21T10:04:34Z | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
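In the meantime, a minimal chat sketch using the 🤗 text-generation `pipeline` (the prompt is illustrative):

```python
# Minimal sketch: chat with the SFT checkpoint via the text-generation pipeline.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="xw17/Qwen2-1.5B-Instruct_finetuned_2_optimized1_task_grouping_off_FT",
)
messages = [{"role": "user", "content": "Explain overfitting in one sentence."}]
print(generator(messages, max_new_tokens=64, return_full_text=False)[0]["generated_text"])
```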
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
shubh7/T5-Small-FineTuned-TexttoSql | shubh7 | 2025-05-21T10:04:24Z | 0 | 0 | peft | [
"peft",
"safetensors",
"t5",
"text-to-sql",
"lora",
"en",
"dataset:your_dataset_here",
"license:apache-2.0",
"region:us"
] | null | 2025-05-21T06:39:31Z | ---
language: en
license: apache-2.0
tags:
- t5
- text-to-sql
- lora
- peft
datasets:
- your_dataset_here
metrics:
- bleu
- rouge
---
# T5-Small Fine-Tuned for Text-to-SQL with LoRA
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on text-to-SQL data using LoRA (Low-Rank Adaptation).
## Model description
The model was trained using PEFT's LoRA implementation with the following configuration:
- rank (r): 8
- lora_alpha: 32
- Target modules: query and value projection matrices in attention
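This corresponds roughly to the following PEFT setup; the module names `"q"` and `"v"` (T5's attention query/value projections) are an assumption:

```python
# Sketch of the LoRA configuration described above, applied to t5-small.
from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForSeq2SeqLM

base = AutoModelForSeq2SeqLM.from_pretrained("t5-small")
lora_config = LoraConfig(
    r=8,
    lora_alpha=32,
    target_modules=["q", "v"],  # assumption: T5 query/value projections
    task_type=TaskType.SEQ_2_SEQ_LM,
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()
```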
## Intended uses & limitations
This model is intended to be used for converting natural language queries to SQL statements.
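A minimal inference sketch, assuming the adapter loads on top of `t5-small`; the prompt prefix is an assumption rather than a documented contract:

```python
# Sketch: load the adapter and generate SQL for a natural-language question.
from peft import PeftModel
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("t5-small")
base = AutoModelForSeq2SeqLM.from_pretrained("t5-small")
model = PeftModel.from_pretrained(base, "shubh7/T5-Small-FineTuned-TexttoSql")

question = "translate to SQL: How many employees joined after 2020?"  # prefix is an assumption
inputs = tokenizer(question, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```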
## Training procedure
The model was trained using the following hyperparameters:
- Learning rate: 1e-3
- Batch size: 8
- Training steps: 10,000
- Optimizer: AdamW
- LR scheduler: constant
## Limitations and bias
This model inherits the limitations and biases from the original T5-small model.
|
ibrahimbukhariLingua/qwen2.5-1.5b-en-wikipedia-finance-100-v1 | ibrahimbukhariLingua | 2025-05-21T10:03:38Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:Qwen/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-1.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-21T10:03:32Z | ---
base_model: Qwen/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: qwen2.5-1.5b-en-wikipedia-finance-100-v1
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for qwen2.5-1.5b-en-wikipedia-finance-100-v1
This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ibrahimbukhariLingua/qwen2.5-1.5b-en-wikipedia-finance-100-v1", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.17.0
- Transformers: 4.51.3
- Pytorch: 2.6.0
- Datasets: 3.5.1
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
liserman/manifestoberta_domains_easyread_german_context_2025-5-20 | liserman | 2025-05-21T10:02:59Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"region:us"
] | text-classification | 2025-05-21T10:01:45Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
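In the meantime, a minimal classification sketch; because the repo ships custom code, `trust_remote_code=True` is assumed to be required, and the exact context-handling interface may differ:

```python
# Minimal sketch: classify a German sentence into manifesto domains.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo = "liserman/manifestoberta_domains_easyread_german_context_2025-5-20"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo, trust_remote_code=True)

inputs = tokenizer("Wir senken die Steuern für Familien.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[int(logits.argmax(dim=-1))])
```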
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
rinrikatoki/dorna-finetuned-diabetes | rinrikatoki | 2025-05-21T10:02:05Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:PartAI/Dorna-Llama3-8B-Instruct",
"base_model:adapter:PartAI/Dorna-Llama3-8B-Instruct",
"region:us"
] | null | 2025-05-21T10:01:57Z | ---
base_model: PartAI/Dorna-Llama3-8B-Instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
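Pending official instructions, here is a minimal sketch: per the metadata above, this repo appears to host a PEFT adapter for `PartAI/Dorna-Llama3-8B-Instruct`, so the adapter is loaded on top of that base model (the prompt and generation settings are illustrative assumptions):

```python
# Minimal sketch (assumption: this repo is a PEFT adapter for the base model below).
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "PartAI/Dorna-Llama3-8B-Instruct"          # from the card's base_model field
adapter_id = "rinrikatoki/dorna-finetuned-diabetes"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)   # attach the fine-tuned adapter

prompt = "..."  # illustrative; the intended prompting format is not documented
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```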
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2 |
xw17/Qwen2-1.5B-Instruct_finetuned_1_optimized1_task_grouping_off_FT | xw17 | 2025-05-21T10:01:47Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-21T09:59:51Z | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
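Pending details from the authors, here is a minimal sketch, assuming a chat-style causal LM served through the transformers text-generation pipeline on a recent transformers version with chat-template support (the prompt is illustrative):

```python
# Minimal sketch (assumptions: chat-style causal LM; recent transformers with
# chat-template support; the prompt is illustrative).
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="xw17/Qwen2-1.5B-Instruct_finetuned_1_optimized1_task_grouping_off_FT",
)
messages = [{"role": "user", "content": "Briefly explain supervised fine-tuning."}]
out = pipe(messages, max_new_tokens=128)
print(out[0]["generated_text"][-1]["content"])  # last turn is the assistant reply
```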
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
PaceKW/canine-c-multilabel-indonesian-hate-speech-new-label | PaceKW | 2025-05-21T10:00:28Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"canine",
"text-classification",
"generated_from_trainer",
"base_model:google/canine-c",
"base_model:finetune:google/canine-c",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-05-21T09:57:32Z | ---
library_name: transformers
license: apache-2.0
base_model: google/canine-c
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: canine-c-multilabel-indonesian-hate-speech-new-label
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# canine-c-multilabel-indonesian-hate-speech-new-label
This model is a fine-tuned version of [google/canine-c](https://huggingface.co/google/canine-c) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6909
- F1: 0.6983
- Roc Auc: 0.4999
- Accuracy: 0.0606
## Model description
More information needed
## Intended uses & limitations
More information needed
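In the absence of documented usage, here is a minimal multilabel inference sketch; it assumes the checkpoint loads as a standard sequence-classification head and that a sigmoid threshold of 0.5 is reasonable (the example sentence is illustrative):

```python
# Minimal multilabel inference sketch (assumptions: standard sequence-classification
# head; sigmoid + 0.5 threshold; the example sentence is illustrative).
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo_id = "PaceKW/canine-c-multilabel-indonesian-hate-speech-new-label"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)

inputs = tokenizer("Ini contoh kalimat.", return_tensors="pt")
with torch.no_grad():
    probs = torch.sigmoid(model(**inputs).logits)[0]
predicted = [model.config.id2label[i] for i, p in enumerate(probs.tolist()) if p > 0.5]
print(predicted)  # active labels above the threshold
```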
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
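For reference, a hedged sketch of how these values map onto `TrainingArguments`; the output directory and any argument not listed above are assumptions:

```python
# Hedged sketch: the listed hyperparameters expressed as TrainingArguments.
# output_dir and anything not listed in the card are assumptions.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="canine-c-multilabel-indonesian-hate-speech-new-label",  # assumed
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",         # AdamW (torch) with default betas/epsilon
    lr_scheduler_type="linear",
    num_train_epochs=10,
)
```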
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| No log | 1.0 | 151 | 0.6909 | 0.6983 | 0.4999 | 0.0606 |
| No log | 2.0 | 302 | 0.6909 | 0.6972 | 0.4994 | 0.0581 |
| No log | 3.0 | 453 | 0.6916 | 0.6566 | 0.5036 | 0.0274 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.7.0+cu128
- Datasets 3.6.0
- Tokenizers 0.21.1
|