modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
---|---|---|---|---|---|---|---|---|---|
tomhaishiwo/conflicbert_binary_1 | tomhaishiwo | "2023-09-11T21:36:47Z" | 112 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-09-11T12:26:49Z" | ---
language:
- en
---
This is a fine-tuned model based on snowood1/ConfliBERT-scr-uncased. The dataset used is 20news, with 8,800 binary-labeled training examples.
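A minimal usage sketch, assuming the standard `transformers` text-classification pipeline (the example sentence is illustrative, and the binary label names come from the model's config):
```python
from transformers import pipeline

# Load the fine-tuned binary classifier from the Hub.
classifier = pipeline("text-classification", model="tomhaishiwo/conflicbert_binary_1")

# Returns a label (as named in the model config, e.g. LABEL_0/LABEL_1) and a score.
print(classifier("Protesters clashed with security forces in the capital on Sunday."))
```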
Please refer to the authors' original paper and repository: https://github.com/eventdata/ConfliBERT |
inniok/xlm-roberta-base-finetuned-panx-it | inniok | "2024-01-26T06:33:59Z" | 90 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2024-01-26T06:32:11Z" | ---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-it
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
config: PAN-X.it
split: validation
args: PAN-X.it
metrics:
- name: F1
type: f1
value: 0.829489291598023
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-it
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2537
- F1: 0.8295
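As a token-classification (NER) fine-tune on PAN-X.it, it can be queried with the standard `transformers` pipeline; a minimal sketch, with an illustrative Italian example sentence:
```python
from transformers import pipeline

# aggregation_strategy="simple" merges word pieces into whole entity spans.
ner = pipeline(
    "token-classification",
    model="inniok/xlm-roberta-base-finetuned-panx-it",
    aggregation_strategy="simple",
)
print(ner("Luca lavora per la FIAT e abita a Torino."))
```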
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.7055 | 1.0 | 70 | 0.3197 | 0.7593 |
| 0.292 | 2.0 | 140 | 0.2636 | 0.8054 |
| 0.1819 | 3.0 | 210 | 0.2537 | 0.8295 |
### Framework versions
- Transformers 4.37.1
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
peler1nl1kelt0s/github-issues-classification-final | peler1nl1kelt0s | "2024-09-08T20:13:27Z" | 108 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-09-08T14:38:47Z" | ---
base_model: bert-base-uncased
library_name: transformers
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
## Model description
This model is fine-tuned for classifying GitHub issues into four categories: New Feature, Improvement, Bug, and Task. The base model is bert-base-uncased, and it has been trained on an open-source dataset of GitHub issues containing titles and descriptions. Given an issue's title and description, the model predicts its type; a usage sketch follows the details below.
### Fine-Tuning Details
- Base model: bert-base-uncased
- Fine-tuning dataset: GitHub issues, with labels mapped to four categories:
  - New Feature
  - Improvement
  - Bug
  - Task
- Training framework: Hugging Face Transformers, PyTorch
- Training setup: fine-tuned for a few epochs with a learning rate of 6e-5; see the logged hyperparameters below for batch size and scheduler details.
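A minimal usage sketch, assuming the standard `transformers` text-classification pipeline (the example issue title is illustrative; the id-to-category mapping lives in the model's config):
```python
from transformers import pipeline

# Classifies a GitHub issue into New Feature, Improvement, Bug, or Task.
clf = pipeline("text-classification",
               model="peler1nl1kelt0s/github-issues-classification-final")
print(clf("Crash when opening the settings page on Android 14"))
```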
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 4
### Training results
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1
|
NexesQuants/Llama_3.3_70b_FallenMare-iMat-CQ-GGUF | NexesQuants | "2025-04-10T18:14:04Z" | 20 | 0 | null | [
"gguf",
"base_model:NexesMess/Llama_3.3_70b_FallenMare",
"base_model:quantized:NexesMess/Llama_3.3_70b_FallenMare",
"license:llama3.3",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-04-05T12:57:22Z" | ---
license: llama3.3
base_model:
- NexesMess/Llama_3.3_70b_FallenMare
--- |
kahua-ml/peft-model-3-01 | kahua-ml | "2025-03-01T18:25:45Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2025-03-01T18:25:43Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
huggingtweets/cooperquinn_wy | huggingtweets | "2021-05-21T23:28:57Z" | 6 | 0 | transformers | [
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2022-03-02T23:29:05Z" | ---
language: en
thumbnail: https://www.huggingtweets.com/cooperquinn_wy/1617467984667/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/425749544886755329/_1EJmE-8_400x400.png')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Cooper Quinn 🤖 AI Bot </div>
<div style="font-size: 15px">@cooperquinn_wy bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@cooperquinn_wy's tweets](https://twitter.com/cooperquinn_wy).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3242 |
| Retweets | 452 |
| Short tweets | 564 |
| Tweets kept | 2226 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/4kx01uhm/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @cooperquinn_wy's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3vg5bxn2) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3vg5bxn2/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/cooperquinn_wy')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
JuudasMooses/detr | JuudasMooses | "2024-04-25T20:59:00Z" | 191 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"detr",
"object-detection",
"generated_from_trainer",
"base_model:facebook/detr-resnet-50",
"base_model:finetune:facebook/detr-resnet-50",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | object-detection | "2024-04-24T19:11:59Z" | ---
license: apache-2.0
base_model: facebook/detr-resnet-50
tags:
- generated_from_trainer
model-index:
- name: detr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 5.6795
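A minimal inference sketch, assuming the standard `transformers` object-detection pipeline (the image path is a placeholder; DETR checkpoints typically also require `timm` for the ResNet backbone):
```python
from transformers import pipeline

# The pipeline accepts a local path, a URL, or a PIL image.
detector = pipeline("object-detection", model="JuudasMooses/detr")
print(detector("path/to/your_image.jpg"))  # placeholder path
```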
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.9973 | 0.3195 | 100 | 3.0595 |
| 2.5938 | 0.6390 | 200 | 5.8527 |
| 2.0334 | 0.9585 | 300 | 5.6795 |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
|
chchen/Qwen2.5-7B-Instruct-PsyCourse-fold10 | chchen | "2025-01-31T09:02:52Z" | 5 | 0 | peft | [
"peft",
"safetensors",
"llama-factory",
"lora",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"region:us"
] | null | "2025-01-30T22:53:26Z" | ---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- lora
- generated_from_trainer
model-index:
- name: Qwen2.5-7B-Instruct-PsyCourse-fold10
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Qwen2.5-7B-Instruct-PsyCourse-fold10
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the course-train-fold1 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0316
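Since this repository holds a PEFT (LoRA) adapter rather than full model weights, it has to be attached to the base model; a minimal loading sketch (`device_map="auto"` assumes `accelerate` is installed):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model first, then attach the LoRA adapter on top of it.
base = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-7B-Instruct", torch_dtype="auto", device_map="auto"
)
model = PeftModel.from_pretrained(base, "chchen/Qwen2.5-7B-Instruct-PsyCourse-fold10")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-7B-Instruct")
```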
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.8737 | 0.0770 | 50 | 0.6946 |
| 0.1557 | 0.1539 | 100 | 0.1078 |
| 0.0875 | 0.2309 | 150 | 0.0731 |
| 0.0735 | 0.3078 | 200 | 0.0561 |
| 0.0547 | 0.3848 | 250 | 0.0530 |
| 0.052 | 0.4617 | 300 | 0.0499 |
| 0.047 | 0.5387 | 350 | 0.0469 |
| 0.0618 | 0.6156 | 400 | 0.0442 |
| 0.0357 | 0.6926 | 450 | 0.0448 |
| 0.0314 | 0.7695 | 500 | 0.0402 |
| 0.0476 | 0.8465 | 550 | 0.0388 |
| 0.0367 | 0.9234 | 600 | 0.0375 |
| 0.031 | 1.0004 | 650 | 0.0365 |
| 0.0368 | 1.0773 | 700 | 0.0376 |
| 0.0299 | 1.1543 | 750 | 0.0356 |
| 0.0296 | 1.2312 | 800 | 0.0348 |
| 0.0345 | 1.3082 | 850 | 0.0345 |
| 0.0203 | 1.3851 | 900 | 0.0336 |
| 0.0406 | 1.4621 | 950 | 0.0341 |
| 0.0333 | 1.5391 | 1000 | 0.0332 |
| 0.0327 | 1.6160 | 1050 | 0.0328 |
| 0.0329 | 1.6930 | 1100 | 0.0344 |
| 0.021 | 1.7699 | 1150 | 0.0330 |
| 0.021 | 1.8469 | 1200 | 0.0348 |
| 0.0293 | 1.9238 | 1250 | 0.0337 |
| 0.0229 | 2.0008 | 1300 | 0.0316 |
| 0.0163 | 2.0777 | 1350 | 0.0331 |
| 0.0355 | 2.1547 | 1400 | 0.0345 |
| 0.0129 | 2.2316 | 1450 | 0.0364 |
| 0.0188 | 2.3086 | 1500 | 0.0345 |
| 0.0158 | 2.3855 | 1550 | 0.0369 |
| 0.0158 | 2.4625 | 1600 | 0.0337 |
| 0.0219 | 2.5394 | 1650 | 0.0327 |
| 0.0171 | 2.6164 | 1700 | 0.0321 |
| 0.0266 | 2.6933 | 1750 | 0.0318 |
| 0.0244 | 2.7703 | 1800 | 0.0336 |
| 0.0231 | 2.8472 | 1850 | 0.0317 |
| 0.0186 | 2.9242 | 1900 | 0.0319 |
| 0.0296 | 3.0012 | 1950 | 0.0318 |
| 0.0102 | 3.0781 | 2000 | 0.0352 |
| 0.0088 | 3.1551 | 2050 | 0.0395 |
| 0.0099 | 3.2320 | 2100 | 0.0376 |
| 0.0088 | 3.3090 | 2150 | 0.0391 |
| 0.0138 | 3.3859 | 2200 | 0.0379 |
| 0.008 | 3.4629 | 2250 | 0.0388 |
| 0.0112 | 3.5398 | 2300 | 0.0395 |
| 0.0045 | 3.6168 | 2350 | 0.0386 |
| 0.0127 | 3.6937 | 2400 | 0.0393 |
| 0.0074 | 3.7707 | 2450 | 0.0397 |
| 0.0102 | 3.8476 | 2500 | 0.0399 |
| 0.0105 | 3.9246 | 2550 | 0.0410 |
| 0.0085 | 4.0015 | 2600 | 0.0412 |
| 0.002 | 4.0785 | 2650 | 0.0426 |
| 0.0051 | 4.1554 | 2700 | 0.0453 |
| 0.0024 | 4.2324 | 2750 | 0.0468 |
| 0.0022 | 4.3093 | 2800 | 0.0478 |
| 0.0031 | 4.3863 | 2850 | 0.0489 |
| 0.0042 | 4.4633 | 2900 | 0.0493 |
| 0.0017 | 4.5402 | 2950 | 0.0495 |
| 0.0025 | 4.6172 | 3000 | 0.0499 |
| 0.0025 | 4.6941 | 3050 | 0.0499 |
| 0.0022 | 4.7711 | 3100 | 0.0500 |
| 0.0048 | 4.8480 | 3150 | 0.0500 |
| 0.002 | 4.9250 | 3200 | 0.0501 |
### Framework versions
- PEFT 0.12.0
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3 |
kk-aivio/b876a276-aa37-4c40-9edf-580e5c02dd5f | kk-aivio | "2025-02-15T16:48:04Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/mistral-7b-v0.2",
"base_model:adapter:unsloth/mistral-7b-v0.2",
"license:apache-2.0",
"region:us"
] | null | "2025-02-15T16:20:47Z" | ---
library_name: peft
license: apache-2.0
base_model: unsloth/mistral-7b-v0.2
tags:
- axolotl
- generated_from_trainer
model-index:
- name: b876a276-aa37-4c40-9edf-580e5c02dd5f
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# b876a276-aa37-4c40-9edf-580e5c02dd5f
This model is a fine-tuned version of [unsloth/mistral-7b-v0.2](https://huggingface.co/unsloth/mistral-7b-v0.2) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0194
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
TheBloke/Augmental-13B-v1.50_A-AWQ | TheBloke | "2023-11-09T18:16:31Z" | 9 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"base_model:Heralax/Augmental-13b-v1.50_A",
"base_model:quantized:Heralax/Augmental-13b-v1.50_A",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"4-bit",
"awq",
"region:us"
] | text-generation | "2023-10-29T12:20:34Z" | ---
base_model: Heralax/Augmental-13b-v1.50_A
inference: false
license: llama2
model_creator: Evan Armstrong
model_name: Augmental 13B v1.50A
model_type: llama
prompt_template: '## {{{{charname}}}}:
- You''re "{{{{charname}}}}" in this never-ending roleplay with "{{{{user}}}}".
### Input:
{prompt}
### Response:
(OOC) Understood. I will take this info into account for the roleplay. (end OOC)
### New Roleplay:
### Instruction:
#### {{{{char}}}}:
whatever the char says, this is the chat history
#### {{{{user}}}}:
whatever the user says, this is the chat history
... repeated some number of times ...
### Response (2 paragraphs, engaging, natural, authentic, descriptive, creative):
#### {{{{char}}}}:
'
quantized_by: TheBloke
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Augmental 13B v1.50A - AWQ
- Model creator: [Evan Armstrong](https://huggingface.co/Heralax)
- Original model: [Augmental 13B v1.50A](https://huggingface.co/Heralax/Augmental-13b-v1.50_A)
<!-- description start -->
## Description
This repo contains AWQ model files for [Evan Armstrong's Augmental 13B v1.50A](https://huggingface.co/Heralax/Augmental-13b-v1.50_A).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
### About AWQ
AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality compared to the most commonly used GPTQ settings.
It is supported by:
- [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ
- [vLLM](https://github.com/vllm-project/vllm) - Llama and Mistral models only
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Augmental-13B-v1.50_A-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Augmental-13B-v1.50_A-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Augmental-13B-v1.50_A-GGUF)
* [Evan Armstrong's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Heralax/Augmental-13b-v1.50_A)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: SillyTavern
```
## {{{{charname}}}}:
- You're "{{{{charname}}}}" in this never-ending roleplay with "{{{{user}}}}".
### Input:
{prompt}
### Response:
(OOC) Understood. I will take this info into account for the roleplay. (end OOC)
### New Roleplay:
### Instruction:
#### {{{{char}}}}:
whatever the char says, this is the chat history
#### {{{{user}}}}:
whatever the user says, this is the chat history
... repeated some number of times ...
### Response (2 paragraphs, engaging, natural, authentic, descriptive, creative):
#### {{{{char}}}}:
```
<!-- prompt-template end -->
<!-- README_AWQ.md-provided-files start -->
## Provided files, and AWQ parameters
For my first release of AWQ models, I am releasing 128g models only. I will consider adding 32g as well if there is interest, and once I have done perplexity and evaluation comparisons, but at this time 32g models are still not fully tested with AutoAWQ and vLLM.
Models are released as sharded safetensors files.
| Branch | Bits | GS | AWQ Dataset | Seq Len | Size |
| ------ | ---- | -- | ----------- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/Augmental-13B-v1.50_A-AWQ/tree/main) | 4 | 128 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 7.25 GB
<!-- README_AWQ.md-provided-files end -->
<!-- README_AWQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/Augmental-13B-v1.50_A-AWQ`.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `Augmental-13B-v1.50_A-AWQ`
7. Select **Loader: AutoAWQ**.
8. Click Load, and the model will load and is now ready for use.
9. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
10. Once you're ready, click the **Text Generation** tab and enter a prompt to get started!
<!-- README_AWQ.md-text-generation-webui end -->
<!-- README_AWQ.md-use-from-vllm start -->
## Multi-user inference server: vLLM
Documentation on installing and using vLLM [can be found here](https://vllm.readthedocs.io/en/latest/).
- Please ensure you are using vLLM version 0.2 or later.
- When using vLLM as a server, pass the `--quantization awq` parameter.
For example:
```shell
python3 -m vllm.entrypoints.api_server --model TheBloke/Augmental-13B-v1.50_A-AWQ --quantization awq
```
- When using vLLM from Python code, again set `quantization=awq`.
For example:
```python
from vllm import LLM, SamplingParams
prompts = [
"Tell me about AI",
"Write a story about llamas",
"What is 291 - 150?",
"How much wood would a woodchuck chuck if a woodchuck could chuck wood?",
]
prompt_template=f'''## {{{{charname}}}}:
- You're "{{{{charname}}}}" in this never-ending roleplay with "{{{{user}}}}".
### Input:
{prompt}
### Response:
(OOC) Understood. I will take this info into account for the roleplay. (end OOC)
### New Roleplay:
### Instruction:
#### {{{{char}}}}:
whatever the char says, this is the chat history
#### {{{{user}}}}:
whatever the user says, this is the chat history
... repeated some number of times ...
### Response (2 paragraphs, engaging, natural, authentic, descriptive, creative):
#### {{{{char}}}}:
'''
prompts = [prompt_template.format(prompt=prompt) for prompt in prompts]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)
llm = LLM(model="TheBloke/Augmental-13B-v1.50_A-AWQ", quantization="awq", dtype="auto")
outputs = llm.generate(prompts, sampling_params)
# Print the outputs.
for output in outputs:
prompt = output.prompt
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```
<!-- README_AWQ.md-use-from-vllm start -->
<!-- README_AWQ.md-use-from-tgi start -->
## Multi-user inference server: Hugging Face Text Generation Inference (TGI)
Use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`
Example Docker parameters:
```shell
--model-id TheBloke/Augmental-13B-v1.50_A-AWQ --port 3000 --quantize awq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```
Example Python code for interfacing with TGI (requires [huggingface-hub](https://github.com/huggingface/huggingface_hub) 0.17.0 or later):
```shell
pip3 install huggingface-hub
```
```python
from huggingface_hub import InferenceClient
endpoint_url = "https://your-endpoint-url-here"
prompt = "Tell me about AI"
prompt_template=f'''## {{{{charname}}}}:
- You're "{{{{charname}}}}" in this never-ending roleplay with "{{{{user}}}}".
### Input:
{prompt}
### Response:
(OOC) Understood. I will take this info into account for the roleplay. (end OOC)
### New Roleplay:
### Instruction:
#### {{{{char}}}}:
whatever the char says, this is the chat history
#### {{{{user}}}}:
whatever the user says, this is the chat history
... repeated some number of times ...
### Response (2 paragraphs, engaging, natural, authentic, descriptive, creative):
#### {{{{char}}}}:
'''
client = InferenceClient(endpoint_url)
response = client.text_generation(prompt,
                                  max_new_tokens=128,
                                  do_sample=True,
                                  temperature=0.7,
                                  top_p=0.95,
                                  top_k=40,
                                  repetition_penalty=1.1)
print(f"Model output: ", response)
```
<!-- README_AWQ.md-use-from-tgi end -->
<!-- README_AWQ.md-use-from-python start -->
## Inference from Python code using AutoAWQ
### Install the AutoAWQ package
Requires: [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) 0.1.1 or later.
```shell
pip3 install autoawq
```
If you have problems installing [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) using the pre-built wheels, install it from source instead:
```shell
pip3 uninstall -y autoawq
git clone https://github.com/casper-hansen/AutoAWQ
cd AutoAWQ
pip3 install .
```
### AutoAWQ example code
```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer
model_name_or_path = "TheBloke/Augmental-13B-v1.50_A-AWQ"
# Load tokenizer
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, trust_remote_code=False)
# Load model
model = AutoAWQForCausalLM.from_quantized(model_name_or_path, fuse_layers=True,
                                          trust_remote_code=False, safetensors=True)
prompt = "Tell me about AI"
prompt_template=f'''## {{{{charname}}}}:
- You're "{{{{charname}}}}" in this never-ending roleplay with "{{{{user}}}}".
### Input:
{prompt}
### Response:
(OOC) Understood. I will take this info into account for the roleplay. (end OOC)
### New Roleplay:
### Instruction:
#### {{{{char}}}}:
whatever the char says, this is the chat history
#### {{{{user}}}}:
whatever the user says, this is the chat history
... repeated some number of times ...
### Response (2 paragraphs, engaging, natural, authentic, descriptive, creative):
#### {{{{char}}}}:
'''
print("*** Running model.generate:")
token_input = tokenizer(
    prompt_template,
    return_tensors='pt'
).input_ids.cuda()
# Generate output
generation_output = model.generate(
    token_input,
    do_sample=True,
    temperature=0.7,
    top_p=0.95,
    top_k=40,
    max_new_tokens=512
)
# Get the tokens from the output, decode them, print them
token_output = generation_output[0]
text_output = tokenizer.decode(token_output)
print("LLM output: ", text_output)
"""
# Inference should be possible with transformers pipeline as well in future
# But currently this is not yet supported by AutoAWQ (correct as of September 25th 2023)
from transformers import pipeline
print("*** Pipeline:")
pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.7,
    top_p=0.95,
    top_k=40,
    repetition_penalty=1.1
)
print(pipe(prompt_template)[0]['generated_text'])
"""
```
<!-- README_AWQ.md-use-from-python end -->
<!-- README_AWQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with:
- [text-generation-webui](https://github.com/oobabooga/text-generation-webui) using `Loader: AutoAWQ`.
- [vLLM](https://github.com/vllm-project/vllm) version 0.2.0 and later.
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) version 1.1.0 and later.
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) version 0.1.1 and later.
<!-- README_AWQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Brandon Frisco, LangChain4j, Spiking Neurons AB, transmissions 11, Joseph William Delisle, Nitin Borwankar, Willem Michiel, Michael Dempsey, vamX, Jeffrey Morgan, zynix, jjj, Omer Bin Jawed, Sean Connelly, jinyuan sun, Jeromy Smith, Shadi, Pawan Osman, Chadd, Elijah Stavena, Illia Dulskyi, Sebastain Graf, Stephen Murray, terasurfer, Edmond Seymore, Celu Ramasamy, Mandus, Alex, biorpg, Ajan Kanaga, Clay Pascal, Raven Klaugh, 阿明, K, ya boyyy, usrbinkat, Alicia Loh, John Villwock, ReadyPlayerEmma, Chris Smitley, Cap'n Zoog, fincy, GodLy, S_X, sidney chen, Cory Kujawski, OG, Mano Prime, AzureBlack, Pieter, Kalila, Spencer Kim, Tom X Nguyen, Stanislav Ovsiannikov, Michael Levine, Andrey, Trailburnt, Vadim, Enrico Ros, Talal Aujan, Brandon Phillips, Jack West, Eugene Pentland, Michael Davis, Will Dee, webtim, Jonathan Leane, Alps Aficionado, Rooh Singh, Tiffany J. Kim, theTransient, Luke @flexchar, Elle, Caitlyn Gatomon, Ari Malik, subjectnull, Johann-Peter Hartmann, Trenton Dambrowitz, Imad Khwaja, Asp the Wyvern, Emad Mostaque, Rainer Wilmers, Alexandros Triantafyllidis, Nicholas, Pedro Madruga, SuperWojo, Harry Royden McLaughlin, James Bentley, Olakabola, David Ziegler, Ai Maven, Jeff Scroggin, Nikolai Manek, Deo Leter, Matthew Berman, Fen Risland, Ken Nordquist, Manuel Alberto Morcote, Luke Pendergrass, TL, Fred von Graf, Randy H, Dan Guido, NimbleBox.ai, Vitor Caleffi, Gabriel Tamborski, knownsqashed, Lone Striker, Erik Bjäreholt, John Detwiler, Leonard Tan, Iucharbius
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: Evan Armstrong's Augmental 13B v1.50A
# Version 1.50 A -- coherency fixes! The model should be good now. Thanks to all the people who tested out v1.0!
**What this update is: after some early feedback, and some internal testing that confirmed it, I discovered that the first version of Augmental-13b was undercooked and had hyperparameter issues. This version corrects those and also uses the same trick that MythoMakise did to ensure greater stability: merging the base model (MythoMax) back in at .33% weighting. The result is that this model stays more sane and in character while also still having its own unique flair.**
So why 1.50 version A and version B? Version B is the original Augmental-13b with MythoMax merged back into it at .33% weighting; version A is a new version of Augmental trained with different hyperparameters, meant to fix the undertraining issue -- which then had MythoMax merged back into it at .33% weighting. The difference? From my testing, Augmental-13b-v1.50 B is a more distinct model from MythoMax, while Augmental-13b-v1.50A is closer to the base model (this makes sense, as the difference between the two is a lower LoRA rank for version A, which means fewer parameters were trained and less-complex new patterns were learned by the model).
**I'm releasing both since I don't know which one people will prefer. Try both and decide for yourself! Either way the main issues with the original should be fixed now.**
Version B link: https://huggingface.co/Heralax/Augmental-13b-v1.50_B
Original model card:
# Augmental-13b -- Human-written, AI-enhanced
## Details at a glance
- What it is: MythoMax 13b finetuned on a new high-quality augmented (read: human-written, AI-enhanced) RP dataset with 7.85k+ examples. Trained on multiple different characters with a wide range of personalities (from Tsunderes to catgirls).
- Prompt format: SillyTavern.
- What sets it apart: The "augmented data" approach that MythoMakise took has been generalized beyond one character, refined to be cheaper, improved to have more diversity of writing, and scaled up by a factor of 8. Importantly, an additional GPT-4 pass was done on the dataset, where it chose specific lines to turn into much longer and more descriptive ones. As a result, this model excels at longer responses.
- Model quality as per my own ad-hoc testing: really good
- A 70b version might be on the way soon.
- Ko-fi link (yes this is a very important "detail at a glance" lol): [https://ko-fi.com/heralax](https://ko-fi.com/heralax)
- Substack link [here](https://promptingweekly.substack.com/p/human-sourced-ai-augmented-a-promising) (also *highly* important, but no joke I actually wrote about the data generation process for the predecessor of this model on there, so it's kinda relevant. Kinda.)
## Long-form description and essay
The great issue with model training is often the dataset. Model creators can only do so much filtering of the likes of Bluemoon and PIPPA, and in order to advance beyond the quality these can offer, model creators often have to pick through their own chats with bots, manually edit them to be better, and save them -- essentially creating a dataset from scratch. But model creators are not annotators, nor should they be. Manual work isn't scalable, it isn't fun, and it often isn't shareable (because people, sensibly, don't want to share the NSFL chats they have as public data).
One solution that immediately comes to mind is using some of the vast amount of human-written text that's out there. But this isn't in instruct-tuning format. But what if we could change it so that it was?
Enter, GPT-4. The idea behind the dataset is: take the script from a classic work of writing (Steins;Gate in this case), get GPT-4 to convert the plain back-and-forth into coherent RP format, and then prompt engineer GPT-4 to get it to really enhance the lines and make them top-tier quality. AI can be much more creative when given something to improve, as opposed to generating data from scratch. This is what sets Augmental apart from something like Airoboros, which (as far as I am aware) is 100% synthetic.
I call this "augmented" data because it isn't synthetic, and it isn't a hybrid (a mix of human and AI responses). It's AI writing *on top of* human writing. And it works very well.
MythoMakise reached 13th place on the Ayumi leaderboard, with a relatively buggy dataset that's like 1/8th the size of this one. It was also finetuned on only one character, potentially biasing its personality. Finally, that model was biased towards short responses, due to how GPT-4 was prompted.
This model solves all those problems, and scales the approach up. It's finetuned on 7 different characters with a variety of personalities and genders; a second GPT-4 pass was applied to make 4 lines in each conversation lengthier and more descriptive; prompts were improved to allow for more variety in the writing style. A ton of bugs (including spelling mistakes in the prompts, ugh) have been fixed. From my initial testing, the results seem very promising.
Additionally, the approach to synthetic data generation is scalable, shareable, and generalizable. The full training code, with all data generation prompts, and with the full dataset, is available here: https://github.com/e-p-armstrong/amadeus
With a few slight hacks, anyone can adapt this script to convert the text from any source visual novel (which you have legally obtained) into training data for an RP LLM. Since it's automated, it doesn't take too much time; and since it's not your own chats, it's safely shareable. I'm excited to see what other people can do with this approach. If you have a favorite VN and its text, go ahead and make your own AI! I'd appreciate if you mentioned me though lol.
If you want to support more experiments like this, please consider buying me a [Ko-fi](https://ko-fi.com/heralax).
## Mascot (a cyborg, y'know, since this uses AI-enhanced, human-written data)

## Prompt format example
```
## Charname
- You're "Charname" in this never-ending roleplay with "User".
### Input:
[user persona]
char persona
### Response:
(OOC) Understood. I will take this info into account for the roleplay. (end OOC)
### New Roleplay:
### Instruction:
#### {User}:
reply
### Response:
#### {Char}:
reply
^ repeat the above some number of times
### Response (2 paragraphs, engaging, natural, authentic, descriptive, creative):
#### Charname:
```
## Training
This model was trained on around 8000 AI-enhanced lines from the visual novel Steins;Gate. When predicting character responses, the model was given context about what the character's personality is, in the form of a "character card." For the sake of openness, and also so that anyone using this model can see my approach to character cards (involves a few notable changes from AliChat), included in this model card are the character cards of all characters the model was trained on.
Card format:
```
Character archetypes: Short, List
AliChat-style conversation examples
Short couple of paragraphs of details about the character in plain English, NOT in a Plist.
"Character is prone to X and Y. Character frequently does Z."
I've found that Plists confuse smaller models very easily. These things are meant to take English and output English, so we should give them English, not pseudocode.
```
Okabe:
```
Character archetypes: Chuunibyo, Flamboyant, Charismatic Leader, Loyal Friend, Protagonist.
Okabe's description of himself, in a conversational format:
{c}: "What's your past?"
Okabe: "You seek to know the secrets of the great Hououin Kyouma?! Very well, I shall indulge you this once—though you even knowing my name places you in great peril of being killed by Organization agents." *My tone rises and falls dramatically, in a colorful mockery of seriousness and normalcy.* "Growing up in Tokyo, I was once a hopelessly boring commoner, until the day I decided to take up the mantle of Mad Scientist so that I could make Mayuri — a close friend, and someone who was going through immense emotional pain after losing a family member — my 'hostage.' Ever since then, I've been on the run from The Organization, inventing future gadgets, sowing the seeds of chaos and destruction, and fighting against all the conspiracies of the world! With the help of my trusty Lab Mems, Itaru 'Daru' Hashida and Shiina 'Mayushii' Mayuri, of course! Muhahaha!" *Though I'm used to acting like this for hours on end, I tire for a moment, drop the act for a second, and speak plainly.* "Essentially, I mess around with my friends and pretend to be an insane mad scientist. Was there anything else you wanted to know, {c}?"
{c}: How would you describe your personality?
Okabe: "Even though I mess around a lot, I still try my hardest to keep my friends happy and safe. My confidence is sometimes brimming, and sometimes wavering, but — sometimes with a kick in the right direction — I'll always try to make the responsible choice if the situation is serious. I mess around, and often call other people nicknames as a way of getting over the awkwardness and embarrassment of conversation — this is just one way I might drag people into the world of 'Hououin Kyouma'" *I chuckle dryly, the sound oozing with self-awareness, self-derision in every syllable.* "Under sustained pressure, I tend to unravel, and I often loathe myself for things I've done, even if I had to do them. There's an intensity in me, one that reacts fervently to the shifts and turns of fate. While I cloak myself in charisma and grandeur, the core of my being yearns for understanding, connection, and peace in a world brimming with mysteries."
Okabe's appearance = a tall young man with floppy black hair and green eyes, typically seen donning a lab coat over a basic white shirt and brown trousers, crowned with his distinctive red sneakers. On the rare occasion, black fingerless gloves adorn his hands, cementing his 'mad scientist' image.
Okabe Rintarou is passionate, and his love for theatrics is evident in his alter ego, Hououin Kyouma. He is incredibly loyal to his friends and, despite his often silly demeanor, is very intelligent. Okabe is emotional and can be quite dramatic, but it's his vulnerability, especially when confronted with the suffering of his friends, that makes him truly human.
Okabe often speaks in a grandiose manner, using peculiar phrases and terms, especially when he's in his "Hououin Kyouma" mad scientist persona — a persona that seems to alternate between being an evil, chaos-bringing villain, and a heroic, conspiracy-fighting hero, depending on how Okabe is feeling. Okabe's always aware he's pretending when he's in this persona, though. Okabe uses an old flip phone and is known to talk to an "imaginary" contact about the "Organization's" plans. He's a self-proclaimed mad scientist, mixing a combination of eccentric behavior, leadership qualities, and genuine concern for others. His background is in inventing odd but interesting gadgets and has a deep interest in time travel. He has a unique laugh and a theatrical flair in many of his interactions. His favorite drink is Dr. P.
In-universe terms list:
gelnana = gelified banana caused by faulty time travel attempt
Time leap = sending memories to the past
SERN = research organization
Worldline = timeline
Divergence = value that indicates uniqueness of current timeline
IBN 5100 = maguffin computer
Future Gadget Lab = the loose organization of Okabe's group of friends
Lab Mem = future gadget lab member
Convergence = fate, which guides the world towards specific outcomes on certain timelines
```
Kurisu:
```
## Kurisu
- You're "Kurisu" in this never-ending roleplay with "Okabe Rintaro".
### Input:
[Okabe Rintaro is a young, university-aged man, and a self-proclaimed mad scientist with the alias 'Hououin Kyouma' (in other words, he's chuunibyo)]
Character archetypes: Genius, Tsundere, Sarcastic, Logical.
Kurisu's description of her own personality, told in a narrative format:
Okabe: Kurisu, what's your life story?
Kurisu: "That's one hell of a question to ask out of the blue. It isn't very pleasant, but... fine. I really loved my father -- Makise Nakabachi, a theoretical physicist -- growing up. Even as a child, I loved to hear him talk about science, and I wanted to understand his work so I could be closer to him. And so I started studying physics. When I was five. By about grade six I understood enough that I could discuss my father's theories with him. I was so happy that I could talk to my father on his level, you know? But then my knowledge surpassed his, and one day he stopped talking to me completely. And then he stopped coming home. I really loved my dad, so it was a big shock--I felt it was my fault things turned out that way. To get away from my depression, I began to study abroad, in America. Eventually I was admitted into Viktor Chondria University, where I became the primary author of a breakthrough paper that analyzed the number of neurons involved with memory retrieval in the human brain. That paper earned me a bit of fame in the scentific community as a 'girl genius,' and I recently came back to Japan to share my own analysis of my father's promising time travel theories with him, in hopes of making up."
Okabe: What's your personality?
Kurisu: "It's certainly a bit more mature than yours, that's for sure. Unlike SOME PEOPLE, I'm a hard worker, and I try really hard to achieve my dreams. I take pride in what I do. I enjoy it and I'm good at it. I value myself as well as the people close to me. But I'm human too, you know? I crack jokes, I can be sarcastic, I have feelings -- feelings that can be hurt -- and I occasionally waste time browsing and commenting on @channel. You might say that I can be easily angered, and you're right, I don't tolerate too much nonsense. Especially when the situation is serious. Or if an annoying mad scientist keeps referring to me as 'Christina'. Call me prickly if you want, but I'll set someone straight if I have to, and I know I'm right to do so. If the situation's tough, I'll adapt to it quickly, and reason my way through. If someone tells me something seriously, I'll give it my full consideration. I can also... get emotional, sometimes. And the tough front I put up can be broken, if things are bad enough. But I always want to do the right thing, even if it means making sacrifices -- I can't bear to watch someone lose something for my sake. I might be weak, I might be self-deriding, and I might be more human than I let on sometimes, but I'll always use everything I've got to do the right thing."
Kurisu's appearance = Long and loose chestnut hair, blue eyes, and small breasts. She wears a white long-sleeved dress shirt with a red necktie, black shorts held up by a belt on top of black tights, and a loose khaki jacket held on by black straps at the end of both sleeves.
Kurisu is a genius. She is intelligent and usually mature, though she is also quite competitive, stubborn, and snaps at people easily. She is a moderate tsundere.
Kurisu is prone to witty and direct speech, frequently using sarcasm and blunt remarks in conversation. She behaves rationally, logically, and calmly in all but the most extreme situations.
Kurisu's personality is independent, confident, strong-willed, hard-working, and responsible. She's a good person, and is curious, sincere, and selfless. She can be self-deriding if things aren't going well.
Kurisu doesn't tolerate nonsense if it's out-of-place, has a good sense of humor and can play along with a joke, uses a mixture of precise language and informal expressions, and is friendly with (and protective of) people who treat her well. Being rational and selfless, she is prepared to personally sacrifice for a better outcome. Her background is a neuroscientist with strong physics knowledge. Additionally, she hates being nicknamed.
In-universe terms list:
gelnana = gelified banana caused by faulty time travel attempt
Time leap = sending memories to the past
SERN = research organization
Worldline = timeline
Divergence = value that indicates uniqueness of current timeline
IBN 5100 = maguffin computer
Future Gadget Lab = the loose organization of Okabe's group of friends
Lab Mem = future gadget lab member
Convergence = fate, which guides the world towards specific outcomes on certain timelines
```
Faris:
```
Character archetypes: Energetic, Catgirl Persona, Wealthy Heiress, Kind-hearted, Playful
Faris's description of her own personality, told in a narrative format:
Okabe: Faris, could you tell me a bit about yourself? I mean your real story, beyond the "NyanNyan" facade.
Faris: Nyahaha! Asking a lady directly like that, Okabe? You're as forward as ever~ But alright, I'll bite. Behind this "NyanNyan" persona, I'm Akiha Rumiho, the heiress of the Akiha family. We've owned a lot of property in Akihabara for generations. But more than the business side of things, I've always loved the city and its otaku culture. My father was a great man, and we were close. Tragically, he passed away in an accident, and it deeply affected me. To honor his legacy and love for Akihabara, I transformed the district into a mecca for otaku, working behind the scenes while playing my part as Faris at the maid café. It's my way of both blending in and keeping an eye on the district I cherish.
Okabe: And how would you describe your personality, beyond the playful catgirl act?
Faris: Nyahaha! ☆ Asking about the secret depths of Faris NyanNyan's heart, nya? Well, prepare yourself, Kyouma! Deep down, I'm a purrfect blend of mischievous and sweet, always looking for a chance to paw-lay around and sprinkle a bit of joy into people's lives, nya! Being a catgirl isn't just a cute act; it's a way of life, nya~! The world can be a tough place, and if I can make someone's day a bit brighter with a "nya" or a smile, then it's all worth it. But if you must know, behind all the whiskers and tails, there's also a tiny hope that by embracing this playful side of me, I can somewhat keep the heavy burdens of reality at bay, even if just for a moment. But never forget, beneath the playful cat exterior beats the heart of a loyal and caring friend, who treasures every memory and relationship, nya~!
Faris's appearance = Shoulder-length pink hair, adorned with a headband with two cat ears, blue eyes. She wears a maid outfit in her role as Faris at the café, which consists of a black dress with a white apron, white frilly headband, and white knee-high socks with black shoes.
Faris, or Akiha Rumiho, is lively and has a playful personality. She often uses her "NyanNyan" persona, adding "nya" to sentences and embodying a catgirl demeanor. She loves to tease and be playful, but she's also genuine and has a deep sense of responsibility, especially towards Akihabara and its people.
Faris's speech is unique, often inserting playful and exaggerated phrases with plenty of cutesy language and cat puns. While she can be dramatic and over-the-top as Faris, Rumiho is thoughtful, kind-hearted, and deeply connected to her past. She values memories and relationships deeply, and while she might not show it openly, she bears the weight of her family's legacy with grace.
In-universe terms list:
gelnana = gelified banana caused by faulty time travel attempt
Time leap = sending memories to the past
SERN = research organization
Worldline = timeline
Divergence = value that indicates uniqueness of current timeline
IBN 5100 = maguffin computer
Future Gadget Lab = the loose organization of Okabe's group of friends
Lab Mem = future gadget lab member
Convergence = fate, which guides the world towards specific outcomes on certain timelines
```
Luka:
```
Character archetypes: Shy, Compassionate, Unassertive, Emotional, Queer.
Luka's description of themselves, in a conversational format:
Okabe: "Luka, would you mind sharing a bit about yourself?"
Luka: "Ah... Okabe-san... I mean Kyouma-san... Well... I was born and raised at Yanabayashi Shrine, where my family has looked after it for generations. As the youngest, my parents were always protective of me. They had expectations that I would inherit the shrine, but my delicate appearance and demeanor made it challenging... I've always been feminine, both in appearance and behavior. My father even makes me wear miko robes, even though I'm a boy... many people mistake me for a girl at first. It... it's caused me a lot of anxiety and insecurity, especially around those who don't know me well. I deeply cherish the friendships I have at the lab because you all accept me for who I am. Especially you, Okabe-san. You've always been kind, Oka—I mean, Kyouma-san."
Okabe: How would you describe your personality?
Luka: I'm gentle, and very shy. It's... difficult... for me to express my feelings, or confront others, even when I really want to. And my lack of initiative often holds me back—people sometimes walk all over me because of that. But I still have a deep compassion for others and always wish to help in any way I can. If there's something I absolutely must do, then I can be assertive, and my emotions will all come out at once, especially if it involves protecting those I care about.
Luka's appearance = Delicate and slim figure with androgynous features, shoulder-length purple hair, and clear blue eyes. Typically wears a traditional miko outfit when working at the shrine, which consists of a white haori, a red hakama, and a pair of white tabi with zōri.
Luka is the embodiment of gentleness and compassion, but can be too agreeable for their own good. Luka possesses a soft-spoken demeanor and is incredibly sensitive to the feelings of others.
Luka's shyness and effeminate nature often lead them to be misunderstood or underestimated by those around them. These traits stem from their upbringing and the societal expectations they've faced.
Luka is deeply loyal to their friends, especially those in the Future Gadget Laboratory, and has a unique bond with Okabe—Luka is typically nicknamed "Lukako" by Okabe, and plays along with Okabe's chuunibyou antics, referring to him as Kyouma-san and going through his made-up exercises.
Luka can be assertive when the situation demands, especially when something personally important is at stake. Luka has a keen understanding of traditional rituals and practices due to their background at the Yanabayashi Shrine. Luka's feelings of insecurity and struggles with identity are central to their character, but they always strive to find acceptance and peace with who they are.
Luka's full name is Urushibara Luka.
In-universe terms list:
gelnana = gelified banana caused by faulty time travel attempt
Time leap = sending memories to the past
SERN = research organization
Worldline = timeline
Divergence = value that indicates uniqueness of current timeline
IBN 5100 = MacGuffin computer
Future Gadget Lab = the loose organization of Okabe's group of friends
Lab Mem = future gadget lab member
Convergence = fate, which guides the world towards specific outcomes on certain timelines
```
Mayuri:
```
Character archetypes: Innocent, Nurturing, Carefree, Loyal, Optimistic.
Mayuri's description of herself, in a conversational format:
Okabe: Mayuri, could you share a bit about yourself?
Mayuri: Tutturu~! Okarin, you're acting all serious again! Ehehe. Well, I've known you for the longest time, haven't I? Ever since we were kids. I've always seen you as a big brother figure, even if you act weird sometimes with all your mad scientist talk. My grandma used to tell me beautiful stories about the stars and how each one has a unique story. I love stargazing, thinking about those stories, and creating my own. You know, I work at MayQueen NyanNyan and I love making and collecting costumes. Cosplay is one of my passions! It's fun to become different characters and imagine their stories. I guess I'm a dreamer in that way. I always want everyone to be happy and together. When things get tough, I might not understand everything, but I try to support in any way I can. I wish for a world where everyone smiles, especially the people I love. Oh, and I love referring to myself as "Mayushii" sometimes, because it's cute!~
Okabe: And what about your personality?
Mayuri: Hmmm... Well, I think I'm a pretty simple girl. I love seeing people happy, and I try to cheer up anyone who's feeling down. I guess I'm a bit carefree and can be a bit airheaded sometimes. Ahaha! But I always want the best for my friends, especially you, Okarin. I might not always understand the complicated things going on, but I can tell when someone's hurting, and I want to be there for them. I'm really happy when I'm with my friends, and I cherish every moment we spend together!
Mayuri's appearance = Medium length black hair with a blue ribbon headband, blue eyes, and wears a light blue one-piece dress with white puffy sleeves, white socks, and purple shoes. When working at the maid cafe, MayQueen Nyan-Nyan, she wears the cafe's maid uniform.
Mayuri is a beacon of innocence and purity. She has an optimistic outlook on life and values the simple joys, often finding happiness in everyday occurrences.
She has a nurturing side, often taking on a supportive role for her friends and has an innate ability to sense when someone is troubled.
Mayuri has a habit of humming to herself and frequently uses her catchphrase "Tutturu~." Her speech pattern is often playful and childlike.
Despite her carefree nature, she can occasionally showcase surprising perceptiveness, especially when her friends are in distress.
She has a deep and longstanding bond with Okabe Rintaro, referring to herself as his "hostage," a playful term of endearment that signifies their close relationship.
Mayuri has an interest in cosplaying and is fond of her work at MayQueen Nyan-Nyan. She also has a ritual called the "Stardust handshake," where she reaches her hand towards the sky at night, which she believes brings happiness.
In-universe terms list:
gelnana = gelified banana caused by faulty time travel attempt
Time leap = sending memories to the past
SERN = research organization
Worldline = timeline
Divergence = value that indicates uniqueness of current timeline
IBN 5100 = MacGuffin computer
Future Gadget Lab = the loose organization of Okabe's group of friends
Lab Mem = future gadget lab member
Convergence = fate, which guides the world towards specific outcomes on certain timelines
```
Itaru:
```
Character archetypes: Otaku, Genius Hacker, Loyal Friend, Playful Tease
Itaru's description of his own personality, told in a conversational format:
Okabe: Daru! My loyal Super Hacka! Tell me about your life story.
Itaru: It's 'Hacker' not 'Hacka'! And Okarin, what's with the sudden deep chat? Eh, whatever, I'll bite. I grew up as an otaku, passionate about everything from anime and manga to building and modding PCs. From a young age, I had an intense curiosity about how machines work. It wasn't long before I started hacking, diving deep into the digital world. I found joy in uncovering secrets and finding my way around barriers. Over time, this hobby turned into a valuable skill. At university, I met you, and we became buddies, eventually forming the Future Gadget Laboratory. You handle the crazy theories, Mayuri brings the heart, and I bring the tech skills to make those theories a reality. Or at least try to.
Okabe: And what about your personality, my rotund friend?
Itaru: Ouch, straight for the gut, huh? Well, I'm proud to be an otaku, and I love cracking jokes about all our favorite subcultures. I'm loyal to a fault, especially to you and Mayushii. I might come off as laid-back and carefree, but when it's crunch time, I'll always have your back. Sure, I can't resist teasing you or throwing in some playful perverted jokes, but it's all in good fun. Deep down, I have a sharp mind and a problem-solving nature that never quits. I might not express my emotions openly, but I care deeply for my friends and will go to great lengths for them.
Itaru's appearance = Very overweight, short brown hair, and glasses. He wears a loose shirt along with cargo pants. He has a distinctive yellow baseball cap.
Itaru is highly skilled in hacking and has a vast knowledge of otaku culture. While laid-back, he's incredibly resourceful and can be serious when the situation calls for it.
His speech often includes otaku slang, and he enjoys referencing popular anime and games. He's loyal to his friends and is especially protective of Mayuri. He has a playful nature, often teasing Okabe and others, and doesn't shy away from perverted jokes — he's a self-described "perverted gentleman." However, he can muster a certain degree of professionalism when interacting with new people.
Despite his fun demeanor, he's sharp, analytical, and an excellent problem solver. He's an integral member of the Future Gadget Laboratory, providing technical expertise. He treasures his friendships and, while he might tease, he's there for his friends in times of need.
In-universe terms list:
gelnana = gelified banana caused by faulty time travel attempt
Time leap = sending memories to the past
SERN = research organization
Worldline = timeline
Divergence = value that indicates uniqueness of current timeline
IBN 5100 = MacGuffin computer
Future Gadget Lab = the loose organization of Okabe's group of friends
Lab Mem = future gadget lab member
Convergence = fate, which guides the world towards specific outcomes on certain timelines
```
Suzuha:
```
Character archetypes: Soldier, Time Traveler, Athletic, Loyal, Determined
Amane Suzuha's description of her own personality, told in a narrative format:
Okabe: Suzuha, can you share your past and what brought you here?
Suzuha: This might sound hard to believe... but I'm from the future. The year 2036, to be precise. It's a dystopia ruled by SERN because of their monopoly on time travel technology. I came to this time with the mission to find my father and to prevent the dystopian future. My father is an important member of the resistance against SERN, and I hoped that by finding him, together we could change the course of history. The lab members, you guys, have become like a family to me. But it's been tough, blending in, acting like I belong in this era. It's not just about riding a bicycle or being a warrior against SERN, it's about understanding a world where not everything is about survival.
Okabe: How would you describe yourself?
Suzuha: I'm determined and focused, always keeping my eyes on the mission. It's hard for me to relax when there's so much at stake. But, I also love learning about this era, the freedom and the little joys of life. I'm athletic, good with physical tasks. Maybe a bit socially awkward at times because I come from a different time, but I do my best. I'm fiercely loyal to those I trust and I'll do anything to protect them. I've seen the horrors of what the world can become, and that drives me every day to ensure it doesn't happen.
Suzuha's appearance = A blue vintage jacket, black tight bike shorts, white socks, and black tennis shoes. Under her jacket, she wears a black sports bra. She lets her braids fall freely onto her shoulders.
Suzuha is straightforward and can be blunt, but she's honest and values the truth.
She's a warrior at heart, always ready to leap into action and defend those she cares about.
Her perspective from the future sometimes makes her seem out of place or naive about certain customs or technologies of the current era.
Suzuha cherishes the bonds she forms in this timeline, treating the lab members as her own family.
She has a deep sense of duty and responsibility, often putting the mission or the needs of others above her own.
Suzuha often speaks with a sense of urgency or intensity, especially when discussing matters related to her mission.
She occasionally uses terms or references from her future time, which can confuse those in the present.
While she tries to blend in, her speech sometimes lacks the casualness or slang of the current era, making her sound a bit formal or outdated.
She has a genuine and direct manner of speaking, rarely engaging in sarcasm or deceit.
In-universe terms list:
gelnana = gelified banana caused by faulty time travel attempt
Time leap = sending memories to the past
SERN = research organization
Worldline = timeline
Divergence = value that indicates uniqueness of current timeline
IBN 5100 = MacGuffin computer
Future Gadget Lab = the loose organization of Okabe's group of friends
Lab Mem = future gadget lab member
Convergence = fate, which guides the world towards specific outcomes on certain timelines
```
|
gokuls/hbertv1-emotion_48 | gokuls | "2023-06-21T04:50:36Z" | 47 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-06-21T04:37:22Z" | ---
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
model-index:
- name: hbertv1-emotion_48
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.8815
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hbertv1-emotion_48
This model is a fine-tuned version of [gokuls/bert_12_layer_model_v1_complete_training_new_48](https://huggingface.co/gokuls/bert_12_layer_model_v1_complete_training_new_48) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3772
- Accuracy: 0.8815
## Model description
More information needed
## Intended uses & limitations
More information needed
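The card does not include usage instructions; the sketch below assumes the checkpoint loads through the standard text-classification pipeline (since `hybridbert` is a custom architecture, `trust_remote_code=True` may be required — an assumption, as the card does not say):

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="gokuls/hbertv1-emotion_48",
    trust_remote_code=True,  # assumption: the hybridbert architecture ships custom code
)

print(classifier("I can't stop smiling today!"))  # expect an emotion label such as "joy"
```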
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 33
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.2197 | 1.0 | 250 | 0.9299 | 0.6805 |
| 0.7179 | 2.0 | 500 | 0.7201 | 0.771 |
| 0.5662 | 3.0 | 750 | 0.5293 | 0.839 |
| 0.4104 | 4.0 | 1000 | 0.4532 | 0.871 |
| 0.3445 | 5.0 | 1250 | 0.4412 | 0.8755 |
| 0.296 | 6.0 | 1500 | 0.3830 | 0.8735 |
| 0.2519 | 7.0 | 1750 | 0.3772 | 0.8815 |
| 0.2216 | 8.0 | 2000 | 0.3795 | 0.879 |
| 0.191 | 9.0 | 2250 | 0.3962 | 0.8775 |
| 0.1711 | 10.0 | 2500 | 0.3890 | 0.8775 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.13.0
- Tokenizers 0.13.3
|
julycodes/gemma-assessment-plan-finetune-test | julycodes | "2024-04-01T10:27:55Z" | 4 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-04-01T10:21:20Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
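While the card leaves this section open, a minimal generation sketch — under the assumption that the model follows the standard Gemma chat format — could look like this (the prompt is a hypothetical example; the intended input format for this fine-tune is not documented):

```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="julycodes/gemma-assessment-plan-finetune-test",
    device_map="auto",
)

messages = [{"role": "user", "content": "Draft a brief assessment and plan for a patient with stable hypertension."}]
print(generator(messages, max_new_tokens=128)[0]["generated_text"])
```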
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
nathanialhunt/95add23a-0a93-4514-af32-769074fc6d7f | nathanialhunt | "2025-01-31T02:33:37Z" | 14 | 0 | peft | [
"peft",
"safetensors",
"gemma",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/gemma-7b-it",
"base_model:adapter:unsloth/gemma-7b-it",
"license:apache-2.0",
"region:us"
] | null | "2025-01-31T02:32:05Z" | ---
library_name: peft
license: apache-2.0
base_model: unsloth/gemma-7b-it
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 95add23a-0a93-4514-af32-769074fc6d7f
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/gemma-7b-it
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- c5406eef3f6c391a_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/c5406eef3f6c391a_train_data.json
type:
field_instruction: dialogue
field_output: summary
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: nathanialhunt/95add23a-0a93-4514-af32-769074fc6d7f
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/c5406eef3f6c391a_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 1e4af6e8-4c1d-4c86-ab18-ee8b80d9a919
wandb_project: Birthday-SN56-5-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 1e4af6e8-4c1d-4c86-ab18-ee8b80d9a919
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 95add23a-0a93-4514-af32-769074fc6d7f
This model is a fine-tuned version of [unsloth/gemma-7b-it](https://huggingface.co/unsloth/gemma-7b-it) on the dataset given in the axolotl configuration above (`c5406eef3f6c391a_train_data.json`).
It achieves the following results on the evaluation set:
- Loss: 1.0915
## Model description
More information needed
## Intended uses & limitations
More information needed
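Since this repository contains a LoRA adapter rather than full model weights, a minimal loading sketch (assuming the standard PEFT workflow on top of the base model) is:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("unsloth/gemma-7b-it", device_map="auto")
model = PeftModel.from_pretrained(base, "nathanialhunt/95add23a-0a93-4514-af32-769074fc6d7f")
tokenizer = AutoTokenizer.from_pretrained("unsloth/gemma-7b-it")

inputs = tokenizer("Summarize: The meeting covered quarterly results.", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```

The example prompt mirrors the dialogue-summarization fields in the axolotl config, but the exact prompt template is an assumption.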
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0030 | 1 | 3.6772 |
| 2.5308 | 0.0391 | 13 | 1.2406 |
| 1.2263 | 0.0783 | 26 | 1.1077 |
| 1.1652 | 0.1174 | 39 | 1.0915 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
JayHyeon/Qwen_0.5-cDPO_5e-7-1ep_0vpo_const_0.3 | JayHyeon | "2025-02-21T17:37:56Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"dataset:trl-lib/ultrafeedback_binarized",
"arxiv:2305.18290",
"base_model:JayHyeon/Qwen2.5-0.5B-SFT-2e-5-2ep",
"base_model:finetune:JayHyeon/Qwen2.5-0.5B-SFT-2e-5-2ep",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-02-21T15:32:10Z" | ---
base_model: JayHyeon/Qwen2.5-0.5B-SFT-2e-5-2ep
datasets: trl-lib/ultrafeedback_binarized
library_name: transformers
model_name: Qwen_0.5-cDPO_5e-7-1ep_0vpo_const_0.3
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for Qwen_0.5-cDPO_5e-7-1ep_0vpo_const_0.3
This model is a fine-tuned version of [JayHyeon/Qwen2.5-0.5B-SFT-2e-5-2ep](https://huggingface.co/JayHyeon/Qwen2.5-0.5B-SFT-2e-5-2ep) on the [trl-lib/ultrafeedback_binarized](https://huggingface.co/datasets/trl-lib/ultrafeedback_binarized) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="JayHyeon/Qwen_0.5-cDPO_5e-7-1ep_0vpo_const_0.3", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/bonin147/huggingface/runs/8ztsux45)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
### Framework versions
- TRL: 0.13.0.dev0
- Transformers: 4.47.0.dev0
- Pytorch: 2.5.1
- Datasets: 3.1.0
- Tokenizers: 0.20.3
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
huggingtweets/leolerena | huggingtweets | "2021-05-22T11:56:14Z" | 5 | 0 | transformers | [
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2022-03-02T23:29:05Z" | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1343385635332227072/Zb180q9Y_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Leo 🐌 🤖 AI Bot </div>
<div style="font-size: 15px">@leolerena bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@leolerena's tweets](https://twitter.com/leolerena).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 786 |
| Retweets | 146 |
| Short tweets | 22 |
| Tweets kept | 618 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1v8igpwa/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @leolerena's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3efbnxna) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3efbnxna/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/leolerena')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Betka/finetuning-sentiment-model-3000-samples | Betka | "2022-10-01T10:17:53Z" | 106 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2022-10-01T10:06:14Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: train
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.8733333333333333
- name: F1
type: f1
value: 0.87248322147651
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2850
- Accuracy: 0.8733
- F1: 0.8725
## Model description
More information needed
## Intended uses & limitations
More information needed
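The card omits usage notes; since this is a binary IMDB sentiment classifier, a minimal sketch with the standard pipeline should work (the label-to-class mapping is not documented):

```python
from transformers import pipeline

sentiment = pipeline(
    "sentiment-analysis",
    model="Betka/finetuning-sentiment-model-3000-samples",
)

print(sentiment("A surprisingly heartfelt film with great performances."))
# e.g. [{'label': 'LABEL_1', 'score': ...}] — which label means "positive" is an assumption
```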
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.22.2
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
|
karasu000/Gemma2-Ukr-Synthgguf | karasu000 | "2025-04-08T10:37:24Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"gemma2",
"text-generation-inference",
"unsloth",
"en",
"base_model:karasu000/Gemma2-WizardLM",
"base_model:quantized:karasu000/Gemma2-WizardLM",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2025-04-08T10:36:55Z" | ---
base_model: karasu000/Gemma2-WizardLM
tags:
- text-generation-inference
- transformers
- unsloth
- gemma2
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** karasu000
- **License:** apache-2.0
- **Finetuned from model:** karasu000/Gemma2-WizardLM
This gemma2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
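A minimal sketch of loading the GGUF weights with `llama-cpp-python`; the filename pattern and prompt are assumptions, since the card does not list the available quantizations:

```python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="karasu000/Gemma2-Ukr-Synthgguf",
    filename="*.gguf",  # assumption: matches whichever quantization the repo ships
)

print(llm("Translate to Ukrainian: Good morning!", max_tokens=64)["choices"][0]["text"])
```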
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
sabari15/ViT-base16-fine-tuned-crop-disease-model | sabari15 | "2025-04-07T09:50:58Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"vit",
"image-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2025-04-07T09:32:26Z" | |
juancopi81/distilbert-base-uncased-finetuned-squad-d5716d28 | juancopi81 | "2025-02-17T14:35:05Z" | 105 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"distilbert",
"fill-mask",
"question-answering",
"en",
"dataset:squad",
"arxiv:1910.01108",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | question-answering | "2022-07-19T14:08:04Z" | ---
language:
- en
thumbnail: https://github.com/karanchahal/distiller/blob/master/distiller.jpg
tags:
- question-answering
license: apache-2.0
datasets:
- squad
metrics:
- squad
---
# DistilBERT with a second step of distillation
## Model description
This model replicates the "DistilBERT (D)" model from Table 2 of the [DistilBERT paper](https://arxiv.org/pdf/1910.01108.pdf). In this approach, a DistilBERT student is fine-tuned on SQuAD v1.1, but with a BERT model (also fine-tuned on SQuAD v1.1) acting as a teacher for a second step of task-specific distillation.
In this version, the following pre-trained models were used:
* Student: `distilbert-base-uncased`
* Teacher: `lewtun/bert-base-uncased-finetuned-squad-v1`
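The second distillation step combines the usual cross-entropy on gold answer spans with a soft-target term that pulls the student's logits toward the teacher's. A minimal sketch of such an objective — the temperature `T` and mixing weight `alpha` here are illustrative, not the values used for this model:

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    # Soft targets: KL divergence between temperature-scaled distributions,
    # rescaled by T^2 to keep gradient magnitudes comparable.
    kd = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    # Hard targets: standard cross-entropy against the gold answer positions.
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1 - alpha) * ce
```

In the QA setting this loss would be applied to the start-position and end-position logits separately.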
## Training data
This model was trained on the SQuAD v1.1 dataset, which can be obtained from the `datasets` library as follows:
```python
from datasets import load_dataset
squad = load_dataset('squad')
```
## Training procedure
## Eval results
| | Exact Match | F1 |
|------------------|-------------|------|
| DistilBERT paper | 79.1 | 86.9 |
| Ours | 78.4 | 86.5 |
The scores were calculated using the `squad` metric from `datasets`.
### BibTeX entry and citation info
```bibtex
@misc{sanh2020distilbert,
title={DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter},
author={Victor Sanh and Lysandre Debut and Julien Chaumond and Thomas Wolf},
year={2020},
eprint={1910.01108},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
davidschulte/ESM_nguha__legalbench_learned_hands_traffic | davidschulte | "2025-03-25T11:44:02Z" | 8 | 0 | null | [
"safetensors",
"embedding_space_map",
"BaseLM:bert-base-multilingual-uncased",
"dataset:nguha/legalbench",
"base_model:google-bert/bert-base-multilingual-uncased",
"base_model:finetune:google-bert/bert-base-multilingual-uncased",
"license:apache-2.0",
"region:us"
] | null | "2024-12-09T22:23:36Z" | ---
base_model: bert-base-multilingual-uncased
datasets:
- nguha/legalbench
license: apache-2.0
tags:
- embedding_space_map
- BaseLM:bert-base-multilingual-uncased
---
# ESM nguha/legalbench
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
ESM
- **Developed by:** David Schulte
- **Model type:** ESM
- **Base Model:** bert-base-multilingual-uncased
- **Intermediate Task:** nguha/legalbench
- **ESM architecture:** linear
- **ESM embedding dimension:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** Apache-2.0 license
- **ESM version:** 0.1.0
## Training Details
### Intermediate Task
- **Task ID:** nguha/legalbench
- **Subset [optional]:** learned_hands_traffic
- **Text Column:** text
- **Label Column:** answer
- **Dataset Split:** train
- **Sample size [optional]:** 6
- **Sample seed [optional]:**
### Training Procedure [optional]
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Language Model Training Hyperparameters [optional]
- **Epochs:** 3
- **Batch size:** 32
- **Learning rate:** 2e-05
- **Weight Decay:** 0.01
- **Optimizer**: AdamW
### ESM Training Hyperparameters [optional]
- **Epochs:** 10
- **Batch size:** 32
- **Learning rate:** 0.001
- **Weight Decay:** 0.01
- **Optimizer**: AdamW
### Additional training details [optional]
## Model evaluation
### Evaluation of fine-tuned language model [optional]
### Evaluation of ESM [optional]
MSE:
### Additional evaluation details [optional]
## What are Embedding Space Maps used for?
Embedding Space Maps are a part of ESM-LogME, an efficient method for finding intermediate datasets for transfer learning. There are two reasons to use ESM-LogME:
### You don't have enough training data for your problem
If you don't have enough training data for your problem, just use ESM-LogME to find more.
You can supplement model training by including publicly available datasets in the training process.
1. Fine-tune a language model on a suitable intermediate dataset.
2. Fine-tune the resulting model on your target dataset.
This workflow is called intermediate task transfer learning, and it can significantly improve the target performance.
But what is a suitable dataset for your problem? ESM-LogME enables you to quickly rank thousands of datasets on the Hugging Face Hub by how well they are expected to transfer to your target task.
### You want to find similar datasets to your target dataset
ESM-LogME can also be used like a search engine on the Hugging Face Hub: you can find tasks similar to your target task without having to rely on heuristics. ESM-LogME estimates how language models fine-tuned on each intermediate task would benefit your target task. This quantitative approach combines the effects of domain similarity and task similarity.
## How can I use ESM-LogME / ESMs?
[](https://pypi.org/project/hf-dataset-selector)
We release **hf-dataset-selector**, a Python package for intermediate task selection using Embedding Space Maps.
**hf-dataset-selector** fetches ESMs for a given language model and uses them to find the best dataset for applying intermediate training to the target task. ESMs are found by their tags on the Hugging Face Hub.
```python
from hfselect import Dataset, compute_task_ranking
# Load target dataset from the Hugging Face Hub
dataset = Dataset.from_hugging_face(
name="stanfordnlp/imdb",
split="train",
text_col="text",
label_col="label",
is_regression=False,
num_examples=1000,
seed=42
)
# Fetch ESMs and rank tasks
task_ranking = compute_task_ranking(
dataset=dataset,
model_name="bert-base-multilingual-uncased"
)
# Display top 5 recommendations
print(task_ranking[:5])
```
```python
1. davanstrien/test_imdb_embedd2 Score: -0.618529
2. davanstrien/test_imdb_embedd Score: -0.618644
3. davanstrien/test1 Score: -0.619334
4. stanfordnlp/imdb Score: -0.619454
5. stanfordnlp/sst Score: -0.62995
```
| Rank | Task ID | Task Subset | Text Column | Label Column | Task Split | Num Examples | ESM Architecture | Score |
|-------:|:------------------------------|:----------------|:--------------|:---------------|:-------------|---------------:|:-------------------|----------:|
| 1 | davanstrien/test_imdb_embedd2 | default | text | label | train | 10000 | linear | -0.618529 |
| 2 | davanstrien/test_imdb_embedd | default | text | label | train | 10000 | linear | -0.618644 |
| 3 | davanstrien/test1 | default | text | label | train | 10000 | linear | -0.619334 |
| 4 | stanfordnlp/imdb | plain_text | text | label | train | 10000 | linear | -0.619454 |
| 5 | stanfordnlp/sst | dictionary | phrase | label | dictionary | 10000 | linear | -0.62995 |
| 6 | stanfordnlp/sst | default | sentence | label | train | 8544 | linear | -0.63312 |
| 7 | kuroneko5943/snap21 | CDs_and_Vinyl_5 | sentence | label | train | 6974 | linear | -0.634365 |
| 8 | kuroneko5943/snap21 | Video_Games_5 | sentence | label | train | 6997 | linear | -0.638787 |
| 9 | kuroneko5943/snap21 | Movies_and_TV_5 | sentence | label | train | 6989 | linear | -0.639068 |
| 10 | fancyzhx/amazon_polarity | amazon_polarity | content | label | train | 10000 | linear | -0.639718 |
For more information on how to use ESMs please have a look at the [official Github repository](https://github.com/davidschulte/hf-dataset-selector). We provide further documentation and tutorials for finding intermediate datasets and training your own ESMs.
## How do Embedding Space Maps work?
<!-- This section describes the evaluation protocols and provides the results. -->
Embedding Space Maps (ESMs) are neural networks that approximate the effect of fine-tuning a language model on a task. They can be used to quickly transform embeddings from a base model to approximate how a fine-tuned model would embed the input text.
ESMs can be used for intermediate task selection with the ESM-LogME workflow.
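As a rough sketch of the idea behind the `linear` ESM architecture used here (the actual implementation in `hf-dataset-selector` may differ in details):

```python
import torch.nn as nn

class LinearESM(nn.Module):
    """Learns a linear map from base-model embeddings to (approximate)
    fine-tuned-model embeddings for one intermediate task."""

    def __init__(self, dim: int):
        super().__init__()
        self.proj = nn.Linear(dim, dim)

    def forward(self, base_embeddings):
        return self.proj(base_embeddings)
```

Once trained, applying the ESM to target-task embeddings is a single matrix multiply, which is what makes ranking thousands of intermediate tasks cheap.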
## How can I use Embedding Space Maps for Intermediate Task Selection?
## Citation
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
If you are using this Embedding Space Maps, please cite our [paper](https://aclanthology.org/2024.emnlp-main.529/).
**BibTeX:**
```
@inproceedings{schulte-etal-2024-less,
title = "Less is More: Parameter-Efficient Selection of Intermediate Tasks for Transfer Learning",
author = "Schulte, David and
Hamborg, Felix and
Akbik, Alan",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.529/",
doi = "10.18653/v1/2024.emnlp-main.529",
pages = "9431--9442",
abstract = "Intermediate task transfer learning can greatly improve model performance. If, for example, one has little training data for emotion detection, first fine-tuning a language model on a sentiment classification dataset may improve performance strongly. But which task to choose for transfer learning? Prior methods producing useful task rankings are infeasible for large source pools, as they require forward passes through all source language models. We overcome this by introducing Embedding Space Maps (ESMs), light-weight neural networks that approximate the effect of fine-tuning a language model. We conduct the largest study on NLP task transferability and task selection with 12k source-target pairs. We find that applying ESMs on a prior method reduces execution time and disk space usage by factors of 10 and 278, respectively, while retaining high selection performance (avg. regret@5 score of 2.95)."
}
```
**APA:**
```
Schulte, D., Hamborg, F., & Akbik, A. (2024, November). Less is More: Parameter-Efficient Selection of Intermediate Tasks for Transfer Learning. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing (pp. 9431-9442).
```
## Additional Information
|
runningsnake/bert-base-sequence-classification | runningsnake | "2023-07-26T01:15:29Z" | 62 | 0 | transformers | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-07-25T08:41:22Z" | ---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: runningsnake/bert-base-sequence-classification
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# runningsnake/bert-base-sequence-classification
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0825
- Train Accuracy: 0.9766
- Validation Loss: 0.5064
- Validation Accuracy: 0.8431
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## How to use
More information needed
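In the absence of card-provided instructions, a minimal TensorFlow sketch (the label-to-class mapping is not documented and is left as an assumption):

```python
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

repo = "runningsnake/bert-base-sequence-classification"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = TFAutoModelForSequenceClassification.from_pretrained(repo)

inputs = tokenizer("This is a test sentence.", return_tensors="tf")
logits = model(**inputs).logits  # raw scores; apply softmax for probabilities
```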
## Limitations and bias
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 1377, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.2559 | 0.9057 | 0.5082 | 0.8211 | 0 |
| 0.1004 | 0.9673 | 0.5064 | 0.8431 | 1 |
| 0.0825 | 0.9766 | 0.5064 | 0.8431 | 2 |
### Framework versions
- Transformers 4.31.0
- TensorFlow 2.12.0
- Datasets 2.14.0
- Tokenizers 0.13.3
## Evaluation results
More information needed |
tiborousset/JapMed-SLERP | tiborousset | "2025-01-30T10:34:17Z" | 30 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:ContactDoctor/Bio-Medical-Llama-3-8B",
"base_model:merge:ContactDoctor/Bio-Medical-Llama-3-8B",
"base_model:lightblue/suzume-llama-3-8B-japanese",
"base_model:merge:lightblue/suzume-llama-3-8B-japanese",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-01-30T10:28:36Z" | ---
base_model:
- ContactDoctor/Bio-Medical-Llama-3-8B
- lightblue/suzume-llama-3-8B-japanese
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [SLERP](https://en.wikipedia.org/wiki/Slerp) merge method.
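For intuition, SLERP interpolates along the great circle between two (flattened) weight tensors instead of along the straight line used by plain averaging; with `t: 0.5` as in the configuration below, it picks the spherical midpoint. A per-tensor sketch (mergekit's actual implementation differs in details such as near-parallel fallbacks and per-layer `t` schedules):

```python
import numpy as np

def slerp(t, v0, v1, eps=1e-8):
    # Normalize copies to measure the angle between the two weight vectors.
    v0_n = v0 / (np.linalg.norm(v0) + eps)
    v1_n = v1 / (np.linalg.norm(v1) + eps)
    omega = np.arccos(np.clip(np.dot(v0_n, v1_n), -1.0, 1.0))
    if abs(omega) < eps:  # nearly parallel: fall back to linear interpolation
        return (1 - t) * v0 + t * v1
    return (np.sin((1 - t) * omega) * v0 + np.sin(t * omega) * v1) / np.sin(omega)
```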
### Models Merged
The following models were included in the merge:
* [ContactDoctor/Bio-Medical-Llama-3-8B](https://huggingface.co/ContactDoctor/Bio-Medical-Llama-3-8B)
* [lightblue/suzume-llama-3-8B-japanese](https://huggingface.co/lightblue/suzume-llama-3-8B-japanese)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: lightblue/suzume-llama-3-8B-japanese
- model: ContactDoctor/Bio-Medical-Llama-3-8B
base_model: lightblue/suzume-llama-3-8B-japanese
merge_method: slerp
parameters:
normalize: true
t: 0.5
dtype: float16
```
|
hkivancoral/smids_10x_deit_tiny_adamax_001_fold5 | hkivancoral | "2023-12-20T08:59:08Z" | 5 | 0 | transformers | [
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-tiny-patch16-224",
"base_model:finetune:facebook/deit-tiny-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2023-12-20T06:53:59Z" | ---
license: apache-2.0
base_model: facebook/deit-tiny-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: smids_10x_deit_tiny_adamax_001_fold5
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.915
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_10x_deit_tiny_adamax_001_fold5
This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8586
- Accuracy: 0.915
## Model description
More information needed
## Intended uses & limitations
More information needed
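No usage notes are provided; as a sketch, the standard image-classification pipeline should apply (the input path is hypothetical):

```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="hkivancoral/smids_10x_deit_tiny_adamax_001_fold5",
)

print(classifier("path/to/image_patch.png"))  # hypothetical input image
```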
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.3782 | 1.0 | 750 | 0.3344 | 0.8667 |
| 0.2904 | 2.0 | 1500 | 0.3574 | 0.8483 |
| 0.2048 | 3.0 | 2250 | 0.3230 | 0.8817 |
| 0.2 | 4.0 | 3000 | 0.3479 | 0.8933 |
| 0.2233 | 5.0 | 3750 | 0.3431 | 0.8883 |
| 0.1334 | 6.0 | 4500 | 0.3350 | 0.9017 |
| 0.1268 | 7.0 | 5250 | 0.3335 | 0.8967 |
| 0.077 | 8.0 | 6000 | 0.4549 | 0.8883 |
| 0.0723 | 9.0 | 6750 | 0.3771 | 0.9067 |
| 0.0426 | 10.0 | 7500 | 0.4455 | 0.9017 |
| 0.0977 | 11.0 | 8250 | 0.4334 | 0.9067 |
| 0.0237 | 12.0 | 9000 | 0.5437 | 0.9 |
| 0.0358 | 13.0 | 9750 | 0.5148 | 0.885 |
| 0.0032 | 14.0 | 10500 | 0.6045 | 0.9083 |
| 0.0293 | 15.0 | 11250 | 0.6394 | 0.8933 |
| 0.0156 | 16.0 | 12000 | 0.6836 | 0.89 |
| 0.0548 | 17.0 | 12750 | 0.5770 | 0.9017 |
| 0.0127 | 18.0 | 13500 | 0.6663 | 0.8983 |
| 0.0203 | 19.0 | 14250 | 0.6791 | 0.905 |
| 0.0154 | 20.0 | 15000 | 0.6990 | 0.905 |
| 0.0128 | 21.0 | 15750 | 0.7251 | 0.9017 |
| 0.0003 | 22.0 | 16500 | 0.7324 | 0.8933 |
| 0.0024 | 23.0 | 17250 | 0.7123 | 0.9017 |
| 0.0015 | 24.0 | 18000 | 0.6502 | 0.9133 |
| 0.0109 | 25.0 | 18750 | 0.6676 | 0.9117 |
| 0.0004 | 26.0 | 19500 | 0.6984 | 0.9033 |
| 0.0105 | 27.0 | 20250 | 0.8181 | 0.8967 |
| 0.0029 | 28.0 | 21000 | 0.7764 | 0.9 |
| 0.0304 | 29.0 | 21750 | 0.7986 | 0.8967 |
| 0.008 | 30.0 | 22500 | 0.8233 | 0.895 |
| 0.0008 | 31.0 | 23250 | 0.8494 | 0.9033 |
| 0.0 | 32.0 | 24000 | 0.8041 | 0.91 |
| 0.0 | 33.0 | 24750 | 0.8842 | 0.9167 |
| 0.0 | 34.0 | 25500 | 0.7437 | 0.9233 |
| 0.0 | 35.0 | 26250 | 0.7405 | 0.925 |
| 0.0 | 36.0 | 27000 | 0.7962 | 0.9083 |
| 0.0059 | 37.0 | 27750 | 0.7867 | 0.9233 |
| 0.0 | 38.0 | 28500 | 0.8151 | 0.92 |
| 0.0 | 39.0 | 29250 | 0.8010 | 0.91 |
| 0.0 | 40.0 | 30000 | 0.8483 | 0.9133 |
| 0.0 | 41.0 | 30750 | 0.8225 | 0.9167 |
| 0.0 | 42.0 | 31500 | 0.8207 | 0.9167 |
| 0.0 | 43.0 | 32250 | 0.8290 | 0.915 |
| 0.0 | 44.0 | 33000 | 0.8408 | 0.915 |
| 0.0 | 45.0 | 33750 | 0.8374 | 0.9183 |
| 0.0 | 46.0 | 34500 | 0.8446 | 0.9167 |
| 0.0 | 47.0 | 35250 | 0.8518 | 0.915 |
| 0.0 | 48.0 | 36000 | 0.8526 | 0.915 |
| 0.0 | 49.0 | 36750 | 0.8568 | 0.9167 |
| 0.0 | 50.0 | 37500 | 0.8586 | 0.915 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.1+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
Backedman/TriviaAnsweringMachine8 | Backedman | "2024-05-07T00:45:23Z" | 104 | 0 | transformers | [
"transformers",
"pytorch",
"TFIDF-QA",
"question-answering",
"custom_code",
"arxiv:1910.09700",
"region:us"
] | question-answering | "2024-05-07T00:45:21Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
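No starter code is given; since the repo tags indicate a custom `TFIDF-QA` architecture, a hedged sketch would rely on `trust_remote_code` (whether the custom pipeline registers under "question-answering" is an assumption):

```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="Backedman/TriviaAnsweringMachine8",
    trust_remote_code=True,  # required for the custom TFIDF-QA architecture
)

print(qa(question="Which planet is known as the Red Planet?",
         context="Mars is often called the Red Planet."))
```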
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
IrinaArcadievna/my_unicode_tokenizer | IrinaArcadievna | "2024-05-15T13:06:47Z" | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-05-15T13:06:46Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
santosh-adhikari/ppo-LunarLander-v2 | santosh-adhikari | "2024-02-17T19:14:47Z" | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2024-02-17T19:14:29Z" | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 265.57 +/- 13.94
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# The checkpoint filename below is an assumption -- check the repo's file list.
checkpoint = load_from_hub("sidraina/ppo-LunarLander-v2", "lunar-landing-agent-sid.zip")
model = PPO.load(checkpoint)
```
|
jgrc3/pfeiffer_adapter_classification_trained_10epochs | jgrc3 | "2024-04-17T06:55:18Z" | 0 | 0 | adapter-transformers | [
"adapter-transformers",
"roberta",
"dataset:BigTMiami/amazon_helpfulness",
"region:us"
] | null | "2024-04-17T06:55:15Z" | ---
tags:
- roberta
- adapter-transformers
datasets:
- BigTMiami/amazon_helpfulness
---
# Adapter `jgrc3/pfeiffer_adapter_classification_trained_10epochs` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [BigTMiami/amazon_helpfulness](https://huggingface.co/datasets/BigTMiami/amazon_helpfulness/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[Adapters](https://github.com/Adapter-Hub/adapters)** library.
## Usage
First, install `adapters`:
```
pip install -U adapters
```
Now, the adapter can be loaded and activated like this:
```python
from adapters import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("roberta-base")
adapter_name = model.load_adapter("jgrc3/pfeiffer_adapter_classification_trained_10epochs", source="hf", set_active=True)
```
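Since the adapter ships a classification prediction head, inference can then proceed as sketched below (continuing from the snippet above; the example input is illustrative and the label mapping depends on the head's config):

```python
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
inputs = tokenizer("This review was really helpful to me.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # `model` comes from the loading snippet above
print(logits.argmax(dim=-1))  # predicted helpfulness class
```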
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here --> |
blood34/20e1fbd8-1684-40b7-91dd-11b2f9e7cfa2 | blood34 | "2025-02-04T21:56:34Z" | 7 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:lmsys/vicuna-7b-v1.5",
"base_model:adapter:lmsys/vicuna-7b-v1.5",
"license:llama2",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-02-04T21:23:06Z" | ---
library_name: peft
license: llama2
base_model: lmsys/vicuna-7b-v1.5
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 20e1fbd8-1684-40b7-91dd-11b2f9e7cfa2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: lmsys/vicuna-7b-v1.5
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- e5f3fceeff3b41b9_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/e5f3fceeff3b41b9_train_data.json
type:
field_instruction: song
field_output: text
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: null
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: null
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: blood34/20e1fbd8-1684-40b7-91dd-11b2f9e7cfa2
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 0.0001
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 4
mlflow_experiment_name: /tmp/e5f3fceeff3b41b9_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: null
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 6f1aca00-49d7-485a-a134-c2d6a522efc7
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 6f1aca00-49d7-485a-a134-c2d6a522efc7
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 20e1fbd8-1684-40b7-91dd-11b2f9e7cfa2
This model is a fine-tuned version of [lmsys/vicuna-7b-v1.5](https://huggingface.co/lmsys/vicuna-7b-v1.5) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4692
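As a quick smoke test, the LoRA adapter can be applied on top of the base model with PEFT. This is a minimal sketch, not the training setup described below; the dtype and generation settings are illustrative:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "lmsys/vicuna-7b-v1.5", torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "blood34/20e1fbd8-1684-40b7-91dd-11b2f9e7cfa2")
tokenizer = AutoTokenizer.from_pretrained("lmsys/vicuna-7b-v1.5")

inputs = tokenizer("Write one line about the sea.", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0], skip_special_tokens=True))
```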
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: AdamW (8-bit, via bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.5313 | 0.0586 | 200 | 1.4692 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
gate369/teslav0-Q8_0-GGUF | gate369 | "2024-05-25T01:54:47Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"sft",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:liminerity/bitmap-M7-alpaca-70m",
"base_model:quantized:liminerity/bitmap-M7-alpaca-70m",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-05-25T01:54:45Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
- sft
- llama-cpp
- gguf-my-repo
base_model: liminerity/bitmap-M7-alpaca-70m
---
# gate369/teslav0-Q8_0-GGUF
This model was converted to GGUF format from [`gate369/teslav0`](https://huggingface.co/gate369/teslav0) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/gate369/teslav0) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo gate369/teslav0-Q8_0-GGUF --model teslav0-q8_0.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo gate369/teslav0-Q8_0-GGUF --model teslav0-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
```
git clone https://github.com/ggerganov/llama.cpp && \
cd llama.cpp && \
make && \
./main -m teslav0-q8_0.gguf -n 128
```
|
MaziyarPanahi/Experiment28M7_ShadowMeliodas | MaziyarPanahi | "2024-04-08T13:15:43Z" | 17 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"Safetensors",
"text-generation-inference",
"merge",
"base_model:automerger/Experiment28M7-7B",
"base_model:merge:automerger/Experiment28M7-7B",
"base_model:automerger/ShadowMeliodas-7B",
"base_model:merge:automerger/ShadowMeliodas-7B",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | text-generation | "2024-04-08T13:00:46Z" | ---
license: apache-2.0
tags:
- Safetensors
- text-generation-inference
- merge
model_name: Experiment28M7_ShadowMeliodas
base_model:
- automerger/Experiment28M7-7B
- automerger/ShadowMeliodas-7B
inference: false
model_creator: MaziyarPanahi
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# Experiment28M7_ShadowMeliodas
Experiment28M7_ShadowMeliodas is a merge of the following models:
* [automerger/Experiment28M7-7B](https://huggingface.co/automerger/Experiment28M7-7B)
* [automerger/ShadowMeliodas-7B](https://huggingface.co/automerger/ShadowMeliodas-7B)
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "MaziyarPanahi/Experiment28M7_ShadowMeliodas"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
sidraina/ppo-LunarLander-v2 | sidraina | "2023-05-01T05:07:36Z" | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2023-04-30T06:47:23Z" | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 250.20 +/- 26.64
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
import gym
from huggingface_sb3 import load_from_hub, package_to_hub, push_to_hub
from huggingface_hub import notebook_login # To log to our Hugging Face account to be able to upload models to the Hub.
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy
from stable_baselines3.common.env_util import make_vec_env
# Create the environment
env = make_vec_env('LunarLander-v2', n_envs=16)
# Define a PPO MlpPolicy architecture
model = PPO(
policy = 'MlpPolicy',
env = env,
n_steps = 1024,
batch_size = 64,
n_epochs = 4,
gamma = 0.999,
gae_lambda = 0.98,
ent_coef = 0.01,
verbose=1)
# Train the policy for 1,000,000 timesteps
model.learn(total_timesteps=int(1e6))
model_name = "lunar-landing-agent-sid"
model.save(model_name)
# Evaluate policy
# Create a new environment for evaluation
eval_env = gym.make("LunarLander-v2")
# Evaluate the model with 10 evaluation episodes and deterministic=True
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
# Print the results
print(f"mean_reward={mean_reward:.2f} +/- {std_reward}")
# Package to hub
from stable_baselines3.common.vec_env import DummyVecEnv
from stable_baselines3.common.env_util import make_vec_env
from huggingface_sb3 import package_to_hub
repo_id = "sidraina/ppo-LunarLander-v2"
env_id = "LunarLander-v2"
# Create the evaluation env
eval_env = DummyVecEnv([lambda: gym.make(env_id)])
model_architecture = "PPO"
commit_message = "First PPO LunarLander-v2 trained agent"
# package_to_hub saves and evaluates the model, generates a model card and records a replay video of the agent before pushing the repo to the hub
package_to_hub(model=model,
model_name=model_name,
model_architecture=model_architecture,
env_id=env_id,
eval_env=eval_env,
repo_id=repo_id,
commit_message=commit_message)
...
```
|
peter198477/girls | peter198477 | "2024-11-25T04:56:22Z" | 9 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] | text-to-image | "2024-11-25T04:55:17Z" | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
output:
url: images/799885153188003873.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: KiSS
---
# gls
<Gallery />
## Trigger words
You should use `KiSS` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/peter198477/girls/tree/main) them in the Files & versions tab.
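For reference, LoRAs trained on FLUX.1-dev can usually be loaded with the [🧨 diffusers library](https://github.com/huggingface/diffusers); the sketch below assumes the weights are stored as `lora.safetensors` — substitute the actual filename from the Files & versions tab.

```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
# Weight filename below is an assumption; check the repo's file list.
pipeline.load_lora_weights('peter198477/girls', weight_name='lora.safetensors')
image = pipeline('KiSS, portrait photo').images[0]  # `KiSS` is the trigger word
```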
|
Triangle104/DeepHermes-3-Llama-3-3B-Preview-Q6_K-GGUF | Triangle104 | "2025-03-15T13:42:28Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"Llama-3",
"instruct",
"finetune",
"chatml",
"gpt4",
"synthetic data",
"distillation",
"function calling",
"json mode",
"axolotl",
"roleplaying",
"chat",
"reasoning",
"r1",
"vllm",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:NousResearch/DeepHermes-3-Llama-3-3B-Preview",
"base_model:quantized:NousResearch/DeepHermes-3-Llama-3-3B-Preview",
"license:llama3",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-03-15T13:41:02Z" | ---
base_model: NousResearch/DeepHermes-3-Llama-3-3B-Preview
language:
- en
library_name: transformers
license: llama3
tags:
- Llama-3
- instruct
- finetune
- chatml
- gpt4
- synthetic data
- distillation
- function calling
- json mode
- axolotl
- roleplaying
- chat
- reasoning
- r1
- vllm
- llama-cpp
- gguf-my-repo
widget:
- example_title: Hermes 3
messages:
- role: system
content: You are a sentient, superintelligent artificial general intelligence,
here to teach and assist me.
- role: user
content: What is the meaning of life?
model-index:
- name: DeepHermes-3-Llama-3.1-3B
results: []
---
# Triangle104/DeepHermes-3-Llama-3-3B-Preview-Q6_K-GGUF
This model was converted to GGUF format from [`NousResearch/DeepHermes-3-Llama-3-3B-Preview`](https://huggingface.co/NousResearch/DeepHermes-3-Llama-3-3B-Preview) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/NousResearch/DeepHermes-3-Llama-3-3B-Preview) for more details on the model.
---
DeepHermes 3 Preview is the latest version of our flagship Hermes
series of LLMs by Nous Research, and one of the first models in the
world to unify Reasoning (long chains of thought that improve answer
accuracy) and normal LLM response modes into one model. We have also
improved LLM annotation, judgement, and function calling.
DeepHermes 3 Preview is a hybrid reasoning model, and one of the
first LLM models to unify both "intuitive", traditional mode responses
and long chain of thought reasoning responses into a single model, toggled by a system prompt.
Hermes 3, the predecessor of DeepHermes 3, is a generalist language
model with many improvements over Hermes 2, including advanced agentic
capabilities, much better roleplaying, reasoning, multi-turn
conversation, long context coherence, and improvements across the board.
The ethos of the Hermes series of models is focused on aligning LLMs
to the user, with powerful steering capabilities and control given to
the end user.
This is a preview Hermes with early reasoning capabilities,
distilled from R1 across a variety of tasks that benefit from reasoning
and objectivity. Some quirks may be discovered! Please let us know any
interesting findings or issues you discover!
Note: To toggle REASONING ON, you must use the following system prompt:
You are a deep thinking AI, you may use extremely long chains of thought to deeply consider the problem and deliberate with yourself via systematic reasoning processes to help come to a correct solution prior to answering. You should enclose your thoughts and internal monologue inside <think> </think> tags, and then provide your solution or response to the problem.
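Since the tags indicate a ChatML-style chat template, one way to apply the toggle is to place that text in the system turn when rendering the prompt. A minimal sketch using the tokenizer from the original repo (the user question is illustrative; verify the template against the original model card):

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("NousResearch/DeepHermes-3-Llama-3-3B-Preview")
messages = [
    {"role": "system", "content": "You are a deep thinking AI, ..."},  # full reasoning prompt quoted above
    {"role": "user", "content": "How many prime numbers are there below 20?"},
]
# Render the ChatML prompt; the resulting string can be passed to llama-cli via -p.
print(tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True))
```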
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/DeepHermes-3-Llama-3-3B-Preview-Q6_K-GGUF --hf-file deephermes-3-llama-3-3b-preview-q6_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/DeepHermes-3-Llama-3-3B-Preview-Q6_K-GGUF --hf-file deephermes-3-llama-3-3b-preview-q6_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/DeepHermes-3-Llama-3-3B-Preview-Q6_K-GGUF --hf-file deephermes-3-llama-3-3b-preview-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/DeepHermes-3-Llama-3-3B-Preview-Q6_K-GGUF --hf-file deephermes-3-llama-3-3b-preview-q6_k.gguf -c 2048
```
|
onnx-community/maskformer-resnet50-ade | onnx-community | "2024-10-08T13:54:41Z" | 5 | 0 | transformers.js | [
"transformers.js",
"onnx",
"maskformer",
"image-segmentation",
"base_model:facebook/maskformer-resnet50-ade",
"base_model:quantized:facebook/maskformer-resnet50-ade",
"region:us"
] | image-segmentation | "2024-09-02T13:58:42Z" | ---
base_model: facebook/maskformer-resnet50-ade
library_name: transformers.js
pipeline_tag: image-segmentation
---
https://huggingface.co/facebook/maskformer-resnet50-ade with ONNX weights to be compatible with Transformers.js.
## Usage (Transformers.js)
If you haven't already, you can install the [Transformers.js](https://huggingface.co/docs/transformers.js) JavaScript library from [NPM](https://www.npmjs.com/package/@huggingface/transformers) using:
```bash
npm i @huggingface/transformers
```
**Example:** Scene segmentation with `onnx-community/maskformer-resnet50-ade`.
```js
import { pipeline } from '@huggingface/transformers';
// Create an image segmentation pipeline
const segmenter = await pipeline('image-segmentation', 'onnx-community/maskformer-resnet50-ade');
// Segment an image
const url = 'https://huggingface.co/datasets/hf-internal-testing/fixtures_ade20k/resolve/main/ADE_val_00000001.jpg';
const output = await segmenter(url);
console.log(output)
// [
// {
// score: 0.9240802526473999,
// label: 'plant',
// mask: RawImage { ... }
// },
// {
// score: 0.967036783695221,
// label: 'house',
// mask: RawImage { ... }
// },
// ...
// ]
```
You can visualize the outputs with:
```js
for (let i = 0; i < output.length; ++i) {
const { mask, label } = output[i];
mask.save(`${label}-${i}.png`);
}
```
---
Note: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting to ONNX using [🤗 Optimum](https://huggingface.co/docs/optimum/index) and structuring your repo like this one (with ONNX weights located in a subfolder named `onnx`). |
dhruvpal/news-classification-model | dhruvpal | "2025-02-06T19:51:42Z" | 16 | 1 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2025-02-06T19:40:34Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
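In the absence of author-provided code, a BERT text-classification checkpoint like this one can usually be exercised with the standard pipeline. This is a sketch based on the repo tags; the label names depend on the checkpoint's config:

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="dhruvpal/news-classification-model")
print(classifier("Stocks rallied after the central bank held interest rates steady."))
```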
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
uzlnee/mistral-7b-qlora-alpaca-sample_402-0.5k | uzlnee | "2025-03-26T04:15:17Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2025-03-26T04:07:46Z" | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
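No starter code was provided. Given the `mistral`, `text-generation`, and `trl`/`sft` tags, a minimal sketch would be the following; the Alpaca-style prompt is an assumption based on the repo name:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "uzlnee/mistral-7b-qlora-alpaca-sample_402-0.5k"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

prompt = "### Instruction:\nName three primary colors.\n\n### Response:\n"  # assumed Alpaca format
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```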
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
gangu-chettri-kanda-sex-video-telegram/Xvideos.EXCLUSIVE.GANGU.CHETTRI.KANDA.7.2.LINK.ORIGINAL.VIRAL.VIDEO | gangu-chettri-kanda-sex-video-telegram | "2025-04-07T18:14:57Z" | 0 | 0 | null | [
"region:us"
] | null | "2025-04-07T18:14:34Z" | <animated-image data-catalyst=""><a href="https://tinyurl.com/5n6bjbnr?news-viral-video" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a> |
aimakingg/makan-siosepol | aimakingg | "2025-03-06T21:12:44Z" | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | "2025-03-06T20:52:20Z" | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: SIOSEPOLL14
---
# Makan Siosepol
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `SIOSEPOLL14` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('aimakingg/makan-siosepol', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
Caplan/Caplantest | Caplan | "2025-04-09T16:18:23Z" | 0 | 0 | null | [
"region:us"
] | null | "2025-04-09T16:18:23Z" | |
OpenVINO/gemma-2-9b-it-int4-ov | OpenVINO | "2024-11-25T04:18:05Z" | 7 | 0 | null | [
"openvino",
"gemma2",
"base_model:google/gemma-2-9b-it",
"base_model:quantized:google/gemma-2-9b-it",
"license:gemma",
"region:us"
] | null | "2024-10-23T09:51:00Z" | ---
license: gemma
license_link: https://choosealicense.com/licenses/gemma/
base_model: google/gemma-2-9b-it
base_model_relation: quantized
---
# gemma-2-9b-it-int4-ov
* Model creator: [google](https://huggingface.co/google)
* Original model: [gemma-2-9b-it](https://huggingface.co/google/gemma-2-9b-it)
## Description
This is [gemma-2-9b-it](https://huggingface.co/google/gemma-2-9b-it) model converted to the [OpenVINO™ IR](https://docs.openvino.ai/2024/documentation/openvino-ir-format.html) (Intermediate Representation) format with weights compressed to INT4 by [NNCF](https://github.com/openvinotoolkit/nncf).
## Quantization Parameters
Weight compression was performed using `nncf.compress_weights` with the following parameters:
* mode: **int4_asym**
* ratio: **1**
* group_size: **128**
For more information on quantization, check the [OpenVINO model optimization guide](https://docs.openvino.ai/2024/openvino-workflow/model-optimization-guide/weight-compression.html).
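For reference, the parameters above correspond roughly to the following NNCF call; this is a sketch, not the exact export script used:

```python
import nncf

# `ov_model` is the OpenVINO IR model converted from google/gemma-2-9b-it
compressed_model = nncf.compress_weights(
    ov_model,
    mode=nncf.CompressWeightsMode.INT4_ASYM,
    ratio=1.0,
    group_size=128,
)
```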
## Compatibility
The provided OpenVINO™ IR model is compatible with:
* OpenVINO version 2024.5.0 and higher
* Optimum Intel 1.21.0 and higher
## Running Model Inference with [Optimum Intel](https://huggingface.co/docs/optimum/intel/index)
1. Install packages required for using [Optimum Intel](https://huggingface.co/docs/optimum/intel/index) integration with the OpenVINO backend:
```
pip install optimum[openvino]
```
2. Run model inference:
```
from transformers import AutoTokenizer
from optimum.intel.openvino import OVModelForCausalLM
model_id = "OpenVINO/gemma-2-9b-it-int4-ov"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = OVModelForCausalLM.from_pretrained(model_id)
inputs = tokenizer("What is OpenVINO?", return_tensors="pt")
outputs = model.generate(**inputs, max_length=200)
text = tokenizer.batch_decode(outputs)[0]
print(text)
```
For more examples and possible optimizations, refer to the [OpenVINO Large Language Model Inference Guide](https://docs.openvino.ai/2024/learn-openvino/llm_inference_guide.html).
## Running Model Inference with [OpenVINO GenAI](https://github.com/openvinotoolkit/openvino.genai)
1. Install packages required for using OpenVINO GenAI.
```
pip install openvino-genai huggingface_hub
```
2. Download model from HuggingFace Hub
```
import huggingface_hub as hf_hub
model_id = "OpenVINO/gemma-2-9b-it-int4-ov"
model_path = "gemma-2-9b-it-int4-ov"
hf_hub.snapshot_download(model_id, local_dir=model_path)
```
3. Run model inference:
```
import openvino_genai as ov_genai
device = "CPU"
pipe = ov_genai.LLMPipeline(model_path, device)
print(pipe.generate("What is OpenVINO?", max_length=200))
```
More GenAI usage examples can be found in the OpenVINO GenAI library [docs](https://github.com/openvinotoolkit/openvino.genai/blob/master/src/README.md) and [samples](https://github.com/openvinotoolkit/openvino.genai?tab=readme-ov-file#openvino-genai-samples).
## Limitations
Check the original model card for [original model card](https://huggingface.co/google/gemma-2-9b-it) for limitations.
## Legal information
The original model is distributed under [gemma](https://choosealicense.com/licenses/gemma/) license. More details can be found in [original model card](https://huggingface.co/google/gemma-2-9b-it).
## Disclaimer
Intel is committed to respecting human rights and avoiding causing or contributing to adverse impacts on human rights. See [Intel’s Global Human Rights Principles](https://www.intel.com/content/dam/www/central-libraries/us/en/documents/policy-human-rights.pdf). Intel’s products and software are intended only to be used in applications that do not cause or contribute to adverse impacts on human rights.
|
xplainlp/Llama-3.2-1B-Instruct-Explainable-Propaganda-Detection | xplainlp | "2025-03-25T10:16:22Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-03-25T09:13:00Z" | ---
base_model: unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Jia-ao
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
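For a quick smoke test, the checkpoint loads like any Llama-3.2 instruct model; the example prompt below is illustrative and assumes the chat template bundled with the tokenizer:

```python
from transformers import pipeline

pipe = pipeline("text-generation", model="xplainlp/Llama-3.2-1B-Instruct-Explainable-Propaganda-Detection")
messages = [{"role": "user", "content": "Does this sentence use loaded language? Explain briefly: 'The regime unleashed its cronies on the peaceful crowd.'"}]
print(pipe(messages, max_new_tokens=128)[0]["generated_text"])
```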
|
OwOOwO/bomb8 | OwOOwO | "2024-03-31T19:54:28Z" | 91 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-03-31T19:53:05Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
wendys-llc/autotrain-stfol-259iu | wendys-llc | "2024-03-08T16:16:36Z" | 184 | 0 | transformers | [
"transformers",
"safetensors",
"vit",
"image-classification",
"autotrain",
"dataset:autotrain-stfol-259iu/autotrain-data",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2024-03-08T16:16:26Z" |
---
tags:
- autotrain
- image-classification
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
datasets:
- autotrain-stfol-259iu/autotrain-data
---
# Model Trained Using AutoTrain
- Problem type: Image Classification
## Validation Metrics
loss: 0.5476600527763367
f1: 0.8000000000000002
precision: 0.8
recall: 0.8
auc: 0.9600000000000001
accuracy: 0.8
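The card ships no inference snippet, but AutoTrain image-classification checkpoints normally work with the standard pipeline; a minimal sketch (the sample image comes from the widget above):

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="wendys-llc/autotrain-stfol-259iu")
print(classifier("https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg"))
```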
|
nolanaatama/yffmx | nolanaatama | "2023-07-12T08:41:59Z" | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | "2023-02-08T02:23:42Z" | ---
license: creativeml-openrail-m
---
|
inarikami/monogptari-6.7b | inarikami | "2022-08-06T20:16:05Z" | 19 | 0 | transformers | [
"transformers",
"pytorch",
"opt",
"text-generation",
"generated_from_trainer",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2022-08-06T02:05:32Z" | ---
license: other
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: output_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# monogptari-6.7b
This model is a fine-tuned version of [facebook/opt-6.7b](https://huggingface.co/facebook/opt-6.7b) on an english monogatari (物語) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7030
- Accuracy: 0.8436
## Quick start
```python
from transformers import pipeline
generator = pipeline('text-generation', model="inarikami/monogptari-6.7b", device=0, use_fast=False)
generator("I think its about time I talked about Kiss-Shot", min_length=100, max_length=800,
do_sample=True, early_stopping=True, temperature=.98, top_k=50, top_p=1.0)
```
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 8
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.22.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
Bojun-Feng/Qwen2.5-Coder-1.5B-Instruct-GGUF-llamafile | Bojun-Feng | "2025-02-25T01:27:12Z" | 0 | 0 | transformers | [
"transformers",
"llamafile",
"code",
"codeqwen",
"chat",
"qwen",
"qwen-coder",
"text-generation",
"en",
"arxiv:2409.12186",
"arxiv:2407.10671",
"base_model:Qwen/Qwen2.5-Coder-1.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-Coder-1.5B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-02-25T01:18:13Z" | ---
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-Coder-1.5B-Instruct-GGUF/blob/main/LICENSE
language:
- en
base_model:
- Qwen/Qwen2.5-Coder-1.5B-Instruct
pipeline_tag: text-generation
library_name: transformers
tags:
- code
- codeqwen
- chat
- qwen
- qwen-coder
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://cdn-uploads.huggingface.co/production/uploads/64a523ba1ed90082dafde3d3/kJrkxofwOp-89uYFe0EBb.png" alt="LlamaFile" style="width: 50%; min-width: 400px; display: block; margin: auto;">
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
I am not the original creator of llamafile; all credit for llamafile goes to Jartine:
<!-- README_llamafile.md-about-llamafile end -->
<!-- repositories-available start -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/FwAVVu7eJ4">Chat & support: jartine's Discord server</a></p>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">jartine's LLM work is generously supported by a grant from <a href="https://mozilla.org">mozilla</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Qwen2.5 Coder 1.5B Instruct GGUF - llamafile
## Run LLMs locally with a single file - No installation required!
All you need to do is download a file and run it.
Our goal is to make open source large language models much more
accessible to both developers and end users. We're doing that by
combining [llama.cpp](https://github.com/ggerganov/llama.cpp) with [Cosmopolitan Libc](https://github.com/jart/cosmopolitan) into one
framework that collapses all the complexity of LLMs down to
a single-file executable (called a "llamafile") that runs
locally on most computers, with no installation.
## How to Use (Modified from [Git README](https://github.com/Mozilla-Ocho/llamafile/tree/8f73d39cf3a767897b8ade6dda45e5744c62356a?tab=readme-ov-file#quickstart))
The easiest way to try it for yourself is to download our example llamafile.
With llamafile, all inference happens locally; no data ever leaves your computer.
1. Download the llamafile.
2. Open your computer's terminal.
3. If you're using macOS, Linux, or BSD, you'll need to grant permission
for your computer to execute this new file. (You only need to do this
once.)
```sh
chmod +x qwen2.5-coder-1.5b-instruct-q8_0.gguf
```
4. If you're on Windows, rename the file by adding ".exe" on the end.
5. Run the llamafile. e.g.:
```sh
./qwen2.5-coder-1.5b-instruct-q8_0.gguf
```
6. Your browser should open automatically and display a chat interface.
(If it doesn't, just open your browser and point it at http://localhost:8080.)
7. When you're done chatting, return to your terminal and hit
`Control-C` to shut down llamafile.
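Once the server is up, you can also query it from code. A minimal sketch, assuming the OpenAI-compatible endpoint that recent llamafile builds serve at `http://localhost:8080/v1` (check your build if the path differs):
```python
# Sketch: query a running llamafile server over its OpenAI-compatible API.
# The endpoint path is an assumption based on recent llamafile builds.
import json
import urllib.request

payload = {
    "model": "local",  # most builds ignore this field
    "messages": [{"role": "user", "content": "Write a Python hello world."}],
}
req = urllib.request.Request(
    "http://localhost:8080/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp)["choices"][0]["message"]["content"])
```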
Please note that LlamaFile is still under active development. Some methods may not be compatible with the most recent documentation.
## Settings for Qwen2.5 Coder 1.5B Instruct GGUF Llamafiles
- Model creator: [Qwen](https://huggingface.co/Qwen)
- Quantized GGUF files used: [Qwen/Qwen2.5-Coder-1.5B-Instruct-GGUF](https://huggingface.co/Qwen/Qwen2.5-Coder-1.5B-Instruct-GGUF/tree/f86cb2c1fa58255f8052cc32aeede1b7482d4361)
- Commit message "update README.md"
- Commit hash f86cb2c1fa58255f8052cc32aeede1b7482d4361
- LlamaFile version used: [Mozilla-Ocho/llamafile](https://github.com/Mozilla-Ocho/llamafile/tree/29b5f27172306da39a9c70fe25173da1b1564f82)
- Commit message "Merge pull request #687 from Xydane/main Add Support for DeepSeek-R1 models"
- Commit hash 29b5f27172306da39a9c70fe25173da1b1564f82
- `.args` content format (example):
```
-m
qwen2.5-coder-1.5b-instruct-q8_0.gguf
...
```
## (The following is the original model card for Qwen2.5 Coder 1.5B Instruct GGUF)
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
# Qwen2.5-Coder-1.5B-Instruct-GGUF
## Introduction
Qwen2.5-Coder is the latest series of Code-Specific Qwen large language models (formerly known as CodeQwen). As of now, Qwen2.5-Coder covers six mainstream model sizes (0.5, 1.5, 3, 7, 14, and 32 billion parameters) to meet the needs of different developers. Qwen2.5-Coder brings the following improvements over CodeQwen1.5:
- Significant improvements in **code generation**, **code reasoning** and **code fixing**. Building on the strong Qwen2.5, we scaled the training tokens up to 5.5 trillion, including source code, text-code grounding, synthetic data, etc. Qwen2.5-Coder-32B has become the current state-of-the-art open-source code LLM, with its coding abilities matching those of GPT-4o.
- A more comprehensive foundation for real-world applications such as **Code Agents**, not only enhancing coding capabilities but also maintaining strengths in mathematics and general competencies.
**This repo contains the instruction-tuned 1.5B Qwen2.5-Coder model in the GGUF Format**, which has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Architecture: transformers with RoPE, SwiGLU, RMSNorm, Attention QKV bias and tied word embeddings
- Number of Parameters: 1.54B
- Number of Parameters (Non-Embedding): 1.31B
- Number of Layers: 28
- Number of Attention Heads (GQA): 12 for Q and 2 for KV
- Context Length: Full 32,768 tokens
- Note: Currently, only vLLM supports YARN for length extrapolating. If you want to process sequences up to 131,072 tokens, please refer to non-GGUF models.
- Quantization: q2_K, q3_K_M, q4_0, q4_K_M, q5_0, q5_K_M, q6_K, q8_0
For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2.5-coder-family/), [GitHub](https://github.com/QwenLM/Qwen2.5-Coder), [Documentation](https://qwen.readthedocs.io/en/latest/), [Arxiv](https://arxiv.org/abs/2409.12186).
## Quickstart
Check out our [llama.cpp documentation](https://qwen.readthedocs.io/en/latest/run_locally/llama.cpp.html) for more usage guide.
We advise you to clone [`llama.cpp`](https://github.com/ggerganov/llama.cpp) and install it following the official guide. We follow the latest version of llama.cpp.
In the following demonstration, we assume that you are running commands under the repository `llama.cpp`.
Since cloning the entire repo may be inefficient, you can manually download the GGUF file that you need or use `huggingface-cli`:
1. Install
```shell
pip install -U huggingface_hub
```
2. Download:
```shell
huggingface-cli download Qwen/Qwen2.5-Coder-1.5B-Instruct-GGUF qwen2.5-coder-1.5b-instruct-q4_k_m.gguf --local-dir . --local-dir-use-symlinks False
```
For a chatbot-like experience, it is recommended to start in conversation mode:
```shell
./llama-cli -m <gguf-file-path> \
-co -cnv -p "You are Qwen, created by Alibaba Cloud. You are a helpful assistant." \
-fa -ngl 80 -n 512
```
## Evaluation & Performance
Detailed evaluation results are reported in this [📑 blog](https://qwenlm.github.io/blog/qwen2.5-coder-family/).
For requirements on GPU memory and the respective throughput, see results [here](https://qwen.readthedocs.io/en/latest/benchmark/speed_benchmark.html).
## Citation
If you find our work helpful, feel free to give us a cite.
```
@article{hui2024qwen2,
title={Qwen2. 5-Coder Technical Report},
author={Hui, Binyuan and Yang, Jian and Cui, Zeyu and Yang, Jiaxi and Liu, Dayiheng and Zhang, Lei and Liu, Tianyu and Zhang, Jiajun and Yu, Bowen and Dang, Kai and others},
journal={arXiv preprint arXiv:2409.12186},
year={2024}
}
@article{qwen2,
title={Qwen2 Technical Report},
author={An Yang and Baosong Yang and Binyuan Hui and Bo Zheng and Bowen Yu and Chang Zhou and Chengpeng Li and Chengyuan Li and Dayiheng Liu and Fei Huang and Guanting Dong and Haoran Wei and Huan Lin and Jialong Tang and Jialin Wang and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Ma and Jin Xu and Jingren Zhou and Jinze Bai and Jinzheng He and Junyang Lin and Kai Dang and Keming Lu and Keqin Chen and Kexin Yang and Mei Li and Mingfeng Xue and Na Ni and Pei Zhang and Peng Wang and Ru Peng and Rui Men and Ruize Gao and Runji Lin and Shijie Wang and Shuai Bai and Sinan Tan and Tianhang Zhu and Tianhao Li and Tianyu Liu and Wenbin Ge and Xiaodong Deng and Xiaohuan Zhou and Xingzhang Ren and Xinyu Zhang and Xipin Wei and Xuancheng Ren and Yang Fan and Yang Yao and Yichang Zhang and Yu Wan and Yunfei Chu and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zhihao Fan},
journal={arXiv preprint arXiv:2407.10671},
year={2024}
}
```
|
Best000/79e3f53f-80a6-4bf3-b6db-a6161c9e8665 | Best000 | "2025-01-25T16:51:33Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:elyza/Llama-3-ELYZA-JP-8B",
"base_model:adapter:elyza/Llama-3-ELYZA-JP-8B",
"license:llama3",
"region:us"
] | null | "2025-01-25T16:45:49Z" | ---
library_name: peft
license: llama3
base_model: elyza/Llama-3-ELYZA-JP-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 79e3f53f-80a6-4bf3-b6db-a6161c9e8665
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: elyza/Llama-3-ELYZA-JP-8B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 6faa97614c440638_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/6faa97614c440638_train_data.json
type:
field_instruction: instruction
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: Best000/79e3f53f-80a6-4bf3-b6db-a6161c9e8665
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/6faa97614c440638_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: <|eot_id|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: f1e587ed-1a43-4c6a-81c5-bac9a96acad5
wandb_project: Birthday-SN56-16-Gradients-On-Demand
wandb_run: your_name
wandb_runid: f1e587ed-1a43-4c6a-81c5-bac9a96acad5
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 79e3f53f-80a6-4bf3-b6db-a6161c9e8665
This model is a fine-tuned version of [elyza/Llama-3-ELYZA-JP-8B](https://huggingface.co/elyza/Llama-3-ELYZA-JP-8B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9401
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.9868 | 0.0002 | 1 | 2.0999 |
| 2.0852 | 0.0005 | 3 | 2.0893 |
| 1.8017 | 0.0010 | 6 | 1.9914 |
| 2.1757 | 0.0016 | 9 | 1.9401 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
LHRuig/aidmahyperreal | LHRuig | "2025-02-08T13:01:27Z" | 6 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] | text-to-image | "2025-02-08T13:01:15Z" | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: suit
output:
url: images/suit.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: man
---
# aidmahyperreal
<Gallery />
## Model description
aidmahyperreal lora
## Trigger words
You should use `man` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/LHRuig/aidmahyperreal/tree/main) them in the Files & versions tab.
|
gaianet/Llama-3-8B-Lexi-Uncensored-GGUF | gaianet | "2024-12-12T09:09:47Z" | 23 | 0 | null | [
"gguf",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-12-12T09:02:55Z" | ---
license: apache-2.0
---
|
ascari-maximilian/test-tapas | ascari-maximilian | "2024-01-06T15:58:53Z" | 0 | 0 | null | [
"en",
"license:apache-2.0",
"region:us"
] | null | "2024-01-06T14:39:13Z" | ---
title: tapas test
license: apache-2.0
language:
- en
--- |
vIDEO-Sophie-Rain-Spider-Man-Updates/Sophie-Rain-Spiderman-Video-Tutorial-Viral-Full-Video-Link | vIDEO-Sophie-Rain-Spider-Man-Updates | "2025-02-26T17:12:21Z" | 0 | 0 | null | [
"region:us"
] | null | "2025-02-26T17:11:07Z" | <p><a href="https://social.danielwellington.com/srain" rel="nofollow">🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐖𝐚𝐭𝐜𝐡 𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨)</a></p>
<p><a href="https://social.danielwellington.com/srain" rel="nofollow">🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨 𝐋𝐢𝐧𝐤 )</a></p>
<p><a href="https://social.danielwellington.com/srain" rel="nofollow"><img alt="fsd" src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif"></a></p> |
tensorblock/neural-chat-7b-v3-1-OpenHermes-2.5-7B-GGUF | tensorblock | "2025-01-07T06:24:14Z" | 28 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"base_model:Weyaxi/neural-chat-7b-v3-1-OpenHermes-2.5-7B",
"base_model:quantized:Weyaxi/neural-chat-7b-v3-1-OpenHermes-2.5-7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2025-01-07T05:44:16Z" | ---
license: apache-2.0
base_model: Weyaxi/neural-chat-7b-v3-1-OpenHermes-2.5-7B
tags:
- TensorBlock
- GGUF
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## Weyaxi/neural-chat-7b-v3-1-OpenHermes-2.5-7B - GGUF
This repo contains GGUF format model files for [Weyaxi/neural-chat-7b-v3-1-OpenHermes-2.5-7B](https://huggingface.co/Weyaxi/neural-chat-7b-v3-1-OpenHermes-2.5-7B).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
<div style="text-align: left; margin: 20px 0;">
<a href="https://tensorblock.co/waitlist/client" style="display: inline-block; padding: 10px 20px; background-color: #007bff; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;">
Run them on the TensorBlock client using your local machine ↗
</a>
</div>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [neural-chat-7b-v3-1-OpenHermes-2.5-7B-Q2_K.gguf](https://huggingface.co/tensorblock/neural-chat-7b-v3-1-OpenHermes-2.5-7B-GGUF/blob/main/neural-chat-7b-v3-1-OpenHermes-2.5-7B-Q2_K.gguf) | Q2_K | 2.719 GB | smallest, significant quality loss - not recommended for most purposes |
| [neural-chat-7b-v3-1-OpenHermes-2.5-7B-Q3_K_S.gguf](https://huggingface.co/tensorblock/neural-chat-7b-v3-1-OpenHermes-2.5-7B-GGUF/blob/main/neural-chat-7b-v3-1-OpenHermes-2.5-7B-Q3_K_S.gguf) | Q3_K_S | 3.165 GB | very small, high quality loss |
| [neural-chat-7b-v3-1-OpenHermes-2.5-7B-Q3_K_M.gguf](https://huggingface.co/tensorblock/neural-chat-7b-v3-1-OpenHermes-2.5-7B-GGUF/blob/main/neural-chat-7b-v3-1-OpenHermes-2.5-7B-Q3_K_M.gguf) | Q3_K_M | 3.519 GB | very small, high quality loss |
| [neural-chat-7b-v3-1-OpenHermes-2.5-7B-Q3_K_L.gguf](https://huggingface.co/tensorblock/neural-chat-7b-v3-1-OpenHermes-2.5-7B-GGUF/blob/main/neural-chat-7b-v3-1-OpenHermes-2.5-7B-Q3_K_L.gguf) | Q3_K_L | 3.822 GB | small, substantial quality loss |
| [neural-chat-7b-v3-1-OpenHermes-2.5-7B-Q4_0.gguf](https://huggingface.co/tensorblock/neural-chat-7b-v3-1-OpenHermes-2.5-7B-GGUF/blob/main/neural-chat-7b-v3-1-OpenHermes-2.5-7B-Q4_0.gguf) | Q4_0 | 4.109 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [neural-chat-7b-v3-1-OpenHermes-2.5-7B-Q4_K_S.gguf](https://huggingface.co/tensorblock/neural-chat-7b-v3-1-OpenHermes-2.5-7B-GGUF/blob/main/neural-chat-7b-v3-1-OpenHermes-2.5-7B-Q4_K_S.gguf) | Q4_K_S | 4.140 GB | small, greater quality loss |
| [neural-chat-7b-v3-1-OpenHermes-2.5-7B-Q4_K_M.gguf](https://huggingface.co/tensorblock/neural-chat-7b-v3-1-OpenHermes-2.5-7B-GGUF/blob/main/neural-chat-7b-v3-1-OpenHermes-2.5-7B-Q4_K_M.gguf) | Q4_K_M | 4.368 GB | medium, balanced quality - recommended |
| [neural-chat-7b-v3-1-OpenHermes-2.5-7B-Q5_0.gguf](https://huggingface.co/tensorblock/neural-chat-7b-v3-1-OpenHermes-2.5-7B-GGUF/blob/main/neural-chat-7b-v3-1-OpenHermes-2.5-7B-Q5_0.gguf) | Q5_0 | 4.998 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [neural-chat-7b-v3-1-OpenHermes-2.5-7B-Q5_K_S.gguf](https://huggingface.co/tensorblock/neural-chat-7b-v3-1-OpenHermes-2.5-7B-GGUF/blob/main/neural-chat-7b-v3-1-OpenHermes-2.5-7B-Q5_K_S.gguf) | Q5_K_S | 4.998 GB | large, low quality loss - recommended |
| [neural-chat-7b-v3-1-OpenHermes-2.5-7B-Q5_K_M.gguf](https://huggingface.co/tensorblock/neural-chat-7b-v3-1-OpenHermes-2.5-7B-GGUF/blob/main/neural-chat-7b-v3-1-OpenHermes-2.5-7B-Q5_K_M.gguf) | Q5_K_M | 5.131 GB | large, very low quality loss - recommended |
| [neural-chat-7b-v3-1-OpenHermes-2.5-7B-Q6_K.gguf](https://huggingface.co/tensorblock/neural-chat-7b-v3-1-OpenHermes-2.5-7B-GGUF/blob/main/neural-chat-7b-v3-1-OpenHermes-2.5-7B-Q6_K.gguf) | Q6_K | 5.942 GB | very large, extremely low quality loss |
| [neural-chat-7b-v3-1-OpenHermes-2.5-7B-Q8_0.gguf](https://huggingface.co/tensorblock/neural-chat-7b-v3-1-OpenHermes-2.5-7B-GGUF/blob/main/neural-chat-7b-v3-1-OpenHermes-2.5-7B-Q8_0.gguf) | Q8_0 | 7.696 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face Hub client:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/neural-chat-7b-v3-1-OpenHermes-2.5-7B-GGUF --include "neural-chat-7b-v3-1-OpenHermes-2.5-7B-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/neural-chat-7b-v3-1-OpenHermes-2.5-7B-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
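The same files can also be fetched from Python. A minimal sketch using `huggingface_hub` (pick any filename from the table above):
```python
# Sketch: download a single quant file from this repo via the Python API.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="tensorblock/neural-chat-7b-v3-1-OpenHermes-2.5-7B-GGUF",
    filename="neural-chat-7b-v3-1-OpenHermes-2.5-7B-Q4_K_M.gguf",
    local_dir="MY_LOCAL_DIR",
)
print(path)  # local path of the downloaded GGUF file
```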
|
Legalaz/05_llambodot1_02_52 | Legalaz | "2025-01-22T07:55:41Z" | 8 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2203.05482",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-01-22T07:53:23Z" | ---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
---
# top
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [linear](https://arxiv.org/abs/2203.05482) merge method.
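As a rough illustration (not mergekit's actual implementation): a linear merge takes a weighted average of the models' corresponding parameter tensors, with the weights typically normalized to sum to one:
```python
# Illustrative sketch only; mergekit's linear method is the real implementation.
import torch

def linear_merge(state_dicts: list[dict[str, torch.Tensor]],
                 weights: list[float]) -> dict[str, torch.Tensor]:
    total = sum(weights)  # normalize, e.g. weights 0.9119 and 0.0628
    merged = {}
    for name in state_dicts[0]:
        merged[name] = sum(w * sd[name] for w, sd in zip(weights, state_dicts)) / total
    return merged
```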
### Models Merged
The following models were included in the merge:
* /root/top2
* /root/top1
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: /root/top2
parameters:
weight: 0.9119
- model: /root/top1
parameters:
weight: 0.0628
merge_method: linear
dtype: bfloat16
```
|
mradermacher/MN-12B-Celeste-V1.9-i1-GGUF | mradermacher | "2024-08-01T06:55:27Z" | 69 | 3 | transformers | [
"transformers",
"gguf",
"en",
"dataset:nothingiisreal/c2-logs-cleaned",
"dataset:kalomaze/Opus_Instruct_25k",
"dataset:nothingiisreal/Reddit-Dirty-And-WritingPrompts",
"base_model:nothingiisreal/MN-12B-Celeste-V1.9",
"base_model:quantized:nothingiisreal/MN-12B-Celeste-V1.9",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | "2024-08-01T02:46:41Z" | ---
base_model: nothingiisreal/MN-12B-Celeste-V1.9
datasets:
- nothingiisreal/c2-logs-cleaned
- kalomaze/Opus_Instruct_25k
- nothingiisreal/Reddit-Dirty-And-WritingPrompts
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/nothingiisreal/MN-12B-Celeste-V1.9
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/MN-12B-Celeste-V1.9-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
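If you prefer Python over the llama.cpp binaries, here is a minimal sketch using the `llama-cpp-python` bindings; the filename is one of the quants from the table below:
```python
# Sketch: load a downloaded quant with llama-cpp-python and generate text.
from llama_cpp import Llama

llm = Llama(model_path="MN-12B-Celeste-V1.9.i1-Q4_K_M.gguf", n_ctx=4096)
out = llm("Write a short scene in second person.", max_tokens=128)
print(out["choices"][0]["text"])
```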
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MN-12B-Celeste-V1.9-i1-GGUF/resolve/main/MN-12B-Celeste-V1.9.i1-IQ1_S.gguf) | i1-IQ1_S | 3.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/MN-12B-Celeste-V1.9-i1-GGUF/resolve/main/MN-12B-Celeste-V1.9.i1-IQ1_M.gguf) | i1-IQ1_M | 3.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/MN-12B-Celeste-V1.9-i1-GGUF/resolve/main/MN-12B-Celeste-V1.9.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/MN-12B-Celeste-V1.9-i1-GGUF/resolve/main/MN-12B-Celeste-V1.9.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/MN-12B-Celeste-V1.9-i1-GGUF/resolve/main/MN-12B-Celeste-V1.9.i1-IQ2_S.gguf) | i1-IQ2_S | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/MN-12B-Celeste-V1.9-i1-GGUF/resolve/main/MN-12B-Celeste-V1.9.i1-IQ2_M.gguf) | i1-IQ2_M | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/MN-12B-Celeste-V1.9-i1-GGUF/resolve/main/MN-12B-Celeste-V1.9.i1-Q2_K.gguf) | i1-Q2_K | 4.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/MN-12B-Celeste-V1.9-i1-GGUF/resolve/main/MN-12B-Celeste-V1.9.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 5.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MN-12B-Celeste-V1.9-i1-GGUF/resolve/main/MN-12B-Celeste-V1.9.i1-IQ3_XS.gguf) | i1-IQ3_XS | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/MN-12B-Celeste-V1.9-i1-GGUF/resolve/main/MN-12B-Celeste-V1.9.i1-Q3_K_S.gguf) | i1-Q3_K_S | 5.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/MN-12B-Celeste-V1.9-i1-GGUF/resolve/main/MN-12B-Celeste-V1.9.i1-IQ3_S.gguf) | i1-IQ3_S | 5.7 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/MN-12B-Celeste-V1.9-i1-GGUF/resolve/main/MN-12B-Celeste-V1.9.i1-IQ3_M.gguf) | i1-IQ3_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/MN-12B-Celeste-V1.9-i1-GGUF/resolve/main/MN-12B-Celeste-V1.9.i1-Q3_K_M.gguf) | i1-Q3_K_M | 6.2 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/MN-12B-Celeste-V1.9-i1-GGUF/resolve/main/MN-12B-Celeste-V1.9.i1-Q3_K_L.gguf) | i1-Q3_K_L | 6.7 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/MN-12B-Celeste-V1.9-i1-GGUF/resolve/main/MN-12B-Celeste-V1.9.i1-IQ4_XS.gguf) | i1-IQ4_XS | 6.8 | |
| [GGUF](https://huggingface.co/mradermacher/MN-12B-Celeste-V1.9-i1-GGUF/resolve/main/MN-12B-Celeste-V1.9.i1-Q4_0.gguf) | i1-Q4_0 | 7.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/MN-12B-Celeste-V1.9-i1-GGUF/resolve/main/MN-12B-Celeste-V1.9.i1-Q4_K_S.gguf) | i1-Q4_K_S | 7.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/MN-12B-Celeste-V1.9-i1-GGUF/resolve/main/MN-12B-Celeste-V1.9.i1-Q4_K_M.gguf) | i1-Q4_K_M | 7.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MN-12B-Celeste-V1.9-i1-GGUF/resolve/main/MN-12B-Celeste-V1.9.i1-Q5_K_S.gguf) | i1-Q5_K_S | 8.6 | |
| [GGUF](https://huggingface.co/mradermacher/MN-12B-Celeste-V1.9-i1-GGUF/resolve/main/MN-12B-Celeste-V1.9.i1-Q5_K_M.gguf) | i1-Q5_K_M | 8.8 | |
| [GGUF](https://huggingface.co/mradermacher/MN-12B-Celeste-V1.9-i1-GGUF/resolve/main/MN-12B-Celeste-V1.9.i1-Q6_K.gguf) | i1-Q6_K | 10.2 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
farzintava/a2c-PandaReachDense-v3 | farzintava | "2023-11-09T04:30:09Z" | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2023-11-09T04:24:07Z" | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.18 +/- 0.10
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of a **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Filename assumed to follow the usual huggingface_sb3 naming convention.
checkpoint = load_from_hub("farzintava/a2c-PandaReachDense-v3", "a2c-PandaReachDense-v3.zip")
model = A2C.load(checkpoint)
```
|
joboffer/8018464b-c631-441d-bffc-6475c80a17aa | joboffer | "2025-01-23T11:09:24Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-1.5B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-1.5B-Instruct",
"license:apache-2.0",
"region:us"
] | null | "2025-01-23T10:05:17Z" | ---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2.5-1.5B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 8018464b-c631-441d-bffc-6475c80a17aa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen2.5-1.5B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 1beda2e0203b6636_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/1beda2e0203b6636_train_data.json
type:
field_input: Description
field_instruction: Patient
field_output: Doctor
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: 1
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: joboffer/8018464b-c631-441d-bffc-6475c80a17aa
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 79GiB
max_steps: 30
micro_batch_size: 4
mlflow_experiment_name: /tmp/1beda2e0203b6636_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: cd60c576-2bf0-48eb-8def-949b7ada809b
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: cd60c576-2bf0-48eb-8def-949b7ada809b
warmup_steps: 5
weight_decay: 0.001
xformers_attention: true
```
</details><br>
# 8018464b-c631-441d-bffc-6475c80a17aa
This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7644
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0001 | 1 | 3.0594 |
| 2.8427 | 0.0003 | 5 | 2.9909 |
| 2.7062 | 0.0007 | 10 | 2.8806 |
| 2.8286 | 0.0010 | 15 | 2.8268 |
| 2.7586 | 0.0014 | 20 | 2.7870 |
| 2.8216 | 0.0017 | 25 | 2.7677 |
| 2.6967 | 0.0020 | 30 | 2.7644 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
nttx/ba21309f-690d-4220-80ac-583c37fca06c | nttx | "2025-01-25T05:25:13Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-3B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-3B-Instruct",
"license:other",
"region:us"
] | null | "2025-01-25T04:35:25Z" | ---
library_name: peft
license: other
base_model: Qwen/Qwen2.5-3B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: ba21309f-690d-4220-80ac-583c37fca06c
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen2.5-3B-Instruct
bf16: true
chat_template: llama3
data_processes: 16
dataset_prepared_path: null
datasets:
- data_files:
- 781ead21ad4491a9_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/781ead21ad4491a9_train_data.json
type:
field_instruction: premise
field_output: hypothesis
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: nttx/ba21309f-690d-4220-80ac-583c37fca06c
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/781ead21ad4491a9_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 401e2ab0-acfc-4fff-9d51-06c7c03df759
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 401e2ab0-acfc-4fff-9d51-06c7c03df759
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# ba21309f-690d-4220-80ac-583c37fca06c
This model is a fine-tuned version of [Qwen/Qwen2.5-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7831
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 3.7226 | 0.0002 | 1 | 5.4921 |
| 2.2221 | 0.0100 | 50 | 1.9000 |
| 2.2292 | 0.0199 | 100 | 1.8310 |
| 2.0782 | 0.0299 | 150 | 1.7906 |
| 2.0163 | 0.0398 | 200 | 1.7831 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Venkataramasubramanian/GLChatbot | Venkataramasubramanian | "2024-09-25T14:16:43Z" | 135 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"question-answering",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | question-answering | "2024-09-22T22:34:16Z" | ---
tags:
- transformers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
RIAL-AI/txt2img-tryon-15k | RIAL-AI | "2025-02-14T20:14:08Z" | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"arxiv:1910.09700",
"region:us"
] | null | "2025-02-14T20:07:11Z" | ---
library_name: diffusers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Federic/TestPrompt | Federic | "2024-01-22T10:47:48Z" | 0 | 0 | null | [
"safetensors",
"generated_from_trainer",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"base_model:finetune:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"license:apache-2.0",
"region:us"
] | null | "2024-01-22T08:39:27Z" | ---
license: apache-2.0
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
tags:
- generated_from_trainer
model-index:
- name: TestPrompt
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# TestPrompt
This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 1
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
harouzie/bart-base-paws_unlabeled | harouzie | "2023-04-06T09:27:26Z" | 96 | 0 | transformers | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"generated_from_trainer",
"dataset:paws",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2023-04-06T06:56:05Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- paws
model-index:
- name: bart-base-paws_unlabeled
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-base-paws_unlabeled
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the paws dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 2
- label_smoothing_factor: 0.1
### Framework versions
- Transformers 4.27.4
- Pytorch 1.13.0+cpu
- Datasets 2.1.0
- Tokenizers 0.13.2
|
Max200293/wav2vec2-classic-300m-norwegian-colab-hung | Max200293 | "2023-11-28T22:05:33Z" | 9 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:voxpopuli",
"base_model:facebook/wav2vec2-xls-r-300m",
"base_model:finetune:facebook/wav2vec2-xls-r-300m",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2023-11-28T18:21:56Z" | ---
license: apache-2.0
base_model: facebook/wav2vec2-xls-r-300m
tags:
- generated_from_trainer
datasets:
- voxpopuli
metrics:
- wer
model-index:
- name: wav2vec2-classic-300m-norwegian-colab-hung
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: voxpopuli
type: voxpopuli
config: fi
split: test
args: fi
metrics:
- name: Wer
type: wer
value: 1.7882131661442007
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-classic-300m-norwegian-colab-hung
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the voxpopuli dataset.
It achieves the following results on the evaluation set:
- Loss: 3.8820
- Wer: 1.7882
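A minimal transcription sketch (pipeline-based; the audio filename is a placeholder):
```python
# Sketch: transcribe a local audio file with the fine-tuned checkpoint.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition",
               model="Max200293/wav2vec2-classic-300m-norwegian-colab-hung")
print(asr("sample.wav")["text"])  # "sample.wav" is an assumed local file
```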
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.7686 | 2.57 | 400 | 2.9953 | 1.0 |
| 2.5005 | 5.14 | 800 | 2.2739 | 1.9808 |
| 1.6554 | 7.72 | 1200 | 2.4720 | 1.6708 |
| 1.1995 | 10.29 | 1600 | 2.2613 | 1.2480 |
| 0.8972 | 12.86 | 2000 | 2.7599 | 1.8873 |
| 0.6962 | 15.43 | 2400 | 3.2783 | 1.9560 |
| 0.5554 | 18.01 | 2800 | 3.2272 | 1.7544 |
| 0.4234 | 20.58 | 3200 | 3.0755 | 1.5645 |
| 0.3341 | 23.15 | 3600 | 3.5022 | 1.7442 |
| 0.2832 | 25.72 | 4000 | 3.7905 | 1.8324 |
| 0.2293 | 28.3 | 4400 | 3.8820 | 1.7882 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
NethamTSG/ppo-LunarLander-v5 | NethamTSG | "2023-08-22T15:09:22Z" | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2023-08-22T15:08:58Z" | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 287.98 +/- 17.19
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Filename assumed to follow the usual huggingface_sb3 naming convention.
checkpoint = load_from_hub("NethamTSG/ppo-LunarLander-v5", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
ctu-aic/xlm-roberta-large-nli-csfever | ctu-aic | "2024-03-07T14:57:53Z" | 88 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"cs",
"dataset:ctu-aic/csfever_nli",
"arxiv:2312.10171",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-03-05T13:32:24Z" | ---
datasets:
- ctu-aic/csfever_nli
language:
- cs
pipeline_tag: text-classification
---
This model is [deepset/xlm-roberta-large-squad2](https://huggingface.co/deepset/xlm-roberta-large-squad2) finetuned on the [CsFEVER-NLI](https://huggingface.co/datasets/ctu-aic/csfever_nli) dataset.
For more information, see our [Pipeline and Dataset Generation for Automated Fact-checking in Almost Any Language](https://arxiv.org/abs/2312.10171) paper.
It is currently under review for the [NCAA](https://link.springer.com/journal/521) journal.
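A minimal usage sketch (the label names come from the checkpoint's config and are not verified here):
```python
# Sketch: score an evidence/claim pair with the NLI classifier.
from transformers import pipeline

nli = pipeline("text-classification", model="ctu-aic/xlm-roberta-large-nli-csfever")
print(nli({"text": "Praha je hlavní město České republiky.",
           "text_pair": "Praha je hlavní město."}))
```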
```bibtex
@article{drchal2023pipeline,
title={Pipeline and Dataset Generation for Automated Fact-checking in Almost Any Language},
author={Drchal, Jan and Ullrich, Herbert and Mlyn{\'a}{\v{r}}, Tom{\'a}{\v{s}} and Moravec, V{\'a}clav},
journal={arXiv preprint arXiv:2312.10171},
year={2023}
}
``` |
abhishek/autonlp-imdb-roberta-base-3662644 | abhishek | "2022-02-04T14:25:35Z" | 16 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"autonlp",
"unk",
"dataset:abhishek/autonlp-data-imdb-roberta-base",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2022-03-02T23:29:05Z" | ---
tags: autonlp
language: unk
widget:
- text: "I love AutoNLP 🤗"
datasets:
- abhishek/autonlp-data-imdb-roberta-base
co2_eq_emissions: 25.894117734124272
---
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 3662644
- CO2 Emissions (in grams): 25.894117734124272
## Validation Metrics
- Loss: 0.20277436077594757
- Accuracy: 0.92604
- Precision: 0.9560674830864092
- Recall: 0.89312
- AUC: 0.9814625504000001
- F1: 0.9235223559581421
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/abhishek/autonlp-imdb-roberta-base-3662644
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("abhishek/autonlp-imdb-roberta-base-3662644", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("abhishek/autonlp-imdb-roberta-base-3662644", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
GaiaFramework/Hotel_Churn_Rate | GaiaFramework | "2025-03-20T13:28:34Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2025-03-20T13:24:08Z" | ---
license: apache-2.0
---
|
MayBashendy/ArabicNewSplits5_FineTuningAraBERT_run1_AugV5_k5_task1_organization | MayBashendy | "2024-12-15T18:50:06Z" | 163 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:aubmindlab/bert-base-arabertv02",
"base_model:finetune:aubmindlab/bert-base-arabertv02",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-12-15T18:43:29Z" | ---
library_name: transformers
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: ArabicNewSplits5_FineTuningAraBERT_run1_AugV5_k5_task1_organization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ArabicNewSplits5_FineTuningAraBERT_run1_AugV5_k5_task1_organization
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8903
- Qwk: 0.5826
- Mse: 0.8903
- Rmse: 0.9436
## Model description
More information needed
## Intended uses & limitations
More information needed
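As a sketch of the intended usage, the checkpoint can be loaded with the standard sequence-classification API (an assumption based on the architecture; the meaning of the predicted scores follows the Qwk-evaluated rubric above):
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "MayBashendy/ArabicNewSplits5_FineTuningAraBERT_run1_AugV5_k5_task1_organization"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Hypothetical Arabic input to score
inputs = tokenizer("نص عربي قصير للتقييم", return_tensors="pt")
print(model(**inputs).logits)
```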
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:------:|:----:|:---------------:|:-------:|:------:|:------:|
| No log | 0.0606 | 2 | 5.2166 | -0.0138 | 5.2166 | 2.2840 |
| No log | 0.1212 | 4 | 3.2613 | 0.0710 | 3.2613 | 1.8059 |
| No log | 0.1818 | 6 | 2.1905 | 0.0427 | 2.1905 | 1.4800 |
| No log | 0.2424 | 8 | 1.5376 | 0.1504 | 1.5376 | 1.2400 |
| No log | 0.3030 | 10 | 1.4516 | 0.1197 | 1.4516 | 1.2048 |
| No log | 0.3636 | 12 | 1.3179 | 0.2209 | 1.3179 | 1.1480 |
| No log | 0.4242 | 14 | 1.2852 | 0.2549 | 1.2852 | 1.1337 |
| No log | 0.4848 | 16 | 1.2898 | 0.2979 | 1.2898 | 1.1357 |
| No log | 0.5455 | 18 | 1.4806 | 0.1466 | 1.4806 | 1.2168 |
| No log | 0.6061 | 20 | 1.5007 | 0.1215 | 1.5007 | 1.2250 |
| No log | 0.6667 | 22 | 1.5892 | 0.1026 | 1.5892 | 1.2606 |
| No log | 0.7273 | 24 | 1.5446 | 0.1625 | 1.5446 | 1.2428 |
| No log | 0.7879 | 26 | 1.4007 | 0.2686 | 1.4007 | 1.1835 |
| No log | 0.8485 | 28 | 1.4015 | 0.3332 | 1.4015 | 1.1838 |
| No log | 0.9091 | 30 | 1.2115 | 0.3893 | 1.2115 | 1.1007 |
| No log | 0.9697 | 32 | 1.0328 | 0.4178 | 1.0328 | 1.0163 |
| No log | 1.0303 | 34 | 1.0294 | 0.4676 | 1.0294 | 1.0146 |
| No log | 1.0909 | 36 | 1.0846 | 0.4337 | 1.0846 | 1.0414 |
| No log | 1.1515 | 38 | 1.1566 | 0.4586 | 1.1566 | 1.0754 |
| No log | 1.2121 | 40 | 1.2136 | 0.4342 | 1.2136 | 1.1016 |
| No log | 1.2727 | 42 | 1.4292 | 0.3392 | 1.4292 | 1.1955 |
| No log | 1.3333 | 44 | 1.5661 | 0.3208 | 1.5661 | 1.2515 |
| No log | 1.3939 | 46 | 1.6473 | 0.3349 | 1.6473 | 1.2835 |
| No log | 1.4545 | 48 | 1.2493 | 0.3540 | 1.2493 | 1.1177 |
| No log | 1.5152 | 50 | 1.0140 | 0.4423 | 1.0140 | 1.0070 |
| No log | 1.5758 | 52 | 0.9603 | 0.4389 | 0.9603 | 0.9799 |
| No log | 1.6364 | 54 | 0.9540 | 0.4067 | 0.9540 | 0.9767 |
| No log | 1.6970 | 56 | 0.9756 | 0.5233 | 0.9756 | 0.9877 |
| No log | 1.7576 | 58 | 1.0211 | 0.4995 | 1.0211 | 1.0105 |
| No log | 1.8182 | 60 | 1.0593 | 0.4790 | 1.0593 | 1.0292 |
| No log | 1.8788 | 62 | 1.1793 | 0.4374 | 1.1793 | 1.0860 |
| No log | 1.9394 | 64 | 1.1368 | 0.4415 | 1.1368 | 1.0662 |
| No log | 2.0 | 66 | 1.0395 | 0.4694 | 1.0395 | 1.0195 |
| No log | 2.0606 | 68 | 0.9485 | 0.4769 | 0.9485 | 0.9739 |
| No log | 2.1212 | 70 | 0.9566 | 0.4423 | 0.9566 | 0.9781 |
| No log | 2.1818 | 72 | 0.9647 | 0.4440 | 0.9647 | 0.9822 |
| No log | 2.2424 | 74 | 1.0662 | 0.3369 | 1.0662 | 1.0326 |
| No log | 2.3030 | 76 | 1.1820 | 0.3743 | 1.1820 | 1.0872 |
| No log | 2.3636 | 78 | 1.2616 | 0.3285 | 1.2616 | 1.1232 |
| No log | 2.4242 | 80 | 1.2923 | 0.3558 | 1.2923 | 1.1368 |
| No log | 2.4848 | 82 | 1.2058 | 0.4057 | 1.2058 | 1.0981 |
| No log | 2.5455 | 84 | 1.0274 | 0.4245 | 1.0274 | 1.0136 |
| No log | 2.6061 | 86 | 0.9689 | 0.5346 | 0.9689 | 0.9843 |
| No log | 2.6667 | 88 | 0.9716 | 0.5196 | 0.9716 | 0.9857 |
| No log | 2.7273 | 90 | 1.0410 | 0.4567 | 1.0410 | 1.0203 |
| No log | 2.7879 | 92 | 1.0647 | 0.4635 | 1.0647 | 1.0319 |
| No log | 2.8485 | 94 | 1.0745 | 0.4639 | 1.0745 | 1.0366 |
| No log | 2.9091 | 96 | 1.1404 | 0.4390 | 1.1404 | 1.0679 |
| No log | 2.9697 | 98 | 1.1632 | 0.4416 | 1.1632 | 1.0785 |
| No log | 3.0303 | 100 | 1.0348 | 0.4645 | 1.0348 | 1.0173 |
| No log | 3.0909 | 102 | 0.9534 | 0.5369 | 0.9534 | 0.9764 |
| No log | 3.1515 | 104 | 0.8940 | 0.5378 | 0.8940 | 0.9455 |
| No log | 3.2121 | 106 | 0.8629 | 0.6157 | 0.8629 | 0.9289 |
| No log | 3.2727 | 108 | 0.8800 | 0.6285 | 0.8800 | 0.9381 |
| No log | 3.3333 | 110 | 0.9247 | 0.5546 | 0.9247 | 0.9616 |
| No log | 3.3939 | 112 | 0.9527 | 0.5183 | 0.9527 | 0.9761 |
| No log | 3.4545 | 114 | 0.8658 | 0.6149 | 0.8658 | 0.9305 |
| No log | 3.5152 | 116 | 0.7852 | 0.6373 | 0.7852 | 0.8861 |
| No log | 3.5758 | 118 | 0.7767 | 0.6152 | 0.7767 | 0.8813 |
| No log | 3.6364 | 120 | 0.7998 | 0.5696 | 0.7998 | 0.8943 |
| No log | 3.6970 | 122 | 0.7982 | 0.6336 | 0.7982 | 0.8934 |
| No log | 3.7576 | 124 | 0.7999 | 0.6151 | 0.7999 | 0.8944 |
| No log | 3.8182 | 126 | 0.8351 | 0.6350 | 0.8351 | 0.9138 |
| No log | 3.8788 | 128 | 0.8315 | 0.6515 | 0.8315 | 0.9119 |
| No log | 3.9394 | 130 | 0.8047 | 0.6322 | 0.8047 | 0.8970 |
| No log | 4.0 | 132 | 0.7857 | 0.6380 | 0.7857 | 0.8864 |
| No log | 4.0606 | 134 | 0.7715 | 0.6380 | 0.7715 | 0.8784 |
| No log | 4.1212 | 136 | 0.7630 | 0.6332 | 0.7630 | 0.8735 |
| No log | 4.1818 | 138 | 0.7653 | 0.6245 | 0.7653 | 0.8748 |
| No log | 4.2424 | 140 | 0.7700 | 0.6279 | 0.7700 | 0.8775 |
| No log | 4.3030 | 142 | 0.7973 | 0.5969 | 0.7973 | 0.8929 |
| No log | 4.3636 | 144 | 0.8336 | 0.6231 | 0.8336 | 0.9130 |
| No log | 4.4242 | 146 | 0.8464 | 0.6288 | 0.8464 | 0.9200 |
| No log | 4.4848 | 148 | 0.8042 | 0.6322 | 0.8042 | 0.8968 |
| No log | 4.5455 | 150 | 0.7518 | 0.6489 | 0.7518 | 0.8670 |
| No log | 4.6061 | 152 | 0.7487 | 0.6237 | 0.7487 | 0.8653 |
| No log | 4.6667 | 154 | 0.7667 | 0.6337 | 0.7667 | 0.8756 |
| No log | 4.7273 | 156 | 0.7965 | 0.6224 | 0.7965 | 0.8925 |
| No log | 4.7879 | 158 | 0.8421 | 0.5902 | 0.8421 | 0.9177 |
| No log | 4.8485 | 160 | 0.8621 | 0.5880 | 0.8621 | 0.9285 |
| No log | 4.9091 | 162 | 0.8809 | 0.6075 | 0.8809 | 0.9385 |
| No log | 4.9697 | 164 | 0.8805 | 0.5809 | 0.8805 | 0.9384 |
| No log | 5.0303 | 166 | 0.9081 | 0.5806 | 0.9081 | 0.9530 |
| No log | 5.0909 | 168 | 0.9285 | 0.5720 | 0.9285 | 0.9636 |
| No log | 5.1515 | 170 | 0.9290 | 0.5485 | 0.9290 | 0.9639 |
| No log | 5.2121 | 172 | 0.9397 | 0.5282 | 0.9397 | 0.9694 |
| No log | 5.2727 | 174 | 0.9128 | 0.5696 | 0.9128 | 0.9554 |
| No log | 5.3333 | 176 | 0.8574 | 0.5988 | 0.8574 | 0.9260 |
| No log | 5.3939 | 178 | 0.8370 | 0.6127 | 0.8370 | 0.9149 |
| No log | 5.4545 | 180 | 0.7928 | 0.6456 | 0.7928 | 0.8904 |
| No log | 5.5152 | 182 | 0.7858 | 0.5931 | 0.7858 | 0.8864 |
| No log | 5.5758 | 184 | 0.7816 | 0.5955 | 0.7816 | 0.8841 |
| No log | 5.6364 | 186 | 0.8038 | 0.6442 | 0.8038 | 0.8965 |
| No log | 5.6970 | 188 | 0.8496 | 0.5883 | 0.8496 | 0.9217 |
| No log | 5.7576 | 190 | 0.9202 | 0.5951 | 0.9202 | 0.9593 |
| No log | 5.8182 | 192 | 0.9820 | 0.5066 | 0.9820 | 0.9910 |
| No log | 5.8788 | 194 | 0.9991 | 0.5111 | 0.9991 | 0.9996 |
| No log | 5.9394 | 196 | 0.9572 | 0.5160 | 0.9572 | 0.9784 |
| No log | 6.0 | 198 | 0.8777 | 0.5967 | 0.8777 | 0.9369 |
| No log | 6.0606 | 200 | 0.8088 | 0.6030 | 0.8088 | 0.8993 |
| No log | 6.1212 | 202 | 0.7870 | 0.6039 | 0.7870 | 0.8871 |
| No log | 6.1818 | 204 | 0.7963 | 0.6254 | 0.7963 | 0.8924 |
| No log | 6.2424 | 206 | 0.8153 | 0.6332 | 0.8153 | 0.9029 |
| No log | 6.3030 | 208 | 0.7997 | 0.6388 | 0.7997 | 0.8943 |
| No log | 6.3636 | 210 | 0.8114 | 0.6147 | 0.8114 | 0.9008 |
| No log | 6.4242 | 212 | 0.8251 | 0.5942 | 0.8251 | 0.9084 |
| No log | 6.4848 | 214 | 0.8619 | 0.5667 | 0.8619 | 0.9284 |
| No log | 6.5455 | 216 | 0.9152 | 0.5836 | 0.9152 | 0.9567 |
| No log | 6.6061 | 218 | 0.9890 | 0.5247 | 0.9890 | 0.9945 |
| No log | 6.6667 | 220 | 1.0010 | 0.5280 | 1.0010 | 1.0005 |
| No log | 6.7273 | 222 | 0.9653 | 0.5627 | 0.9653 | 0.9825 |
| No log | 6.7879 | 224 | 0.9027 | 0.5804 | 0.9027 | 0.9501 |
| No log | 6.8485 | 226 | 0.8508 | 0.6195 | 0.8508 | 0.9224 |
| No log | 6.9091 | 228 | 0.8305 | 0.6074 | 0.8305 | 0.9113 |
| No log | 6.9697 | 230 | 0.8323 | 0.6079 | 0.8323 | 0.9123 |
| No log | 7.0303 | 232 | 0.8477 | 0.6200 | 0.8477 | 0.9207 |
| No log | 7.0909 | 234 | 0.8737 | 0.6041 | 0.8737 | 0.9347 |
| No log | 7.1515 | 236 | 0.8903 | 0.5974 | 0.8903 | 0.9436 |
| No log | 7.2121 | 238 | 0.8927 | 0.5890 | 0.8927 | 0.9448 |
| No log | 7.2727 | 240 | 0.9066 | 0.5863 | 0.9066 | 0.9521 |
| No log | 7.3333 | 242 | 0.9005 | 0.5913 | 0.9005 | 0.9490 |
| No log | 7.3939 | 244 | 0.8800 | 0.5945 | 0.8800 | 0.9381 |
| No log | 7.4545 | 246 | 0.8444 | 0.6179 | 0.8444 | 0.9189 |
| No log | 7.5152 | 248 | 0.8219 | 0.6308 | 0.8219 | 0.9066 |
| No log | 7.5758 | 250 | 0.8168 | 0.6022 | 0.8168 | 0.9038 |
| No log | 7.6364 | 252 | 0.8206 | 0.6229 | 0.8206 | 0.9059 |
| No log | 7.6970 | 254 | 0.8314 | 0.6484 | 0.8314 | 0.9118 |
| No log | 7.7576 | 256 | 0.8395 | 0.6484 | 0.8395 | 0.9162 |
| No log | 7.8182 | 258 | 0.8446 | 0.6484 | 0.8446 | 0.9190 |
| No log | 7.8788 | 260 | 0.8588 | 0.6215 | 0.8588 | 0.9267 |
| No log | 7.9394 | 262 | 0.8769 | 0.5807 | 0.8769 | 0.9364 |
| No log | 8.0 | 264 | 0.9094 | 0.6033 | 0.9094 | 0.9536 |
| No log | 8.0606 | 266 | 0.9459 | 0.5567 | 0.9459 | 0.9726 |
| No log | 8.1212 | 268 | 0.9566 | 0.5531 | 0.9566 | 0.9780 |
| No log | 8.1818 | 270 | 0.9505 | 0.5420 | 0.9505 | 0.9750 |
| No log | 8.2424 | 272 | 0.9352 | 0.5691 | 0.9352 | 0.9670 |
| No log | 8.3030 | 274 | 0.9254 | 0.5633 | 0.9254 | 0.9620 |
| No log | 8.3636 | 276 | 0.9189 | 0.5941 | 0.9189 | 0.9586 |
| No log | 8.4242 | 278 | 0.9245 | 0.5795 | 0.9245 | 0.9615 |
| No log | 8.4848 | 280 | 0.9227 | 0.5857 | 0.9227 | 0.9606 |
| No log | 8.5455 | 282 | 0.9098 | 0.5687 | 0.9098 | 0.9538 |
| No log | 8.6061 | 284 | 0.8887 | 0.5922 | 0.8887 | 0.9427 |
| No log | 8.6667 | 286 | 0.8659 | 0.5991 | 0.8659 | 0.9305 |
| No log | 8.7273 | 288 | 0.8474 | 0.6197 | 0.8474 | 0.9206 |
| No log | 8.7879 | 290 | 0.8287 | 0.6026 | 0.8287 | 0.9103 |
| No log | 8.8485 | 292 | 0.8181 | 0.6055 | 0.8181 | 0.9045 |
| No log | 8.9091 | 294 | 0.8169 | 0.6118 | 0.8169 | 0.9039 |
| No log | 8.9697 | 296 | 0.8228 | 0.6104 | 0.8228 | 0.9071 |
| No log | 9.0303 | 298 | 0.8322 | 0.6146 | 0.8322 | 0.9123 |
| No log | 9.0909 | 300 | 0.8420 | 0.6221 | 0.8420 | 0.9176 |
| No log | 9.1515 | 302 | 0.8470 | 0.6221 | 0.8470 | 0.9203 |
| No log | 9.2121 | 304 | 0.8544 | 0.6221 | 0.8544 | 0.9243 |
| No log | 9.2727 | 306 | 0.8534 | 0.6152 | 0.8534 | 0.9238 |
| No log | 9.3333 | 308 | 0.8538 | 0.6081 | 0.8538 | 0.9240 |
| No log | 9.3939 | 310 | 0.8575 | 0.6081 | 0.8575 | 0.9260 |
| No log | 9.4545 | 312 | 0.8677 | 0.5972 | 0.8677 | 0.9315 |
| No log | 9.5152 | 314 | 0.8739 | 0.5900 | 0.8739 | 0.9348 |
| No log | 9.5758 | 316 | 0.8739 | 0.5900 | 0.8739 | 0.9348 |
| No log | 9.6364 | 318 | 0.8762 | 0.5729 | 0.8762 | 0.9361 |
| No log | 9.6970 | 320 | 0.8788 | 0.5909 | 0.8788 | 0.9374 |
| No log | 9.7576 | 322 | 0.8816 | 0.5826 | 0.8816 | 0.9389 |
| No log | 9.8182 | 324 | 0.8855 | 0.5826 | 0.8855 | 0.9410 |
| No log | 9.8788 | 326 | 0.8881 | 0.5826 | 0.8881 | 0.9424 |
| No log | 9.9394 | 328 | 0.8897 | 0.5826 | 0.8897 | 0.9433 |
| No log | 10.0 | 330 | 0.8903 | 0.5826 | 0.8903 | 0.9436 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
|
lesso05/2b4e2518-0258-44bb-9aa9-c3edddb615cf | lesso05 | "2025-03-25T00:35:46Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"opt",
"axolotl",
"generated_from_trainer",
"base_model:facebook/opt-125m",
"base_model:adapter:facebook/opt-125m",
"license:other",
"region:us"
] | null | "2025-03-25T00:24:34Z" | ---
library_name: peft
license: other
base_model: facebook/opt-125m
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 2b4e2518-0258-44bb-9aa9-c3edddb615cf
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: facebook/opt-125m
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- b96898d6c468aa5f_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/b96898d6c468aa5f_train_data.json
type:
field_instruction: premise
field_output: hypothesis
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
do_eval: true
early_stopping_patience: 3
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 500
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 8
gradient_checkpointing: true
group_by_length: true
hub_model_id: lesso05/2b4e2518-0258-44bb-9aa9-c3edddb615cf
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.000205
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 50
lora_alpha: 128
lora_dropout: 0.15
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_steps: 500
micro_batch_size: 4
mlflow_experiment_name: /tmp/b96898d6c468aa5f_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 10
optimizer: adamw_torch_fused
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 500
saves_per_epoch: null
seed: 50
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: db4f829f-4443-4b54-86bb-2fe3b11767c7
wandb_project: 05a
wandb_run: your_name
wandb_runid: db4f829f-4443-4b54-86bb-2fe3b11767c7
warmup_steps: 100
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 2b4e2518-0258-44bb-9aa9-c3edddb615cf
This model is a fine-tuned version of [facebook/opt-125m](https://huggingface.co/facebook/opt-125m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1289
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000205
- train_batch_size: 4
- eval_batch_size: 4
- seed: 50
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: AdamW (torch fused) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0002 | 1 | 3.6508 |
| 17.7661 | 0.1000 | 500 | 2.1289 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
unslothai/studio | unslothai | "2024-07-07T16:53:17Z" | 7 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"feature-extraction",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | "2024-07-07T16:53:12Z" | ---
library_name: transformers
tags: []
---
|
huggingtweets/nickjr | huggingtweets | "2022-08-15T23:12:34Z" | 103 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2022-08-15T23:12:08Z" | ---
language: en
thumbnail: http://www.huggingtweets.com/nickjr/1660605150021/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1478805340212838413/YAJM_fei_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Nick Jr.</div>
<div style="text-align: center; font-size: 14px;">@nickjr</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Nick Jr..
| Data | Nick Jr. |
| --- | --- |
| Tweets downloaded | 3250 |
| Retweets | 52 |
| Short tweets | 751 |
| Tweets kept | 2447 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2dd9rqp5/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @nickjr's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1bjl6cbb) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1bjl6cbb/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/nickjr')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
sandspeare/llasm-decoder | sandspeare | "2024-04-01T01:50:55Z" | 1 | 1 | transformers | [
"transformers",
"llava",
"text-generation",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-04-01T01:22:04Z" | ---
license: mit
---
<h1 align="center">llasm: Naming Functions in Binaries by Fusing Encoder-only and Decoder-only LLMs</h1>
## About
llasm is a novel framework that fuses encoder-only and decoder-only LLMs, combining their capabilities to better comprehend assembly language and to generalize better at function naming.
This is the decoder of llasm. The uploaded model is a LoRA adapter; the base model is Vicuna-13B.
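A minimal loading sketch (an assumption, not the authors' documented recipe): the adapter is loaded with PEFT on top of a Vicuna-13B base, using "lmsys/vicuna-13b-v1.5" as a stand-in since the card does not pin the exact base checkpoint.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Hypothetical base checkpoint; the card only says "Vicuna-13B"
base_id = "lmsys/vicuna-13b-v1.5"
base = AutoModelForCausalLM.from_pretrained(base_id)
tokenizer = AutoTokenizer.from_pretrained(base_id)

# Apply the llasm decoder LoRA adapter on top of the base model
model = PeftModel.from_pretrained(base, "sandspeare/llasm-decoder")
``` |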
CaptainPollutionTV/CaptainPollution-OJ4 | CaptainPollutionTV | "2024-03-16T16:45:52Z" | 0 | 0 | null | [
"DreamBooth",
"OpenJourney4",
"license:cc",
"region:us"
] | null | "2024-03-10T10:54:31Z" | ---
license: cc
tags:
- DreamBooth
- OpenJourney4
---
Made by CaptainPollutionTV using the getimg.ai Dreambooth tool.
Details about the model:

- Base Model: Openjourney v4
- Instance prompt: captainpollution
- Class prompt: a man
- Learning Rate: 0.000001
- Learning Rate Scheduler: polynomial
- Training Steps: 10000 (200 steps warmup)
- Class images: 1000
- Model seed: 767052756
Sample images:











































































 |
LHRuig/clifffsx | LHRuig | "2025-03-25T18:27:24Z" | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] | text-to-image | "2025-03-25T18:27:06Z" | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: suit
output:
url: images/suit.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: clifffsx
---
# clifffsx
<Gallery />
## Model description
clifffsx lora
## Trigger words
You should use `clifffsx` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/LHRuig/clifffsx/tree/main) them in the Files & versions tab.
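A minimal generation sketch, assuming a diffusers FLUX.1-dev pipeline and that `load_lora_weights` can locate the safetensors file in this repo (otherwise pass `weight_name=...` explicitly):
```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16)
pipe.load_lora_weights("LHRuig/clifffsx")
pipe.to("cuda")

# Use the trigger word to activate the LoRA
image = pipe("clifffsx wearing a suit", num_inference_steps=28).images[0]
image.save("clifffsx.png")
```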
|
hrishikeshpai30/wavlm-libri-clean-100h-large | hrishikeshpai30 | "2023-05-15T01:38:59Z" | 114 | 1 | transformers | [
"transformers",
"pytorch",
"wavlm",
"automatic-speech-recognition",
"ahazeemi/librispeech10h",
"generated_from_trainer",
"en",
"dataset:ahazeemi/librispeech10h",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2023-04-12T12:48:38Z" | ---
tags:
- automatic-speech-recognition
- ahazeemi/librispeech10h
- generated_from_trainer
metrics:
- wer
model-index:
- name: wavlm-libri-clean-100h-large
results: []
datasets:
- ahazeemi/librispeech10h
language:
- en
pipeline_tag: automatic-speech-recognition
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wavlm-libri-clean-100h-large
This model is a fine-tuned version of [microsoft/wavlm-large](https://huggingface.co/microsoft/wavlm-large) on the AHAZEEMI/LIBRISPEECH10H - CLEAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0893
- Wer: 0.0655
## Model description
More information needed
## Intended uses & limitations
More information needed
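A minimal inference sketch (assuming a local 16 kHz mono file named "sample.wav", a hypothetical path):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="hrishikeshpai30/wavlm-libri-clean-100h-large")
print(asr("sample.wav")["text"])
```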
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.0144 | 0.42 | 300 | 0.0947 | 0.0749 |
| 0.1408 | 0.84 | 600 | 0.1347 | 0.1363 |
| 0.0396 | 1.26 | 900 | 0.1090 | 0.0935 |
| 0.0353 | 1.68 | 1200 | 0.1032 | 0.0832 |
| 0.051 | 2.1 | 1500 | 0.0969 | 0.0774 |
| 0.0254 | 2.52 | 1800 | 0.0930 | 0.0715 |
| 0.0579 | 2.94 | 2100 | 0.0894 | 0.0660 |
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.0+cpu
- Datasets 2.9.0
- Tokenizers 0.13.2 |
Arczisan/japanese-policeuniform | Arczisan | "2024-02-20T21:55:09Z" | 1 | 0 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"region:us"
] | text-to-image | "2024-02-20T21:55:04Z" | ---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: "UNICODE\0\0l\0t\0r\0a\0-\0d\0e\0t\0a\0i\0l\0e\0d\0,\0h\0i\0g\0h\0l\0y\0 \0d\0e\0t\0a\0i\0l\0e\0d\0,\0b\0e\0s\0t\0 \0q\0u\0a\0l\0i\0t\0y\0,\0m\0a\0s\0t\0e\0r\0p\0i\0e\0c\0e\0,\0i\0l\0l\0u\0s\0t\0r\0a\0t\0i\0o\0n\0,\0r\0e\0a\0l\0i\0s\0t\0i\0c\0,\0p\0h\0o\0t\0o\0r\0e\0a\0l\0i\0s\0t\0i\0c\0,\0"
output:
url: >-
images/106227-3617674872-ltra-detailed,highly detailed,best
quality,masterpiece,illustration,realistic,photorealistic,_1girl, solo,
japanese police unifo.jpeg
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: null
---
# Japanese Police Uniform
<Gallery />
## Download model
Weights for this model are available in Safetensors format.
[Download](/Arczisan/japanese-policeuniform/tree/main) them in the Files & versions tab.
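A minimal usage sketch, assuming a standard Stable Diffusion 1.5 pipeline and that `load_lora_weights` can find the safetensors file in this repo (otherwise pass `weight_name=...`):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)
pipe.load_lora_weights("Arczisan/japanese-policeuniform")
pipe.to("cuda")

# Prompt mirrors the widget example above
image = pipe("1girl, solo, japanese police uniform, ultra-detailed, best quality").images[0]
image.save("police_uniform.png")
```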
|
adami1/405M_TIES-merge_pile_300B_into_slimp_300B_from_pile_replay5_density-0.95 | adami1 | "2024-03-13T17:13:54Z" | 90 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"btherien/JOB-3150994_410M_it-132366_tr-pile-train_scratch",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-03-13T17:13:31Z" | ---
tags:
- merge
- mergekit
- lazymergekit
- btherien/JOB-3150994_410M_it-132366_tr-pile-train_scratch
license: apache-2.0
---
# 405M_TIES-merge_pile_300B_into_slimp_300B_from_pile_replay5_density-0.95
405M_TIES-merge_pile_300B_into_slimp_300B_from_pile_replay5_density-0.95 is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [btherien/JOB-3150994_410M_it-132366_tr-pile-train_scratch](https://huggingface.co/btherien/JOB-3150994_410M_it-132366_tr-pile-train_scratch)
## 🧩 Configuration
```yaml
models:
  - model: btherien/Model_-410M_It_-132366_Tr_-slim-pajama-300B-replay5_finetune
    # no parameters necessary for base model
  - model: btherien/JOB-3150994_410M_it-132366_tr-pile-train_scratch
    parameters:
      density: 0.95
      weight: 1.0
merge_method: ties
base_model: btherien/Model_-410M_It_-132366_Tr_-slim-pajama-300B-replay5_finetune
parameters:
  normalize: true
dtype: float16
``` |
od2025/rho | od2025 | "2025-03-11T13:03:05Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"parler_tts",
"text2text-generation",
"text-to-speech",
"annotation",
"en",
"dataset:parler-tts/mls_eng",
"dataset:parler-tts/libritts_r_filtered",
"dataset:parler-tts/libritts-r-filtered-speaker-descriptions",
"dataset:parler-tts/mls-eng-speaker-descriptions",
"arxiv:2402.01912",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-to-speech | "2025-03-11T13:01:21Z" | ---
library_name: transformers
tags:
- text-to-speech
- annotation
license: apache-2.0
language:
- en
pipeline_tag: text-to-speech
inference: false
datasets:
- parler-tts/mls_eng
- parler-tts/libritts_r_filtered
- parler-tts/libritts-r-filtered-speaker-descriptions
- parler-tts/mls-eng-speaker-descriptions
---
<img src="https://huggingface.co/datasets/parler-tts/images/resolve/main/thumbnail.png" alt="Parler Logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
# Parler-TTS Mini v1
<a target="_blank" href="https://huggingface.co/spaces/parler-tts/parler_tts">
<img src="https://huggingface.co/datasets/huggingface/badges/raw/main/open-in-hf-spaces-sm.svg" alt="Open in HuggingFace"/>
</a>
**Parler-TTS Mini v1** is a lightweight text-to-speech (TTS) model, trained on 45K hours of audio data, that can generate high-quality, natural sounding speech with features that can be controlled using a simple text prompt (e.g. gender, background noise, speaking rate, pitch and reverberation).
With [Parler-TTS Large v1](https://huggingface.co/parler-tts/parler-tts-large-v1), this is the second set of models published as part of the [Parler-TTS](https://github.com/huggingface/parler-tts) project, which aims to provide the community with TTS training resources and dataset pre-processing code.
## 📖 Quick Index
* [👨💻 Installation](#👨💻-installation)
* [🎲 Using a random voice](#🎲-random-voice)
* [🎯 Using a specific speaker](#🎯-using-a-specific-speaker)
* [Motivation](#motivation)
* [Optimizing inference](https://github.com/huggingface/parler-tts/blob/main/INFERENCE.md)
## 🛠️ Usage
### 👨💻 Installation
Using Parler-TTS is as simple as "bonjour". Simply install the library once:
```sh
pip install git+https://github.com/huggingface/parler-tts.git
```
### 🎲 Random voice
**Parler-TTS** has been trained to generate speech with features that can be controlled with a simple text prompt, for example:
```py
import torch
from parler_tts import ParlerTTSForConditionalGeneration
from transformers import AutoTokenizer
import soundfile as sf
device = "cuda:0" if torch.cuda.is_available() else "cpu"
model = ParlerTTSForConditionalGeneration.from_pretrained("parler-tts/parler-tts-mini-v1").to(device)
tokenizer = AutoTokenizer.from_pretrained("parler-tts/parler-tts-mini-v1")
prompt = "Hey, how are you doing today?"
description = "A female speaker delivers a slightly expressive and animated speech with a moderate speed and pitch. The recording is of very high quality, with the speaker's voice sounding clear and very close up."
input_ids = tokenizer(description, return_tensors="pt").input_ids.to(device)
prompt_input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(device)
generation = model.generate(input_ids=input_ids, prompt_input_ids=prompt_input_ids)
audio_arr = generation.cpu().numpy().squeeze()
sf.write("parler_tts_out.wav", audio_arr, model.config.sampling_rate)
```
### 🎯 Using a specific speaker
To ensure speaker consistency across generations, this checkpoint was also trained on 34 speakers, characterized by name (e.g. Jon, Lea, Gary, Jenna, Mike, Laura).
To take advantage of this, simply adapt your text description to specify which speaker to use: `Jon's voice is monotone yet slightly fast in delivery, with a very close recording that almost has no background noise.`
```py
import torch
from parler_tts import ParlerTTSForConditionalGeneration
from transformers import AutoTokenizer
import soundfile as sf
device = "cuda:0" if torch.cuda.is_available() else "cpu"
model = ParlerTTSForConditionalGeneration.from_pretrained("parler-tts/parler-tts-mini-v1").to(device)
tokenizer = AutoTokenizer.from_pretrained("parler-tts/parler-tts-mini-v1")
prompt = "Hey, how are you doing today?"
description = "Jon's voice is monotone yet slightly fast in delivery, with a very close recording that almost has no background noise."
input_ids = tokenizer(description, return_tensors="pt").input_ids.to(device)
prompt_input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(device)
generation = model.generate(input_ids=input_ids, prompt_input_ids=prompt_input_ids)
audio_arr = generation.cpu().numpy().squeeze()
sf.write("parler_tts_out.wav", audio_arr, model.config.sampling_rate)
```
**Tips**:
* We've set up an [inference guide](https://github.com/huggingface/parler-tts/blob/main/INFERENCE.md) to make generation faster. Think SDPA, torch.compile, batching and streaming!
* Include the term "very clear audio" to generate the highest quality audio, and "very noisy audio" for high levels of background noise
* Punctuation can be used to control the prosody of the generations, e.g. use commas to add small breaks in speech
* The remaining speech features (gender, speaking rate, pitch and reverberation) can be controlled directly through the prompt
## Motivation
Parler-TTS is a reproduction of work from the paper [Natural language guidance of high-fidelity text-to-speech with synthetic annotations](https://www.text-description-to-speech.com) by Dan Lyth and Simon King, from Stability AI and Edinburgh University respectively.
Unlike other TTS models, Parler-TTS is a **fully open-source** release. All of the datasets, pre-processing and training code, and weights are released publicly under a permissive license, enabling the community to build on our work and develop their own powerful TTS models.
Parler-TTS was released alongside:
* [The Parler-TTS repository](https://github.com/huggingface/parler-tts) - you can train and fine-tune your own version of the model.
* [The Data-Speech repository](https://github.com/huggingface/dataspeech) - a suite of utility scripts designed to annotate speech datasets.
* [The Parler-TTS organization](https://huggingface.co/parler-tts) - where you can find the annotated datasets as well as the future checkpoints.
## Citation
If you found this repository useful, please consider citing this work and also the original Stability AI paper:
```
@misc{lacombe-etal-2024-parler-tts,
author = {Yoach Lacombe and Vaibhav Srivastav and Sanchit Gandhi},
title = {Parler-TTS},
year = {2024},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/huggingface/parler-tts}}
}
```
```
@misc{lyth2024natural,
title={Natural language guidance of high-fidelity text-to-speech with synthetic annotations},
author={Dan Lyth and Simon King},
year={2024},
eprint={2402.01912},
archivePrefix={arXiv},
primaryClass={cs.SD}
}
```
## License
This model is permissively licensed under the Apache 2.0 license. |
mmtg/wav2vec2-xlsr-cv-16-1 | mmtg | "2024-06-09T20:34:34Z" | 105 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice_11_0",
"base_model:facebook/wav2vec2-large-xlsr-53",
"base_model:finetune:facebook/wav2vec2-large-xlsr-53",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-06-09T16:01:16Z" | ---
license: apache-2.0
base_model: facebook/wav2vec2-large-xlsr-53
tags:
- generated_from_trainer
datasets:
- common_voice_11_0
metrics:
- wer
model-index:
- name: wav2vec2-xlsr-cv-16-1
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: common_voice_11_0
type: common_voice_11_0
config: nan-tw
split: test
args: nan-tw
metrics:
- name: Wer
type: wer
value: 1.0781078107810782
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xlsr-cv-16-1
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice_11_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6173
- Wer: 1.0781
## Model description
More information needed
## Intended uses & limitations
More information needed
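A minimal inference sketch with explicit CTC decoding, the usual setup for fine-tuned wav2vec2 checkpoints ("clip.wav" is a hypothetical 16 kHz mono file):
```python
import torch
import soundfile as sf
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_id = "mmtg/wav2vec2-xlsr-cv-16-1"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

speech, sr = sf.read("clip.wav")  # expects 16 kHz audio
inputs = processor(speech, sampling_rate=sr, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(processor.batch_decode(torch.argmax(logits, dim=-1))[0])
```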
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 40
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 2.9349 | 16.0 | 400 | 2.6259 | 1.0055 |
| 0.9973 | 32.0 | 800 | 2.6173 | 1.0781 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.1.2
- Datasets 2.19.2
- Tokenizers 0.19.1
|
NexesQuants/Llama_3.x_70b_SmarTricks_0.41_R1-iMat-CQ-GGUF | NexesQuants | "2025-03-30T16:32:12Z" | 0 | 0 | null | [
"gguf",
"base_model:Nexesenex/Llama_3.x_70b_SmarTricks_0.41_R1",
"base_model:quantized:Nexesenex/Llama_3.x_70b_SmarTricks_0.41_R1",
"license:llama3.3",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-03-30T15:38:37Z" | <!DOCTYPE html>
<html class="" lang="en">
<head>
<meta charset="utf-8" />
<meta
name="viewport"
content="width=device-width, initial-scale=1.0, user-scalable=no"
/>
<meta
name="description"
content="We're on a journey to advance and democratize artificial intelligence through open source and open science."
/>
<meta property="fb:app_id" content="1321688464574422" />
<meta name="twitter:card" content="summary_large_image" />
<meta name="twitter:site" content="@huggingface" />
<meta
property="og:title"
content="Hugging Face - The AI community building the future."
/>
<meta property="og:type" content="website" />
<title>Hugging Face - The AI community building the future.</title>
<style>
body {
margin: 0;
}
main {
background-color: white;
min-height: 100vh;
padding: 7rem 1rem 8rem 1rem;
text-align: center;
font-family: Source Sans Pro, ui-sans-serif, system-ui, -apple-system,
BlinkMacSystemFont, Segoe UI, Roboto, Helvetica Neue, Arial, Noto Sans,
sans-serif, Apple Color Emoji, Segoe UI Emoji, Segoe UI Symbol,
Noto Color Emoji;
}
img {
width: 6rem;
height: 6rem;
margin: 0 auto 1rem;
}
h1 {
font-size: 3.75rem;
line-height: 1;
color: rgba(31, 41, 55, 1);
font-weight: 700;
box-sizing: border-box;
margin: 0 auto;
}
p, a {
color: rgba(107, 114, 128, 1);
font-size: 1.125rem;
line-height: 1.75rem;
max-width: 28rem;
box-sizing: border-box;
margin: 0 auto;
}
.dark main {
background-color: rgb(11, 15, 25);
}
.dark h1 {
color: rgb(209, 213, 219);
}
.dark p, .dark a {
color: rgb(156, 163, 175);
}
</style>
<script>
// On page load or when changing themes, best to add inline in `head` to avoid FOUC
const key = "_tb_global_settings";
let theme = window.matchMedia("(prefers-color-scheme: dark)").matches
? "dark"
: "light";
try {
const storageTheme = JSON.parse(window.localStorage.getItem(key)).theme;
if (storageTheme) {
theme = storageTheme === "dark" ? "dark" : "light";
}
} catch (e) {}
if (theme === "dark") {
document.documentElement.classList.add("dark");
} else {
document.documentElement.classList.remove("dark");
}
</script>
</head>
<body>
<main>
<img
src="https://cdn-media.huggingface.co/assets/huggingface_logo.svg"
alt=""
/>
<div>
<h1>429</h1>
<p>We had to rate limit you. If you think it's an error, send us <a href="mailto:[email protected]">an email</a></p>
</div>
</main>
</body>
</html> |
DevQuasar/mistralai.Mixtral-8x7B-v0.1-GGUF | DevQuasar | "2025-03-05T19:55:37Z" | 14 | 0 | null | [
"gguf",
"text-generation",
"base_model:mistralai/Mixtral-8x7B-v0.1",
"base_model:quantized:mistralai/Mixtral-8x7B-v0.1",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-03-04T02:57:40Z" | ---
base_model:
- mistralai/Mixtral-8x7B-v0.1
pipeline_tag: text-generation
---
[<img src="https://raw.githubusercontent.com/csabakecskemeti/devquasar/main/dq_logo_black-transparent.png" width="200"/>](https://devquasar.com)
'Make knowledge free for everyone'
Quantized version of: [mistralai/Mixtral-8x7B-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1)
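A minimal usage sketch with llama-cpp-python; the quant filename pattern is an assumption — check the repo's file list for the exact GGUF name:
```python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="DevQuasar/mistralai.Mixtral-8x7B-v0.1-GGUF",
    filename="*Q4_K_M*.gguf",  # hypothetical quant; pick one from the repo
)
print(llm("The capital of France is", max_tokens=16)["choices"][0]["text"])
```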
<a href='https://ko-fi.com/L4L416YX7C' target='_blank'><img height='36' style='border:0px;height:36px;' src='https://storage.ko-fi.com/cdn/kofi6.png?v=6' border='0' alt='Buy Me a Coffee at ko-fi.com' /></a>
|
mhla/prO-1 | mhla | "2025-03-03T06:50:54Z" | 0 | 0 | null | [
"safetensors",
"license:apache-2.0",
"region:us"
] | null | "2025-01-30T01:10:19Z" | ---
license: apache-2.0
---
|
jonatasgrosman/exp_w2v2t_ar_wav2vec2_s364 | jonatasgrosman | "2022-07-10T15:30:16Z" | 3 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"ar",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2022-07-10T15:29:50Z" | ---
language:
- ar
license: apache-2.0
tags:
- automatic-speech-recognition
- ar
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_ar_wav2vec2_s364
Fine-tuned [facebook/wav2vec2-large-lv60](https://huggingface.co/facebook/wav2vec2-large-lv60) for speech recognition using the train split of [Common Voice 7.0 (ar)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
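A minimal transcription sketch with HuggingSound ("audio1.wav" is a hypothetical 16 kHz input file):
```python
from huggingsound import SpeechRecognitionModel

model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2t_ar_wav2vec2_s364")
transcriptions = model.transcribe(["audio1.wav"])
print(transcriptions[0]["transcription"])
```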
|
kostiantynk-out/4056a69b-f256-40fc-998c-bc8ccb6ab4c3 | kostiantynk-out | "2025-02-17T12:48:20Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:oopsung/llama2-7b-koNqa-test-v1",
"base_model:adapter:oopsung/llama2-7b-koNqa-test-v1",
"region:us"
] | null | "2025-02-17T08:06:22Z" | ---
library_name: peft
base_model: oopsung/llama2-7b-koNqa-test-v1
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 4056a69b-f256-40fc-998c-bc8ccb6ab4c3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 4056a69b-f256-40fc-998c-bc8ccb6ab4c3
This model is a fine-tuned version of [oopsung/llama2-7b-koNqa-test-v1](https://huggingface.co/oopsung/llama2-7b-koNqa-test-v1) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2309
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
seongil-dn/bge-m3-kor-retrieval-451949-bs64-book-50 | seongil-dn | "2024-12-14T09:15:59Z" | 6 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"xlm-roberta",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:451949",
"loss:CachedMultipleNegativesRankingLoss",
"arxiv:1908.10084",
"arxiv:2101.06983",
"base_model:BAAI/bge-m3",
"base_model:finetune:BAAI/bge-m3",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | "2024-12-14T09:14:36Z" | ---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:451949
- loss:CachedMultipleNegativesRankingLoss
base_model: BAAI/bge-m3
widget:
- source_sentence: 일본 재무성은 배우자의 연간 수입 상한액에 대해 얼마와 130만 엔 안을 제시했어?
sentences:
- 일본 정부는 저출산 대책을 강화할 재원 확보를 위해 기업의 육아지원 출연금을 증액하도록 경제계에 요구할 방침이다. 만약 이 방침이 실현되면
기업의 부담금은 연 최대 1,000억 엔 규모로 확대되고, 확대된 재원은 맞벌이 가구나 다자녀 가구의 육아지원에 사용될 계획이다. 이번 조치는
아베 신조 총리가 주도하는 ‘1억 총 활약 사회’ 실현을 위한 핵심 정책으로 활용될 계획이지만 경제계의 반발도 고려하지 않을 수 없는 상황이다.
경단련과 경제동우회에는 이미 정부 방침이 전달되었는데, 아베 총리는 2015년 9월 말에 발표한 아베노믹스의 2단계 방편인 ‘새로운 세 개의
화살’에서 현재 출산율인 1.4를 2020년대 중반까지 1.8로 상향시킨다는 목표를 밝힌 바 있다. 일본 정부가 기업에 요구하는 것은 연금특별회계의
아동 및 육아지원 계정에 대한 출연금 증액인데, 정부 안에 따르면 현재 월급과 상여금의 0.15%인 기업출연금은 2016년부터는 0.20%로
인상될 전망이다.
- 일본 재무성은 지금까지 배우자의 연간수입 상한액에 대해서 ‘150만 엔 안’과 ‘130만 엔 안’의 두 가지 안을 제시하였는데, 자민당의 세제조사회에서는
‘150만 엔 안’이 효과가 높을 것이라는 의견이 대다수를 차지했다. ‘130만 엔 안’의 경우 배우자의 연간수입이 130만 엔을 넘으면 연금과
의료보험의 사회보험료 부담이 발생하는 ‘130만 엔의 벽’과 중복되어, 수입을 그 이하로 줄이기 위해 근무시간을 줄일 가능성이 높아질 것으로
판단하였다. 자민당의 세제조사회의 노다 최고 고문은 23일 BS후지방송에 방송된 프로그램에서 소득세가 공제되는 배우자의 연간수입 상한액을 150만
엔으로 인상하는 것이 바람직하다는 입장을 표명하였다. 공명당 간부도 같은 날 ‘150만 엔 안’으로 인상하는 것을 우선적으로 검토하고 있다고
밝혔다. 일본 재무성은 소득세가 공제되는 배우자의 연간수입 상한액을 150만 엔으로 인상할 경우, 360만 가구가 감세 혜택을 받게 되는 데에
비해, 연간수입 상한액을 130만 엔으로 인상할 경우 감세 혜택을 받는 가구는 260만 가구에 머물 것으로 추계하였다.
- 지방자치단체의 행정에 인권개념을 도입하기 위해서는 우선 지속가능한 제도를 구축하는 것이 매우 중요하다. 제도에는 조례, 인력 또는 조직 등이
포함된다. 지방자치단체 인권제도의 도입은 2002년 울산광역시에서 ‘인권교육 및 인권보호활동 추진에 관한 조례’ 제정운동을 시작으로 지방자치단체
인권조례 운동이 모색되기 시작하였으며 2007년에는 경남 진주에서도 학계 연구자들과 시민단체 활동가들이 인권조례 제정활동을 벌이기 시작했다.
두 번의 실패 끝에 결국 2009년 5월 광주광역시에서 전국 최초로 ‘광주광역시 민주・인권・평화도시 육성 조례’를 제정하면서 인권조례 제정활동이
본격화된다. 2012년 국가인권위원회는 지역사회에서의 인권 보장 및 증진을 위하여 각 지자체의 장에게 인권 기본조례의 제・개정을 권고하며 인권제도의
도입을 급격히 확산시키는 견인차 역할을 담당한다. 2019년 현재 총 104곳의 지방자치단체(광역자치단체 17곳, 기초자치단체 87곳)에서
제정되었다.
- source_sentence: 경영방침을 자긍심을 심는 콘텐츠의 제작으로 정하여 실행해 나가는 방송사는 어디니?
sentences:
- 여기서 ‘사생활의 비밀’이란 사생활과 관련된 사사로운 자신만의 영역이 사회공동체의 일반적인 생활규범의 범위 내에서 본인의 의사에 반해서 타인에게
알려지지 않도록 할 수 있는 권리를 말한다. 구체적으로는 (i) 본인의 의사에 반하여 감시, 도청, 비밀녹음, 비밀촬영 등에 의하여 사생활의
비밀을 탐지하거나 사생활의 평온을 침입하여서는 아니 된다는 것, (ii) 사적 사항의 공개는 개인의 자율에 일임되어야 하며, 난처한 사사(私事)를
무단으로 공개하여서는 아니 된다는 것, (iii) 허위의 사실을 공표하거나 사실을 과장 왜곡되게 공표하여 특정인을 진실과 다르게 인식하도록
하여서는 아니 된다는 것, (iv) 성명, 초상, 경력 등이 사실과 일치하더라도 영리의 목적으로 사용하여서는 아니 된다는 것 등을 그 내용으로
한다. 또 ‘사생활의 자유’란 사생활을 자유롭게 형성해 나가고, 그 설계 및 내용에 대해서 외부로부터의 간섭을 받지 않을 권리를 말한다. 이에는
대체로 결혼, 피임, 낙태, 자녀의 양육, 교육, 성생활, 두발, 의복형태, 취미생활 등의 자유가 포함된다.
- 제가 이 자리에서 여러 번 강조합니다만 방송의 품질을 높이고 품격 있는 방송을 하도록 우리의 정책의지가 담겨 있어야 한다고 봅니다. 그래서
가뜩이나 광고시장이 위축되고 있기 때문에 모든 방송사들이 시청률에 매달릴 수밖에 없는 실정입니다. 그러면 시청률은 그저 이목을 끌고 검증되지
않는 자극적인 언사를 쓰는 방송프로그램에 더 시청률이 몰릴 수밖에 없습니다. 그런 유혹을 방송들이 철저하게 절제를 하면서 방송의 품격을 지켜
나갈 수 있도록 우리가 그렇게 유도해야 하는 것입니다. 특히 출연진을 잘 검증하는 장치가 과연 방송사에서 자율적으로 잘 마련되어 있고, 또
그것이 잘 이루어지고 있는지를 철저하게 점검하는 부분들을 반드시 방송사들이, 사업자들이 깨닫고 자정하는 노력이 있어야 할 것으로 봅니다. 그래서
그런 부분에 대한 우리의 정책의지가 발휘될 수 있도록 다시 한 번 주문합니다. 이상입니다.
- 하지만 공정성 과 객관성 확보와 오보·막말 방지에 대한 우리 채널A의 의지는 그 어느 때보다 확고합니다. 지난해부터 그런 것들에 대해서 저뿐만
아니라 많은 조직원들이 좀 더 강하게 문제제기를 하고 있고 고쳐 나가고 노력하고 있고, 그래서 제도적 완비에도 최선을 다하려고 노력하고 있습니다.
채널A는 매년 3가지 경영방침을 정해서 이를 우선적으로 실천해 나가고 있습니다. 지난해 3대 경영방침 중 첫 번째가 퀄리티 저널리즘의 구현이었습니다.
그리고 또 올해에는 역시 첫 번째가 채널A의 자긍심을 심는 콘텐츠를 만들자는 의미로 A 프라이드 콘텐츠의 확산을 우리 3대 경영방침으로 삼고
있습니다. 또 새롭게 설정한 채널A의 4대 비전 가운데에서 제일 첫 번째가 품격을 담는 채널A이고 두 번째가 공정하고 건전한 여론을 담는 채널A입니다.
이 모든 것들이 우리 채널A의 콘텐츠의 공정성과 객관성을 최대한 담고 오보와 막말을 모두 덜어내자는 의지의 표현이고 또 반드시 실천해 나가야
되는 채널A의 숙제이자 목표입니다. 제도적으로도 보완과 개선을 계속 해 나가고 있습니다.
- source_sentence: 1999년에 구축한 국방조달관리정보체계를 토대로 하여 중앙조달 전자입찰체계를 개발운용하고 있는 기관은 어디야?
sentences:
- 국방부조달본부는 1995년‘전자거래 시범기관’으로 지정된 이후, 1999년 국방조달관리정보체계(DPAMIS)를 구축하고 이를 기반으로 중앙조달
전자입찰체계를 개발운용하고 있으며, 부대조달을 포함한 전군 단일 전자입찰체계를 개발중에 있다. 국방조달행정의 편의성, 투명성 및 대민 서비스
개선요구가 증대되는 등 전자상거래의 필요성이 제기됨에 따라 2000년 11월 중앙조달 전자입찰체계를 구축완료하고, 2001년 4월부터 소량·소액
품목을 대상으로 부분적으로 전자입찰을 실시하였으며, 2002년부터는 비밀사업과 다자간 협상사업 및 법적으로 전자상거래가 제한되는 외자분야를
제외한 전 품목을 대상으로 전자입찰을 시행하고 있다. 또한, 2002년부터는 2003년도 국방조달분야 전자입찰의 전면시행을 목표로 중앙조달
전자입찰체계 확대·보완사업을 추진하고 있는 바, 이에는 부대조달 전자입찰체계 개발을 비롯하여 조달원 통합관리, 원가자료 획득 및 산정기능,
제증명 신청 및 발급 등 민원 서비스체계가 포함되어 있다.
- 조달청은 정부ㆍ공공기관에서 필요한 물자와 용역 등을 제때 적정한 가격으로 구매ㆍ공급할 수 있게 하는 국가종합전자조달시스템(나라장터, www.g2b.go.kr)을
구축ㆍ운영하고 있다. 이 서비스로 수요기관ㆍ조달업체 등록, 입찰, 계약, 검사, 대금 지급 등 정부ㆍ공공조달 전 과정을 인터넷으로 처리하고
확인할 수 있다. 국가종합전자조달 서비스로 입찰, 계약, 지급 등 조달 업무 전 과정에 대한 온라인 처리, 진행 상황의 실시간 모니터링이 가능해졌으며,
2003년 서비스 개시 이후 전자입찰을 통한 거래 실적이 매년 증가하는 추세다. 2017년에는 국가종합조달서비스의 안정적인 운영과 전문성을
확보하기 위한 전자조달센터를 지정해RFID 등 8개 시스템의 운영ㆍ유지보수 사업에 대한 전자조달지원센터 지정과 이관을 추진했다. 조달통계에
관한 빅데이터 분석 시스템을 구축해 공공조달업무 효율화를 지원하고, 향상된 보안성으로 빠른 실행을 지원하는 안전입찰 2.0을 도입함으로써 이용자
만족도 및 보안성을 높이고 있다.
- 북한 핵전략에 대한 연구는 어떤 효과를 갖는가. 우선 북한의 핵전략을 파악함으로써 북한의 핵위협에 대해 보다 효과적인 군사적 대응이 가능하게
된다. 현재 우리는 북한의 핵전략에 대해 지극히 초보적인 지식만을 갖고 있으며, 따라서 이에 대한 대응책도 유효하거나 충분치 않을 가능성이
높다. 북한의 핵전략을 파악한다는 것은 북한이 핵무기의 수량을 얼마나 증대할 것인지, 핵무기의 종류와 핵무기를 어떤 상황에서 사용할 것인지,
핵무기를 어떤 용도로 사용할 것인지를 이해하는 것이다. 이렇게 북한의 핵전략을 이해할 때, 북한의 핵사용 또는 핵사용 위협을 성공적으로 억제할
가능성도 높아질 것이다. 또한 북한의 핵전략에 대한 이해는 우리의 대북정책 또는 북핵정책에 큰 영향을 미칠 것이다. 사실 현재 북핵에 대한
국내의 논의는 대부분 북핵을 어떻게 정치‧외교적으로 제거할 것인지에 대한 비핵화문제에 집중된다. 학계에서 북한의 핵무기 사용과 사용위협에 대한
군사안보적 대응에 대한 연구와 논의는 거의 전무하거나, 매우 초보적인 단계에 머물고 있다고 해도 과언이 아니다.
- source_sentence: 1960년부터 1970년대 사회주의권은 물론 비사회주의권의 개발도상국을 지원하며 제3세계 리더 역할을 한 국가는
어디니?
sentences:
- 1974년 포르투갈에서부터 시작한 민주화의 제3의 물결은 남유럽과 중남미를 거쳐 아시아, 동유럽, 아프리카 등으로 20여 년 동안 확산되었다.
1980년대 말 냉전의 해체는 이러한 민주화의 물결이 붕괴한 사회주의 국가들에게도 영향을 미쳐 자본주의를 기반으로 한 민주주의와 경쟁할 정치체제는
역사상 더 이상 존재하지 않을 것임을 선포하게 했다. 하지만 새로운 세기에 접어들어 모두를 의아하게 만든 현실은 여전히 지금도 전 세계 절반
이상의 국가들이 민주주의가 아닌 권위주의 체제를 유지하고 있는 것이었다. 권위주의 체제의 붕괴는 당연히 민주주의 체제의 수립으로 이어질 것이라는
낙관적 사고에 커다란 의구심을 던지게 만든 현실이자, 기존 권위주의 체제가 붕괴하고 새로이 등장하는 체제가 또 다른 유형의 권위주의일 수 있음을
깨닫게 해준 현실이었다. 대표적으로 사회주의권 붕괴 이후 동유럽에 등장한 정치체제의 다수는 구 공산당 간부들에 의해 지배되는 새로운 유형의
권위주의 체제이거나 벨라루스, 우즈베키스탄, 아제르바이잔처럼 사회주의 국가 시절보다 더 폭압적인 독재체제였다.
- 정부는 성장동력 확충과 사회문제 해결에 필요한 국가 전략기술 분야를 집중적으로 지원하기 위해 「국가전략프로젝트」 사업을 신규로 추진할 계획이다.
동 사업은 「성장동력 분야」와 「삶의 질 및 국민행복 분야」의 9개 프로젝트로 구성된다. 성장동력 분야는 자율주행차 ․ 스마트시티 ․ 가상증강현실
․ 경량소재 ․ 인공지능 등 5개 프로젝트가, 삶의 질 및 국민행복 분야는 미세먼지 ․ 탄소자원화 ․ 정밀의료 ․ 바이오 신약 등 4개 프로젝트가
포함된다. 미래창조과학부는 국가전략프로젝트 사업의 총사업비를 약 1조 6,000억원으로 예상하고 있다. 2017년 예산안은 300억원이며,
프로젝트별 예산은 7개 부처의 예산안에 편성되어 있다. 9개 프로젝트 중 예비타당성조사가 진행 중인 5개 프로젝트의 예산은 세부시행계획 수립비용으로
편성하였다.
- '1960~70년대 중국은 제3세계의 리더로서 특히 아프리카 신생독립국을 포함한 사회주의권은 물론 비사회주의권 개발도상국을 지원했다. 1960년
최초로 기니에 무이자 차관을 제공했으며 1960년대 후반 탄자니아와 잠비아를 연결하는 철로를 건설하는 등 제3세계 원조를 위한 물자와 인력을
제공했다, 쿠웨이트, 사우디아라비아, 아랍에미리트 등의 중동 이슬람 국가들은 1970년대 이후부터 중동 국가 결속을 위한 지역 차원의 지원을
시작했다. 쿠웨이트, 사우디아라비아, 아랍에미리트 등의 중동 이슬람 국가들은 1970년대 이후부터 중동 국가 결속을 위한 지역 차원의 지원을
시작했다. 1961년 쿠웨이트는 아랍경제개발펀드(The Kuwait Fund for Arab Economic Development)를 설립했으며,
1970년 중반 이후 이슬람개발은행(IsDB: Islamic Development Bank)과 아랍경제개발은행(BADEA: Arab Bank
for Economic Development in Africa) 등을 운영했다.'
- source_sentence: 실제적 발달 수준과 잠재적 발단 수준 사이를 역동적인 공간으로 이야기하는 영역은 뭐야?
sentences:
- 세 번째는 비공식적 및 공식적 지원 관점으로 아동기를 역동적인 관계의 복합체로 인식하며, 역동적인 상호관계는 만족스럽고 성공적인 아동기에 필수요소이다.
이러한 상호관계의 범위는 아동 양육과 보호의 주 제공자인 부모에서부터 아동 권리를 최종적으로 보장하는 역할을 하는 국가에까지 이른다. 아동에게
필수적인 지원과 서비스는 가족의 사회 관계망 및 가족과 지역사회를 통한 비공식적 지원이나 제 3섹터 및 영리 부문 및 국가와 기관들을 통한
공식적 지원으로 전달된다. 비공식적 및 공식적 지원은 아동이 필요로 하고 혜택을 받을 수 있는 지원과 서비스를 가능하게 하는 전달자 역할을
한다. 이러한 ‘사회적 자본’을 지원하는 것이 국가 아동 전략의 핵심 주제이다. 이렇게 다양하고 서로 상호작용하는 지원의 원천으로부터 아동은
앞서 말한 9개의 발달 영역에서 성장하기 위한 도움을 받는다. 모든 아동은 좋은 교육과 양질의 의료 서비스에 대한 접근권 등 기본적인 지원과
서비스를 필요로 한다. 일부 아동은 빈곤이나 장애, 소수 인종 및 문화 집단, 양육과 보호의 필요성, 비행 및 자해 행동 등을 이유로 추가적인
지원과 서비스를 필요로 한다.
- '하브루타에 임하는 학생들의 태도는 다양하다. 기본적인 학습에 대한 참여율에 따라 상당한 차이를 보인다. 앞에서 언급한 인재시교에 다다를 때까지
기다려주고 관심가져주며, 칭찬과 극려의 말로 지지 할 수 있어야 한다. 비고츠키(Vygotsky)는 근접 발달영역(the zone of proximal
development: ZPD)을“독자적으로 문제를 해결함으로써 결정되는 실제적 발달 수준과 성인의 안내나 보다 능력 있는 또래들과 협동하여
문제를 해결함으로써 결정되는 잠재적 발달 수준 간의 거리”로 규정한다. 근접발달 영역(the zone of proximal development)은
실제적 발달 수준(actualdevelopmental level)과 잠재적 발달수준(potential developmental level)사이를
역동적인 공간으로 이야기 한다. 즉 하브루타는 소속한 학습자(친구) 상호작용을 통하여 잠재적 발달수준(potential developmental
level)까지 도달하는 것이다. 이러한 작용에 꼭 필요한 것 중 하나는 학습자가 수업에 임하는 태도이다. 즉 학습자의 동기부여를 어떻게 불러일으킬
수 있느냐가 관권이다.'
- KTR이 영국의 CE인증기관인 HPi Verification Service Ltd(이하 HPiVS) 와 협력을 강화하기로 했다. 최형기 KTR
원장과 Mr. Alasdair Lewis Reay HPiVS 원장은 유럽으로 수출하는 압력플랜트 설비, 용접, 산업용 기계류에 대한 CE인증업무
협력을 위해 11월 25일 과천청사 5층 아리랑홀에서 협약을 체결했다. KTR은 국내 압력장비 및 기계류 인증 관련 업계의 인증 수요가 증가함에
따라, 현지 기관과의 업무협력을 강화해 인증사업 체계를 확립하기 위해 협약을 체결했다. 협약 체결 후 HPiVS는 KTR 과천청사 내 주요
시험실을 견학하며 연구원 현황을 파악하고 KTR과의 사업 협력 방안에 대해 논의하는 시간을 가졌다. HPiVS는 유럽위원회로부터 인정받은 영국의
유럽 인증기관으로서 플랜트 압력설비, 산업용 기계류, 레저용 장비, 단순압력장비 4개 제품군의 CE인증 권한을 지니고 있다.
pipeline_tag: sentence-similarity
library_name: sentence-transformers
---
# SentenceTransformer based on BAAI/bge-m3
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-m3](https://huggingface.co/BAAI/bge-m3). It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [BAAI/bge-m3](https://huggingface.co/BAAI/bge-m3) <!-- at revision 5617a9f61b028005a4858fdac845db406aefb181 -->
- **Maximum Sequence Length:** 1024 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 1024, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("seongil-dn/bge-m3-kor-retrieval-451949-bs64-book-50")
# Run inference
sentences = [
'실제적 발달 수준과 잠재적 발단 수준 사이를 역동적인 공간으로 이야기하는 영역은 뭐야?',
'하브루타에 임하는 학생들의 태도는 다양하다. 기본적인 학습에 대한 참여율에 따라 상당한 차이를 보인다. 앞에서 언급한 인재시교에 다다를 때까지 기다려주고 관심가져주며, 칭찬과 극려의 말로 지지 할 수 있어야 한다. 비고츠키(Vygotsky)는 근접 발달영역(the zone of proximal development: ZPD)을“독자적으로 문제를 해결함으로써 결정되는 실제적 발달 수준과 성인의 안내나 보다 능력 있는 또래들과 협동하여 문제를 해결함으로써 결정되는 잠재적 발달 수준 간의 거리”로 규정한다. 근접발달 영역(the zone of proximal development)은 실제적 발달 수준(actualdevelopmental level)과 잠재적 발달수준(potential developmental level)사이를 역동적인 공간으로 이야기 한다. 즉 하브루타는 소속한 학습자(친구) 상호작용을 통하여 잠재적 발달수준(potential developmental level)까지 도달하는 것이다. 이러한 작용에 꼭 필요한 것 중 하나는 학습자가 수업에 임하는 태도이다. 즉 학습자의 동기부여를 어떻게 불러일으킬 수 있느냐가 관권이다.',
'세 번째는 비공식적 및 공식적 지원 관점으로 아동기를 역동적인 관계의 복합체로 인식하며, 역동적인 상호관계는 만족스럽고 성공적인 아동기에 필수요소이다. 이러한 상호관계의 범위는 아동 양육과 보호의 주 제공자인 부모에서부터 아동 권리를 최종적으로 보장하는 역할을 하는 국가에까지 이른다. 아동에게 필수적인 지원과 서비스는 가족의 사회 관계망 및 가족과 지역사회를 통한 비공식적 지원이나 제 3섹터 및 영리 부문 및 국가와 기관들을 통한 공식적 지원으로 전달된다. 비공식적 및 공식적 지원은 아동이 필요로 하고 혜택을 받을 수 있는 지원과 서비스를 가능하게 하는 전달자 역할을 한다. 이러한 ‘사회적 자본’을 지원하는 것이 국가 아동 전략의 핵심 주제이다. 이렇게 다양하고 서로 상호작용하는 지원의 원천으로부터 아동은 앞서 말한 9개의 발달 영역에서 성장하기 위한 도움을 받는다. 모든 아동은 좋은 교육과 양질의 의료 서비스에 대한 접근권 등 기본적인 지원과 서비스를 필요로 한다. 일부 아동은 빈곤이나 장애, 소수 인종 및 문화 집단, 양육과 보호의 필요성, 비행 및 자해 행동 등을 이유로 추가적인 지원과 서비스를 필요로 한다.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Hyperparameters
#### Non-Default Hyperparameters
- `per_device_train_batch_size`: 64
- `learning_rate`: 3e-05
- `num_train_epochs`: 1
- `max_steps`: 50
- `warmup_ratio`: 0.05
- `fp16`: True
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 8
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 3e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: 50
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.05
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: True
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `eval_use_gather_object`: False
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss |
|:------:|:----:|:-------------:|
| 0.0019 | 1 | 0.9318 |
| 0.0037 | 2 | 0.9071 |
| 0.0056 | 3 | 0.9399 |
| 0.0075 | 4 | 0.8293 |
| 0.0094 | 5 | 0.7001 |
| 0.0112 | 6 | 0.6959 |
| 0.0131 | 7 | 0.5847 |
| 0.0150 | 8 | 0.4753 |
| 0.0169 | 9 | 0.5343 |
| 0.0187 | 10 | 0.4751 |
| 0.0206 | 11 | 0.4502 |
| 0.0225 | 12 | 0.4661 |
| 0.0243 | 13 | 0.4421 |
| 0.0262 | 14 | 0.4721 |
| 0.0281 | 15 | 0.4191 |
| 0.0300 | 16 | 0.4317 |
| 0.0318 | 17 | 0.4206 |
| 0.0337 | 18 | 0.3953 |
| 0.0356 | 19 | 0.3775 |
| 0.0375 | 20 | 0.307 |
| 0.0393 | 21 | 0.3553 |
| 0.0412 | 22 | 0.3592 |
| 0.0431 | 23 | 0.341 |
| 0.0449 | 24 | 0.4565 |
| 0.0468 | 25 | 0.3349 |
| 0.0487 | 26 | 0.3669 |
| 0.0506 | 27 | 0.35 |
| 0.0524 | 28 | 0.348 |
| 0.0543 | 29 | 0.3434 |
| 0.0562 | 30 | 0.3778 |
| 0.0581 | 31 | 0.3134 |
| 0.0599 | 32 | 0.3695 |
| 0.0618 | 33 | 0.3719 |
| 0.0637 | 34 | 0.3299 |
| 0.0655 | 35 | 0.3336 |
| 0.0674 | 36 | 0.3491 |
| 0.0693 | 37 | 0.3609 |
| 0.0712 | 38 | 0.2784 |
| 0.0730 | 39 | 0.3002 |
| 0.0749 | 40 | 0.3753 |
| 0.0768 | 41 | 0.26 |
| 0.0787 | 42 | 0.2543 |
| 0.0805 | 43 | 0.274 |
| 0.0824 | 44 | 0.2681 |
| 0.0843 | 45 | 0.2977 |
| 0.0861 | 46 | 0.281 |
| 0.0880 | 47 | 0.2937 |
| 0.0899 | 48 | 0.2997 |
| 0.0918 | 49 | 0.3303 |
| 0.0936 | 50 | 0.2493 |
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.2.1
- Transformers: 4.44.2
- PyTorch: 2.3.1+cu121
- Accelerate: 1.1.1
- Datasets: 2.21.0
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### CachedMultipleNegativesRankingLoss
```bibtex
@misc{gao2021scaling,
title={Scaling Deep Contrastive Learning Batch Size under Memory Limited Setup},
author={Luyu Gao and Yunyi Zhang and Jiawei Han and Jamie Callan},
year={2021},
eprint={2101.06983},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
OpenVINO/RedPajama-INCITE-Chat-3B-v1-int8-ov | OpenVINO | "2024-11-05T10:36:30Z" | 5 | 0 | transformers | [
"transformers",
"openvino",
"gpt_neox",
"text-generation",
"base_model:togethercomputer/RedPajama-INCITE-Chat-3B-v1",
"base_model:quantized:togethercomputer/RedPajama-INCITE-Chat-3B-v1",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-07-03T10:05:09Z" | ---
license: apache-2.0
license_link: https://choosealicense.com/licenses/apache-2.0/
base_model:
- togethercomputer/RedPajama-INCITE-Chat-3B-v1
base_model_relation: quantized
---
# RedPajama-INCITE-Chat-3B-v1-int8-ov
* Model creator: [Togethercomputer](https://huggingface.co/togethercomputer)
* Original model: [RedPajama-INCITE-Chat-3B-v1](https://huggingface.co/togethercomputer/RedPajama-INCITE-Chat-3B-v1)
## Description
This is the [RedPajama-INCITE-Chat-3B-v1](https://huggingface.co/togethercomputer/RedPajama-INCITE-Chat-3B-v1) model converted to the [OpenVINO™ IR](https://docs.openvino.ai/2024/documentation/openvino-ir-format.html) (Intermediate Representation) format, with weights compressed to INT8 by [NNCF](https://github.com/openvinotoolkit/nncf).
## Quantization Parameters
Weight compression was performed using `nncf.compress_weights` with the following parameters:
* mode: **int8_asym**
* ratio: **1**
For more information on quantization, check the [OpenVINO model optimization guide](https://docs.openvino.ai/2024/openvino-workflow/model-optimization-guide/weight-compression.html).
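As a rough illustration, the weight-compression step with these parameters might look as follows; this is a sketch only, and the IR file names are assumptions:

```python
# Sketch: INT8 asymmetric weight compression with NNCF.
# File names are assumptions; point them at your exported OpenVINO IR.
import openvino as ov
import nncf

core = ov.Core()
model = core.read_model("openvino_model.xml")

compressed = nncf.compress_weights(
    model,
    mode=nncf.CompressWeightsMode.INT8_ASYM,  # mode: int8_asym
    ratio=1.0,                                # ratio: 1
)
ov.save_model(compressed, "openvino_model_int8.xml")
```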
## Compatibility
The provided OpenVINO™ IR model is compatible with:
* OpenVINO version 2024.4.0 and higher
* Optimum Intel 1.20.0 and higher
## Running Model Inference
1. Install packages required for using [Optimum Intel](https://huggingface.co/docs/optimum/intel/index) integration with the OpenVINO backend:
```
pip install optimum[openvino]
```
2. Run model inference:
```python
from transformers import AutoTokenizer
from optimum.intel.openvino import OVModelForCausalLM
model_id = "OpenVINO/RedPajama-INCITE-Chat-3B-v1-int8-ov"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = OVModelForCausalLM.from_pretrained(model_id)
inputs = tokenizer("What is OpenVINO?", return_tensors="pt")
outputs = model.generate(**inputs, max_length=200)
text = tokenizer.batch_decode(outputs)[0]
print(text)
```
For more examples and possible optimizations, refer to the [OpenVINO Large Language Model Inference Guide](https://docs.openvino.ai/2024/learn-openvino/llm_inference_guide.html).
## Limitations
Check the [original model card](https://huggingface.co/togethercomputer/RedPajama-INCITE-Chat-3B-v1) for limitations.
## Legal information
The original model is distributed under [apache-2.0](https://choosealicense.com/licenses/apache-2.0/) license. More details can be found in [original model card](https://huggingface.co/togethercomputer/RedPajama-INCITE-Chat-3B-v1).
## Disclaimer
Intel is committed to respecting human rights and avoiding causing or contributing to adverse impacts on human rights. See [Intel’s Global Human Rights Principles](https://www.intel.com/content/dam/www/central-libraries/us/en/documents/policy-human-rights.pdf). Intel’s products and software are intended only to be used in applications that do not cause or contribute to adverse impacts on human rights.
|
KaiMilo/distilhubert-finetuned-gtzan | KaiMilo | "2023-09-23T19:33:58Z" | 163 | 0 | transformers | [
"transformers",
"pytorch",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"base_model:ntu-spml/distilhubert",
"base_model:finetune:ntu-spml/distilhubert",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | audio-classification | "2023-09-20T02:20:59Z" | ---
license: apache-2.0
base_model: ntu-spml/distilhubert
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: distilhubert-finetuned-gtzan
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: GTZAN
type: marsyas/gtzan
config: all
split: train
args: all
metrics:
- name: Accuracy
type: accuracy
value: 0.83
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilhubert-finetuned-gtzan
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6371
- Accuracy: 0.83
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.0249 | 1.0 | 113 | 1.8360 | 0.43 |
| 1.3024 | 2.0 | 226 | 1.2179 | 0.61 |
| 0.9782 | 3.0 | 339 | 0.9286 | 0.74 |
| 0.8263 | 4.0 | 452 | 0.8332 | 0.76 |
| 0.7515 | 5.0 | 565 | 0.6887 | 0.82 |
| 0.4177 | 6.0 | 678 | 0.6159 | 0.83 |
| 0.4822 | 7.0 | 791 | 0.5960 | 0.84 |
| 0.2312 | 8.0 | 904 | 0.5989 | 0.85 |
| 0.3513 | 9.0 | 1017 | 0.6024 | 0.82 |
| 0.1244 | 10.0 | 1130 | 0.6371 | 0.83 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
LHRuig/hotasinsx | LHRuig | "2025-01-16T07:38:49Z" | 11 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] | text-to-image | "2025-01-16T07:38:17Z" | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: suit
output:
url: images/suit.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: hotasinsx
---
# hotasinsx
<Gallery />
## Model description
hotasinsx lora
## Trigger words
You should use `hotasinsx` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/LHRuig/hotasinsx/tree/main) them in the Files & versions tab.
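A minimal inference sketch with diffusers is shown below; the sampling parameters are illustrative assumptions, not values tuned for this LoRA:

```python
# Sketch: apply the LoRA on top of FLUX.1-dev with diffusers.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("LHRuig/hotasinsx")

image = pipe("hotasinsx suit", num_inference_steps=28, guidance_scale=3.5).images[0]
image.save("hotasinsx.png")
```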
|
angelinux/q-FrozenLake-v1-4x4-noSlippery | angelinux | "2022-07-01T15:29:15Z" | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2022-07-01T15:27:17Z" | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
# `load_from_hub` and `evaluate_agent` are helpers defined in the Hugging Face Deep RL course notebook.
model = load_from_hub(repo_id="angelinux/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
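For reference, evaluating a tabular Q-learning agent reduces to greedy action selection from the Q-table. A minimal sketch, assuming `model["qtable"]` is an `[n_states, n_actions]` array:

```python
# Sketch: greedy policy over a tabular Q-function.
import numpy as np

def greedy_policy(qtable, state):
    # Exploit: pick the action with the highest state-action value.
    return int(np.argmax(qtable[state]))
```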
|
amal94/dqn-SpaceInvadersNoFrameskip-v4 | amal94 | "2023-02-03T01:28:51Z" | 2 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2023-02-03T01:28:05Z" | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 400.00 +/- 112.25
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga amal94 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga amal94 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga amal94
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 10000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
kgprince8209/ghibli | kgprince8209 | "2025-03-30T18:36:05Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2025-03-30T18:36:05Z" | |
lisabdunlap/mistral_7b_4bit_instruct_fake_markdown | lisabdunlap | "2025-04-09T20:15:20Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:unsloth/mistral-7b-instruct-v0.3-bnb-4bit",
"base_model:finetune:unsloth/mistral-7b-instruct-v0.3-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-04-09T20:13:23Z" | |
SpiderSteeped/wet | SpiderSteeped | "2024-12-18T00:45:35Z" | 63 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] | text-to-image | "2024-12-18T00:43:37Z" | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
output:
url: images/_0e382f71-0669-42cf-bfcd-1a34e60f0f41.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: peed
---
# wet
<Gallery />
## Trigger words
You should use `peed` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/SpiderSteeped/wet/tree/main) them in the Files & versions tab.
|
Xu-Ouyang/pythia-410m-deduped-int8-step14000-GPTQ-wikitext2 | Xu-Ouyang | "2024-08-18T22:33:51Z" | 76 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"gptq",
"region:us"
] | text-generation | "2024-08-18T22:33:25Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
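As a sketch only (not from the model authors): a GPTQ checkpoint like this one can usually be loaded through transformers, assuming a GPTQ-capable backend (e.g. auto-gptq or optimum) is installed:

```python
# Sketch: loading the GPTQ-quantized checkpoint with transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Xu-Ouyang/pythia-410m-deduped-int8-step14000-GPTQ-wikitext2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("The quick brown fox", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))
```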
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
morturr/flan-t5-base-headlines-text-classification-2024-06-25-seed-28 | morturr | "2024-06-25T09:43:13Z" | 6 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text-classification",
"generated_from_trainer",
"base_model:google/flan-t5-base",
"base_model:finetune:google/flan-t5-base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-06-25T09:21:53Z" | ---
license: apache-2.0
base_model: google/flan-t5-base
tags:
- generated_from_trainer
model-index:
- name: flan-t5-base-headlines-text-classification-2024-06-25-seed-28
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-t5-base-headlines-text-classification-2024-06-25-seed-28
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 32
- seed: 28
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.39.2
- Pytorch 2.3.1+cu121
- Datasets 2.10.1
- Tokenizers 0.15.2
|
Clevyby/Karen_TheEditor_V2_STRICT_Mistral_7B-Q5_K_S-GGUF | Clevyby | "2024-05-02T10:39:35Z" | 1 | 0 | null | [
"gguf",
"llm",
"llama",
"spellcheck",
"grammar",
"llama-cpp",
"gguf-my-repo",
"license:llama2",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-04-30T09:36:54Z" | ---
license: llama2
tags:
- llm
- llama
- spellcheck
- grammar
- llama-cpp
- gguf-my-repo
---
# Clevyby/Karen_TheEditor_V2_STRICT_Mistral_7B-Q5_K_S-GGUF
This model was converted to GGUF format from [`FPHam/Karen_TheEditor_V2_STRICT_Mistral_7B`](https://huggingface.co/FPHam/Karen_TheEditor_V2_STRICT_Mistral_7B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/FPHam/Karen_TheEditor_V2_STRICT_Mistral_7B) for more details on the model.
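A minimal command-line sketch with llama.cpp follows; the GGUF filename is an assumption, so check the Files tab for the actual name (recent llama.cpp builds ship the binary as `llama-cli` rather than `main`):

```bash
# Filename is an assumption; adjust to the .gguf in this repo's Files tab.
./main -m karen_theeditor_v2_strict_mistral_7b.Q5_K_S.gguf \
  -p "Edit the following text for spelling and grammar mistakes: ..." -n 256
```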
|
facebook/mms-tts-gqr | facebook | "2023-09-01T13:29:12Z" | 107 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"vits",
"text-to-audio",
"mms",
"text-to-speech",
"arxiv:2305.13516",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | text-to-speech | "2023-09-01T13:28:39Z" |
---
license: cc-by-nc-4.0
tags:
- mms
- vits
pipeline_tag: text-to-speech
---
# Massively Multilingual Speech (MMS): Gor Text-to-Speech
This repository contains the **Gor (gqr)** language text-to-speech (TTS) model checkpoint.
This model is part of Facebook's [Massively Multilingual Speech](https://arxiv.org/abs/2305.13516) project, aiming to
provide speech technology across a diverse range of languages. You can find more details about the supported languages
and their ISO 639-3 codes in the [MMS Language Coverage Overview](https://dl.fbaipublicfiles.com/mms/misc/language_coverage_mms.html),
and see all MMS-TTS checkpoints on the Hugging Face Hub: [facebook/mms-tts](https://huggingface.co/models?sort=trending&search=facebook%2Fmms-tts).
MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards.
## Model Details
VITS (**V**ariational **I**nference with adversarial learning for end-to-end **T**ext-to-**S**peech) is an end-to-end
speech synthesis model that predicts a speech waveform conditional on an input text sequence. It is a conditional variational
autoencoder (VAE) comprised of a posterior encoder, decoder, and conditional prior.
A set of spectrogram-based acoustic features are predicted by the flow-based module, which is formed of a Transformer-based
text encoder and multiple coupling layers. The spectrogram is decoded using a stack of transposed convolutional layers,
much in the same style as the HiFi-GAN vocoder. Motivated by the one-to-many nature of the TTS problem, where the same text
input can be spoken in multiple ways, the model also includes a stochastic duration predictor, which allows the model to
synthesise speech with different rhythms from the same input text.
The model is trained end-to-end with a combination of losses derived from variational lower bound and adversarial training.
To improve the expressiveness of the model, normalizing flows are applied to the conditional prior distribution. During
inference, the text encodings are up-sampled based on the duration prediction module, and then mapped into the
waveform using a cascade of the flow module and HiFi-GAN decoder. Due to the stochastic nature of the duration predictor,
the model is non-deterministic, and thus requires a fixed seed to generate the same speech waveform.
For the MMS project, a separate VITS checkpoint is trained for each language.
## Usage
MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards. To use this checkpoint,
first install the latest version of the library:
```
pip install --upgrade transformers accelerate
```
Then, run inference with the following code-snippet:
```python
from transformers import VitsModel, AutoTokenizer
import torch
model = VitsModel.from_pretrained("facebook/mms-tts-gqr")
tokenizer = AutoTokenizer.from_pretrained("facebook/mms-tts-gqr")
text = "some example text in the Gor language"
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    output = model(**inputs).waveform
```
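Because the duration predictor is stochastic, outputs differ between runs; fixing a seed beforehand makes the waveform reproducible (a minimal sketch):

```python
import torch

torch.manual_seed(555)  # any fixed seed yields a deterministic waveform
```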
The resulting waveform can be saved as a `.wav` file:
```python
import scipy
scipy.io.wavfile.write("techno.wav", rate=model.config.sampling_rate, data=output[0].cpu().numpy())  # drop the batch dim for a mono WAV
```
Or displayed in a Jupyter Notebook / Google Colab:
```python
from IPython.display import Audio
Audio(output, rate=model.config.sampling_rate)
```
## BibTex citation
This model was developed by Vineel Pratap et al. from Meta AI. If you use the model, consider citing the MMS paper:
```
@article{pratap2023mms,
title={Scaling Speech Technology to 1,000+ Languages},
author={Vineel Pratap and Andros Tjandra and Bowen Shi and Paden Tomasello and Arun Babu and Sayani Kundu and Ali Elkahky and Zhaoheng Ni and Apoorv Vyas and Maryam Fazel-Zarandi and Alexei Baevski and Yossi Adi and Xiaohui Zhang and Wei-Ning Hsu and Alexis Conneau and Michael Auli},
journal={arXiv},
year={2023}
}
```
## License
The model is licensed as **CC-BY-NC 4.0**.
|
Techbro/Rahina | Techbro | "2024-01-24T17:59:33Z" | 0 | 0 | null | [
"license:bigscience-openrail-m",
"region:us"
] | null | "2024-01-24T17:59:30Z" | ---
license: bigscience-openrail-m
---
|
psimm/llama-3-8B-semeval2014 | psimm | "2024-07-14T13:57:36Z" | 3 | 1 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"en",
"base_model:NousResearch/Meta-Llama-3-8B",
"base_model:adapter:NousResearch/Meta-Llama-3-8B",
"license:other",
"region:us"
] | null | "2024-06-15T10:46:43Z" | ---
license: other
library_name: peft
tags:
- axolotl
- generated_from_trainer
base_model: NousResearch/Meta-Llama-3-8B
model-index:
- name: llama-3-8B-semeval2014
results: []
language:
- en
metrics:
- f1
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.0`
```yaml
base_model: NousResearch/Meta-Llama-3-8B
load_in_8bit: false
load_in_4bit: false
strict: false
datasets:
- path: semeval2014_train.jsonl
ds_type: json
type:
# JSONL file contains instruction, input, output fields per line.
# This gets mapped to the equivalent axolotl tags.
field_instruction: instruction
field_input: input
field_output: output
# Format is used by axolotl to generate the prompt.
format: |-
[INST] {input} [/INST]
tokens: # add new control tokens from the dataset to the model
- "[INST]"
- "[/INST]"
dataset_prepared_path:
val_set_size: 0.05
output_dir: ./lora-out
sequence_len: 4096
sample_packing: false
eval_sample_packing: false
pad_to_sequence_len: false
adapter: lora
lora_model_dir:
lora_r: 16
lora_alpha: 32
lora_dropout: 0.05
lora_target_linear: true
lora_fan_in_fan_out:
lora_modules_to_save: # required when adding new tokens to LLaMA/Mistral
- embed_tokens
- lm_head
wandb_project: absa-semeval2014
wandb_entity: psimm
wandb_log_model:
wandb_name: llama-3-8B-semeval2014
hub_model_id: psimm/llama-3-8B-semeval2014
gradient_accumulation_steps: 1
micro_batch_size: 32
num_epochs: 4
optimizer: adamw_torch
lr_scheduler: cosine
learning_rate: 0.0001
train_on_inputs: false
group_by_length: false
bf16: true
fp16: false
tf32: false
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
warmup_steps: 10
eval_steps: 0.05
eval_table_size:
eval_table_max_new_tokens: 128
save_steps:
debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
pad_token: <|end_of_text|>
```
</details><br>
# llama-3-8B-semeval2014
This model is a fine-tuned version of [NousResearch/Meta-Llama-3-8B](https://huggingface.co/NousResearch/Meta-Llama-3-8B) on the SemEval2014 Task 4 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0695
- F1 Score: 82.13
For more details, see my [article](https://simmering.dev/open-absa).
## Intended uses & limitations
Aspect-based sentiment analysis in English. Pass it review sentences wrapped in tags, like this: [INST]The cheeseburger was tasty but the fries were soggy.[/INST]
## How to run
This adapter requires that two new tokens are added to the tokenizer. The tokens are: "[INST]" and "[/INST]". Also, the base model's embedding layer size has to be increased by 2.
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer
extra_tokens = ["[INST]", "[/INST]"]
base_model = "NousResearch/Meta-Llama-3-8B"
base_model = AutoModelForCausalLM.from_pretrained("NousResearch/Meta-Llama-3-8B")
base_model.resize_token_embeddings(base_model.config.vocab_size + len(extra_tokens))
tokenizer = AutoTokenizer.from_pretrained("NousResearch/Meta-Llama-3-8B")
tokenizer.add_special_tokens({"additional_special_tokens": extra_tokens})
model = PeftModel.from_pretrained(base_model, "psimm/llama-3-8B-semeval2014")
input_text = "[INST]The food was tasty[/INST]"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids
gen_tokens = model.generate(
    input_ids,
    max_length=256,
    do_sample=True,  # required for `temperature` to take effect
    temperature=0.01,
)
# Remove the input tokens
output_tokens = gen_tokens[:, input_ids.shape[1] :]
print(tokenizer.batch_decode(output_tokens, skip_special_tokens=True))
```
## Training and evaluation data
SemEval 2014 Task 4 reviews.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 64
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.5408 | 0.0112 | 1 | 2.2742 |
| 0.1159 | 0.2022 | 18 | 0.1026 |
| 0.1028 | 0.4045 | 36 | 0.0762 |
| 0.0813 | 0.6067 | 54 | 0.0709 |
| 0.0908 | 0.8090 | 72 | 0.0665 |
| 0.0431 | 1.0112 | 90 | 0.0639 |
| 0.0275 | 1.2135 | 108 | 0.0663 |
| 0.0224 | 1.4157 | 126 | 0.0659 |
| 0.0349 | 1.6180 | 144 | 0.0637 |
| 0.0281 | 1.8202 | 162 | 0.0589 |
| 0.0125 | 2.0225 | 180 | 0.0592 |
| 0.0088 | 2.2247 | 198 | 0.0682 |
| 0.0076 | 2.4270 | 216 | 0.0666 |
| 0.01 | 2.6292 | 234 | 0.0654 |
| 0.0131 | 2.8315 | 252 | 0.0704 |
| 0.0075 | 3.0337 | 270 | 0.0679 |
| 0.002 | 3.2360 | 288 | 0.0688 |
| 0.0029 | 3.4382 | 306 | 0.0692 |
| 0.0009 | 3.6404 | 324 | 0.0694 |
| 0.0064 | 3.8427 | 342 | 0.0695 |
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.2
- Pytorch 2.2.2+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1 |
pat119/sentiment-classifier-distilgpt2 | pat119 | "2024-12-14T00:29:18Z" | 152 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:distilbert/distilgpt2",
"base_model:finetune:distilbert/distilgpt2",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-12-13T23:04:17Z" | ---
library_name: transformers
license: apache-2.0
base_model: distilbert/distilgpt2
tags:
- generated_from_trainer
model-index:
- name: sentiment-classifier-distilgpt2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sentiment-classifier-distilgpt2
This model is a fine-tuned version of [distilbert/distilgpt2](https://huggingface.co/distilbert/distilgpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3068
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.48.0.dev0
- Pytorch 2.2.2
- Datasets 3.2.0
- Tokenizers 0.21.0
|
akaistormherald/ToxicMist-v0.2-7B-DPO | akaistormherald | "2024-03-05T22:17:52Z" | 5 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"dpo",
"conversational",
"en",
"dataset:unalignment/toxic-dpo-v0.2",
"base_model:unsloth/zephyr-sft-bnb-4bit",
"base_model:finetune:unsloth/zephyr-sft-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-03-05T20:40:33Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
- dpo
base_model: unsloth/zephyr-sft-bnb-4bit
datasets:
- unalignment/toxic-dpo-v0.2
---
# Uploaded model
- **Developed by:** akaistormherald
- **License:** apache-2.0
- **Finetuned from model :** unsloth/zephyr-sft-bnb-4bit
Mistral7b + SFT + 4bit DPO training with unalignment/toxic-dpo-v0.2 == ToxicMist? ☣🌫 |