Column summary: `modelId`: string, length 5 to 137; `author`: string, length 2 to 42; `last_modified`: date, 2020-02-15 11:33:14 to 2025-04-01 06:27:12; `downloads`: int64, 0 to 223M; `likes`: int64, 0 to 11.7k; `library_name`: string, 405 classes; `tags`: sequence, length 1 to 4.05k; `pipeline_tag`: string, 54 classes; `createdAt`: date, 2022-03-02 23:29:04 to 2025-04-01 06:27:11; `card`: string, length 11 to 1.01M.

| modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
|---|---|---|---|---|---|---|---|---|---|
lmazzon70/blurr_IMDB_distilbert_classification | lmazzon70 | "2022-06-02T14:30:46Z" | 0 | 0 | fastai | [
"fastai",
"region:us"
] | null | "2022-06-02T14:30:34Z" | ---
tags:
- fastai
---
# Amazing!
🥳 Congratulations on hosting your fastai model on the Hugging Face Hub!
# Some next steps
1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))!
2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)).
3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)!
Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card.
---
# Model card
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
|
Shahradmz/llama8b_SEND_1B-alpaca-5 | Shahradmz | "2025-02-13T05:54:31Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-02-13T05:53:11Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
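In the meantime, given the repo's `llama` / `text-generation` tags, a minimal sketch using the standard `transformers` generation API (untested against this checkpoint; the dtype, device settings, and prompt are assumptions):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Shahradmz/llama8b_SEND_1B-alpaca-5"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" requires the `accelerate` package
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

prompt = "Write a short note about safe model deployment."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```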
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
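For reference when filling in these fields, the calculator's core arithmetic is a simple product; a back-of-envelope sketch in which every number is a hypothetical placeholder, not a measurement for this model:
```python
# emissions ~ GPU-hours x average power draw (kW) x grid carbon intensity (kg CO2eq / kWh)
gpu_hours = 100   # hypothetical total accelerator hours
power_kw = 0.3    # hypothetical average draw per accelerator (~300 W)
intensity = 0.4   # hypothetical kg CO2eq per kWh; varies by compute region
print(f"~{gpu_hours * power_kw * intensity:.0f} kg CO2eq")  # ~12 kg CO2eq
```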
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mzwing/HassakuXL-Lightning-GGUF | mzwing | "2024-04-30T09:50:22Z" | 28 | 0 | null | [
"gguf",
"region:us"
] | null | "2024-03-22T08:57:59Z" | ---
{}
---
Encountered some bugs; trying hard to fix them...
Until they are fixed, it is recommended **not** to use this model. |
Ricky080811/CompliAI_FullModel3 | Ricky080811 | "2024-04-23T20:43:35Z" | 5 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"unsloth",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-04-23T19:40:21Z" | ---
library_name: transformers
tags:
- unsloth
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
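In the meantime, given the `text-generation` pipeline tag, a minimal sketch with the high-level `transformers` pipeline (untested; the prompt and generation settings are illustrative):
```python
from transformers import pipeline

# device_map="auto" requires the `accelerate` package
generator = pipeline("text-generation", model="Ricky080811/CompliAI_FullModel3", device_map="auto")
result = generator("List three key obligations under a typical data-protection policy:", max_new_tokens=128)
print(result[0]["generated_text"])
```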
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
DavideEttori/colab_asr_model | DavideEttori | "2024-04-07T10:06:47Z" | 81 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:minds14",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-04-07T10:04:18Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- minds14
model-index:
- name: colab_asr_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# colab_asr_model
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the minds14 dataset.
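A minimal inference sketch via the `automatic-speech-recognition` pipeline (untested; the audio path is a placeholder, and 16 kHz mono input is assumed, as is usual for wav2vec2):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="DavideEttori/colab_asr_model")
print(asr("sample.wav")["text"])  # "sample.wav" is a placeholder path
```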
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 20
### Training results
### Framework versions
- Transformers 4.30.0
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.13.3
|
LHRuig/joaquimsx | LHRuig | "2025-02-18T09:42:51Z" | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] | text-to-image | "2025-02-18T09:42:14Z" | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: suit
output:
url: images/suit.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: joaquimsx
---
# joaquimsx
<Gallery />
## Model description
joaquimsx lora
## Trigger words
You should use `joaquimsx` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/LHRuig/joaquimsx/tree/main) them in the Files & versions tab.
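A minimal `diffusers` sketch for using the LoRA (a sketch, not a tested recipe; access to the gated FLUX.1-dev base model, a CUDA device, and the sampling settings are assumptions):
```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16).to("cuda")
pipe.load_lora_weights("LHRuig/joaquimsx")  # attach this repo's LoRA weights
image = pipe("joaquimsx wearing a suit", num_inference_steps=25).images[0]
image.save("joaquimsx_suit.png")
```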
|
ardaspear/03a618c1-5add-42e4-a59b-f8fdd2d2a5b0 | ardaspear | "2025-01-30T22:10:38Z" | 5 | 0 | peft | [
"peft",
"safetensors",
"gemma",
"axolotl",
"generated_from_trainer",
"base_model:fxmarty/tiny-random-GemmaForCausalLM",
"base_model:adapter:fxmarty/tiny-random-GemmaForCausalLM",
"license:mit",
"region:us"
] | null | "2025-01-30T22:08:38Z" | ---
library_name: peft
license: mit
base_model: fxmarty/tiny-random-GemmaForCausalLM
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 03a618c1-5add-42e4-a59b-f8fdd2d2a5b0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: fxmarty/tiny-random-GemmaForCausalLM
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- d6f71c1fcd1498e2_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/d6f71c1fcd1498e2_train_data.json
type:
field_input: topics
field_instruction: content
field_output: code_prompt
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: ardaspear/03a618c1-5add-42e4-a59b-f8fdd2d2a5b0
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: 0
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_steps: 100
micro_batch_size: 8
mlflow_experiment_name: /tmp/d6f71c1fcd1498e2_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: techspear-hub
wandb_mode: online
wandb_name: 90bcdc9c-e5a4-4355-ad92-58c883587eb0
wandb_project: Gradients-On-Five
wandb_run: your_name
wandb_runid: 90bcdc9c-e5a4-4355-ad92-58c883587eb0
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 03a618c1-5add-42e4-a59b-f8fdd2d2a5b0
This model is a fine-tuned version of [fxmarty/tiny-random-GemmaForCausalLM](https://huggingface.co/fxmarty/tiny-random-GemmaForCausalLM) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 12.4290
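Since this repository holds a LoRA adapter rather than full weights, inference requires attaching it to the base model; a minimal `peft` sketch (untested):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("fxmarty/tiny-random-GemmaForCausalLM")
model = PeftModel.from_pretrained(base, "ardaspear/03a618c1-5add-42e4-a59b-f8fdd2d2a5b0")
tokenizer = AutoTokenizer.from_pretrained("fxmarty/tiny-random-GemmaForCausalLM")
```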
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0179 | 1 | 12.4331 |
| 12.4336 | 0.1607 | 9 | 12.4329 |
| 12.4304 | 0.3214 | 18 | 12.4325 |
| 12.431 | 0.4821 | 27 | 12.4320 |
| 12.4322 | 0.6429 | 36 | 12.4315 |
| 12.4293 | 0.8036 | 45 | 12.4308 |
| 12.4282 | 0.9643 | 54 | 12.4303 |
| 12.4279 | 1.125 | 63 | 12.4297 |
| 12.4283 | 1.2857 | 72 | 12.4294 |
| 12.4286 | 1.4464 | 81 | 12.4291 |
| 12.4294 | 1.6071 | 90 | 12.4290 |
| 12.4277 | 1.7679 | 99 | 12.4290 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Word2vec/nlpl_30 | Word2vec | "2023-07-04T15:02:49Z" | 0 | 0 | null | [
"word2vec",
"grc",
"dataset:Ancient_Greek_CoNLL17_corpus",
"license:cc-by-4.0",
"region:us"
] | null | "2023-07-04T10:11:14Z" | ---
language: grc
license: cc-by-4.0
tags:
- word2vec
datasets: Ancient_Greek_CoNLL17_corpus
---
## Information
A word2vec model trained by Andrey Kutuzov ([email protected]) on a vocabulary of size 45742 corresponding to 6876502 tokens from the dataset `Ancient_Greek_CoNLL17_corpus`.
The model was trained without lemmatization and with POS tags, using the Word2Vec Continuous Skipgram algorithm with a window of 10 and a dimensionality of 100.
## How to use?
```python
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download
model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/nlpl_30", filename="model.bin"), binary=True, unicode_errors="ignore")
```
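Once loaded, the vectors behave like any gensim `KeyedVectors` object; a small sketch (the query word is an arbitrary example, and the exact token format, e.g. a POS-tag suffix, depends on the corpus preprocessing):
```python
# Nearest neighbours by cosine similarity over the 45742-word vocabulary;
# the query must match the vocabulary's token format exactly
print(model.most_similar("λόγος", topn=5))
```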
## Citation
Fares, Murhaf; Kutuzov, Andrei; Oepen, Stephan & Velldal, Erik (2017). Word vectors, reuse, and replicability: Towards a community repository of large-text resources, In Jörg Tiedemann (ed.), Proceedings of the 21st Nordic Conference on Computational Linguistics, NoDaLiDa, 22-24 May 2017. Linköping University Electronic Press. ISBN 978-91-7685-601-7
This archive is part of the NLPL Word Vectors Repository (http://vectors.nlpl.eu/repository/), version 2.0, published on Friday, December 27, 2019.
Please see the file 'meta.json' in this archive and the overall repository metadata file http://vectors.nlpl.eu/repository/20.json for additional information.
The life-time identifier for this model is: http://vectors.nlpl.eu/repository/20/30.zip |
chintagunta85/electramed-small-SPECIES800-ner | chintagunta85 | "2022-08-24T06:39:16Z" | 105 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"electra",
"token-classification",
"generated_from_trainer",
"dataset:species_800",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2022-08-24T06:32:07Z" | ---
tags:
- generated_from_trainer
datasets:
- species_800
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: electramed-small-SPECIES800-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: species_800
type: species_800
config: species_800
split: train
args: species_800
metrics:
- name: Precision
type: precision
value: 0.6221498371335505
- name: Recall
type: recall
value: 0.7470664928292047
- name: F1
type: f1
value: 0.6789099526066352
- name: Accuracy
type: accuracy
value: 0.9831434110359828
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# electramed-small-SPECIES800-ner
This model is a fine-tuned version of [giacomomiolo/electramed_small_scivocab](https://huggingface.co/giacomomiolo/electramed_small_scivocab) on the species_800 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0513
- Precision: 0.6221
- Recall: 0.7471
- F1: 0.6789
- Accuracy: 0.9831
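A minimal inference sketch with the token-classification pipeline (untested; `aggregation_strategy="simple"` merges B-/I- subword tags into entity spans):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="chintagunta85/electramed-small-SPECIES800-ner",
    aggregation_strategy="simple",
)
print(ner("Expression was analysed in Escherichia coli and in mice."))
```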
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0536 | 1.0 | 359 | 0.0971 | 0.6138 | 0.5554 | 0.5832 | 0.9795 |
| 0.0309 | 2.0 | 718 | 0.0692 | 0.6175 | 0.6063 | 0.6118 | 0.9808 |
| 0.0563 | 3.0 | 1077 | 0.0582 | 0.6424 | 0.6910 | 0.6658 | 0.9819 |
| 0.0442 | 4.0 | 1436 | 0.0553 | 0.5900 | 0.7523 | 0.6613 | 0.9814 |
| 0.0069 | 5.0 | 1795 | 0.0511 | 0.6291 | 0.7497 | 0.6841 | 0.9827 |
| 0.0141 | 6.0 | 2154 | 0.0505 | 0.6579 | 0.7471 | 0.6996 | 0.9837 |
| 0.0052 | 7.0 | 2513 | 0.0513 | 0.5965 | 0.7458 | 0.6628 | 0.9826 |
| 0.0573 | 8.0 | 2872 | 0.0509 | 0.6140 | 0.7445 | 0.6730 | 0.9828 |
| 0.0203 | 9.0 | 3231 | 0.0516 | 0.6118 | 0.7458 | 0.6722 | 0.9830 |
| 0.0101 | 10.0 | 3590 | 0.0513 | 0.6221 | 0.7471 | 0.6789 | 0.9831 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
AbhishekkV19/bert-base-cased-5kvul-10aug-3nsfw-10w-exp-10ep-s42-2 | AbhishekkV19 | "2023-12-21T10:43:54Z" | 6 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-12-20T06:57:33Z" | ---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert-base-cased-5kvul-10aug-3nsfw-10w-exp-10ep-s42-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-5kvul-10aug-3nsfw-10w-exp-10ep-s42-2
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0011
- Accuracy: 1.0
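A minimal inference sketch (untested; the input sentence is arbitrary and the label set is whatever the fine-tuned classification head defines):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="AbhishekkV19/bert-base-cased-5kvul-10aug-3nsfw-10w-exp-10ep-s42-2")
print(classifier("An example sentence to score."))
```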
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2869 | 1.0 | 607 | 0.0071 | 1.0 |
| 0.2246 | 2.0 | 1214 | 0.0061 | 1.0 |
| 0.1718 | 3.0 | 1821 | 0.0043 | 1.0 |
| 0.1116 | 4.0 | 2428 | 0.0011 | 1.0 |
| 0.0497 | 5.0 | 3035 | 0.0008 | 1.0 |
| 0.0359 | 6.0 | 3642 | 0.0011 | 1.0 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
Youssef1234/whisper-base-specAug | Youssef1234 | "2024-05-31T18:22:08Z" | 91 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:Youssef1234/whisper-base-en-native",
"base_model:finetune:Youssef1234/whisper-base-en-native",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-05-31T14:09:15Z" | ---
license: apache-2.0
base_model: Youssef1234/whisper-base-en-native
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper-base-specAug
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-base-specAug
This model is a fine-tuned version of [Youssef1234/whisper-base-en-native](https://huggingface.co/Youssef1234/whisper-base-en-native) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3759
- Wer: 16.4211
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.0636 | 0.16 | 200 | 0.3404 | 15.5724 |
| 0.0404 | 0.32 | 400 | 0.3638 | 15.9867 |
| 0.0345 | 0.48 | 600 | 0.3759 | 16.4211 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.2
- Datasets 2.19.0
- Tokenizers 0.15.2
|
mradermacher/reddit-financial_wisdom-GGUF | mradermacher | "2024-11-08T08:28:02Z" | 37 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:kowalsky/reddit-financial_wisdom",
"base_model:quantized:kowalsky/reddit-financial_wisdom",
"endpoints_compatible",
"region:us"
] | null | "2024-11-08T08:26:48Z" | ---
base_model: kowalsky/reddit-financial_wisdom
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/kowalsky/reddit-financial_wisdom
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
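As one concrete option, a minimal sketch with the `llama-cpp-python` bindings (the choice of runtime and the generation settings are assumptions; any GGUF-capable runtime such as llama.cpp itself works, and the filename comes from the quant table below):
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    "mradermacher/reddit-financial_wisdom-GGUF",
    "reddit-financial_wisdom.Q4_K_M.gguf",
)
llm = Llama(model_path=path)
print(llm("What is dollar-cost averaging?", max_tokens=64)["choices"][0]["text"])
```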
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/reddit-financial_wisdom-GGUF/resolve/main/reddit-financial_wisdom.Q2_K.gguf) | Q2_K | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/reddit-financial_wisdom-GGUF/resolve/main/reddit-financial_wisdom.Q3_K_S.gguf) | Q3_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/reddit-financial_wisdom-GGUF/resolve/main/reddit-financial_wisdom.Q3_K_M.gguf) | Q3_K_M | 0.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/reddit-financial_wisdom-GGUF/resolve/main/reddit-financial_wisdom.Q3_K_L.gguf) | Q3_K_L | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/reddit-financial_wisdom-GGUF/resolve/main/reddit-financial_wisdom.IQ4_XS.gguf) | IQ4_XS | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/reddit-financial_wisdom-GGUF/resolve/main/reddit-financial_wisdom.Q4_0_4_4.gguf) | Q4_0_4_4 | 0.2 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/reddit-financial_wisdom-GGUF/resolve/main/reddit-financial_wisdom.Q4_K_S.gguf) | Q4_K_S | 0.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/reddit-financial_wisdom-GGUF/resolve/main/reddit-financial_wisdom.Q4_K_M.gguf) | Q4_K_M | 0.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/reddit-financial_wisdom-GGUF/resolve/main/reddit-financial_wisdom.Q5_K_S.gguf) | Q5_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/reddit-financial_wisdom-GGUF/resolve/main/reddit-financial_wisdom.Q5_K_M.gguf) | Q5_K_M | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/reddit-financial_wisdom-GGUF/resolve/main/reddit-financial_wisdom.Q6_K.gguf) | Q6_K | 0.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/reddit-financial_wisdom-GGUF/resolve/main/reddit-financial_wisdom.Q8_0.gguf) | Q8_0 | 0.3 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/reddit-financial_wisdom-GGUF/resolve/main/reddit-financial_wisdom.f16.gguf) | f16 | 0.4 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
danielkosyra/testdatasetpretraining | danielkosyra | "2024-05-13T10:02:05Z" | 7 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-05-05T14:35:30Z" | ---
license: mit
base_model: gpt2
tags:
- generated_from_trainer
model-index:
- name: testdatasetpretraining
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# testdatasetpretraining
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.3016
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0006
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 10
- total_train_batch_size: 320
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 250
- training_steps: 1250
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-------:|:----:|:---------------:|
| 5.7934 | 2.4777 | 250 | 4.3840 |
| 3.6869 | 4.9554 | 500 | 3.6285 |
| 3.0053 | 7.4331 | 750 | 3.3747 |
| 2.6172 | 9.9108 | 1000 | 3.2880 |
| 2.3081 | 12.3885 | 1250 | 3.3016 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
|
Casual-Autopsy/snowflake-arctic-embed-l-v2.0-gguf | Casual-Autopsy | "2025-02-06T19:44:38Z" | 1,708 | 2 | sentence-transformers | [
"sentence-transformers",
"gguf",
"xlm-roberta",
"mteb",
"arctic",
"snowflake-arctic-embed",
"text-embeddings-inference",
"sentence-similarity",
"af",
"ar",
"az",
"be",
"bg",
"bn",
"ca",
"ceb",
"cs",
"cy",
"da",
"de",
"el",
"en",
"es",
"et",
"eu",
"fa",
"fi",
"fr",
"gl",
"gu",
"he",
"hi",
"hr",
"ht",
"hu",
"hy",
"id",
"is",
"it",
"ja",
"jv",
"ka",
"kk",
"km",
"kn",
"ko",
"ky",
"lo",
"lt",
"lv",
"mk",
"ml",
"mn",
"mr",
"ms",
"my",
"ne",
"nl",
"pa",
"pl",
"pt",
"qu",
"ro",
"ru",
"si",
"sk",
"sl",
"so",
"sq",
"sr",
"sv",
"sw",
"ta",
"te",
"th",
"tl",
"tr",
"uk",
"ur",
"vi",
"yo",
"zh",
"arxiv:2412.04506",
"base_model:Snowflake/snowflake-arctic-embed-l-v2.0",
"base_model:quantized:Snowflake/snowflake-arctic-embed-l-v2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us",
"feature-extraction"
] | sentence-similarity | "2025-02-06T19:16:16Z" | ---
base_model:
- Snowflake/snowflake-arctic-embed-l-v2.0
pipeline_tag: sentence-similarity
tags:
- xlm-roberta
- mteb
- arctic
- snowflake-arctic-embed
- text-embeddings-inference
library_name: sentence-transformers
language:
- af
- ar
- az
- be
- bg
- bn
- ca
- ceb
- cs
- cy
- da
- de
- el
- en
- es
- et
- eu
- fa
- fi
- fr
- gl
- gu
- he
- hi
- hr
- ht
- hu
- hy
- id
- is
- it
- ja
- jv
- ka
- kk
- km
- kn
- ko
- ky
- lo
- lt
- lv
- mk
- ml
- mn
- mr
- ms
- my
- ne
- nl
- pa
- pl
- pt
- qu
- ro
- ru
- si
- sk
- sl
- so
- sq
- sr
- sv
- sw
- ta
- te
- th
- tl
- tr
- uk
- ur
- vi
- yo
- zh
---
GGUF quants of [Snowflake/snowflake-arctic-embed-l-v2.0](https://huggingface.co/Snowflake/snowflake-arctic-embed-l-v2.0) created using [llama.cpp](https://github.com/ggerganov/llama.cpp)
Original model card:
***
<h1 align="center">Snowflake's Arctic-embed-l-v2.0</h1>
<h4 align="center">
<p>
<a href=#news>News</a> |
<a href=#models>Models</a> |
<a href=#usage>Usage</a> |
<a href="#evaluation">Evaluation</a> |
<a href="#contact">Contact</a> |
<a href="#faq">FAQ</a>
<a href="#license">License</a> |
<a href="#acknowledgement">Acknowledgement</a>
<p>
</h4>
## News
- 12/11/2024: Release of [Technical Report](https://arxiv.org/abs/2412.04506)
- 12/04/2024: Release of [snowflake-arctic-embed-l-v2.0](https://huggingface.co/Snowflake/snowflake-arctic-embed-l-v2.0) and [snowflake-arctic-embed-m-v2.0](https://huggingface.co/Snowflake/snowflake-arctic-embed-m-v2.0), our newest models designed with multilingual workloads in mind.
## Models
Snowflake arctic-embed-l-v2.0 is the newest addition to the suite of embedding models Snowflake has released, optimized for retrieval performance and inference efficiency.
Arctic Embed 2.0 introduces a new standard for multilingual embedding models, delivering high-quality multilingual text retrieval without sacrificing performance in English.
Released under the permissive Apache 2.0 license, Arctic Embed 2.0 is ideal for applications that demand reliable, enterprise-grade multilingual search and retrieval at scale.
Key Features:
1. Multilingual without compromise: Excels in English and non-English retrieval, outperforming leading open-source and proprietary models on benchmarks like MTEB Retrieval, CLEF, and MIRACL.
2. Inference efficiency: with only 303M non-embedding parameters, inference is fast and efficient at any scale.
3. Compression-friendly: Achieves high-quality retrieval with embeddings as small as 128 bytes/vector using Matryoshka Representation Learning (MRL) and quantization-aware embedding training.
4. Drop-In Replacement: arctic-embed-l-v2.0 builds on [BAAI/bge-m3-retromae](https://huggingface.co/BAAI/bge-m3-retromae) which allows direct drop-in inference replacement with any form of new libraries, kernels, inference engines etc.
5. Long Context Support: arctic-embed-l-v2.0 builds on [BAAI/bge-m3-retromae](https://huggingface.co/BAAI/bge-m3-retromae) which can support a context window of up to 8192 via the use of RoPE.
### Quality Benchmarks
Unlike most other open-source models, Arctic-embed-l-v2.0 excels across English (via MTEB Retrieval) and multilingual (via MIRACL and CLEF).
You no longer need to maintain separate models to get high-quality English and multilingual retrieval. All numbers mentioned below are the average NDCG@10 across the dataset being discussed.
| Model Name | # params | # non-emb params | # dimensions | BEIR (15) | MIRACL (4) | CLEF (Focused) | CLEF (Full) |
|---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| **snowflake-arctic-l-v2.0** | 568M | 303M | 1024 | **55.6** | 55.8 | **52.9** | **54.3** |
| snowflake-arctic-m | 109M | 86M | 768 | 54.9 | 24.9 | 34.4 | 29.1 |
| snowflake-arctic-l | 335M | 303M | 1024 | 56.0 | 34.8 | 38.2 | 33.7 |
| me5 base | 560M | 303M | 1024 | 51.4 | 54.0 | 43.0 | 34.6 |
| bge-m3 (BAAI) | 568M | 303M | 1024 | 48.8 | **56.8** | 40.8 | 41.3 |
| gte (Alibaba) | 305M | 113M | 768 | 51.1 | 52.3 | 47.7 | 53.1 |
Aside from high-quality retrieval, Arctic delivers embeddings that are easily compressible. Leverage vector truncation via MRL to decrease vector size by 4x with less than 3% degradation in quality.
Combine MRL-truncated vectors with vector compression (Int4) to power retrieval in 128 bytes per doc.
| Model | Dim | BEIR (15) | Relative Performance | MIRACL (4) | Relative Performance | CLEF (5) | Relative Performance | CLEF (Full) | Relative Performance |
|:---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| snowflake-arctic-l-v2.0 | 1024 | 55.6 | N/A | 55.8 | N/A | 52.9 | N/A | 54.3 | N/A |
| snowflake-arctic-l-v2.0 | 256 | 54.3 | -0.18% | 54.3 | -2.70% | 51.9 | -1.81% | 53.4 | -1.53% |
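Truncation itself is a one-liner; a minimal sketch of MRL-style compression, assuming `embeddings` is an (n, 1024) float array produced by the model (re-normalizing after the slice is the key step):
```python
import numpy as np

def truncate_mrl(embeddings: np.ndarray, dim: int = 256) -> np.ndarray:
    """Keep the first `dim` components and re-normalize to unit length."""
    cut = embeddings[:, :dim]
    return cut / np.linalg.norm(cut, axis=1, keepdims=True)
```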
## Usage
### Using Sentence Transformers
```python
from sentence_transformers import SentenceTransformer
# Load the model
model_name = 'Snowflake/snowflake-arctic-embed-l-v2.0'
model = SentenceTransformer(model_name)
# Define the queries and documents
queries = ['what is snowflake?', 'Where can I get the best tacos?']
documents = ['The Data Cloud!', 'Mexico City of Course!']
# Compute embeddings: use `prompt_name="query"` to encode queries!
query_embeddings = model.encode(queries, prompt_name="query")
document_embeddings = model.encode(documents)
# Compute cosine similarity scores
scores = model.similarity(query_embeddings, document_embeddings)
# Output the results
for query, query_scores in zip(queries, scores):
doc_score_pairs = list(zip(documents, query_scores))
doc_score_pairs = sorted(doc_score_pairs, key=lambda x: x[1], reverse=True)
print("Query:", query)
for document, score in doc_score_pairs:
print(score, document)
```
### Using Huggingface Transformers
You can use the transformers package to use Snowflake's arctic-embed model, as shown below. For optimal retrieval quality, use the CLS token to embed each text portion and use the query prefix below (just on the query).
```python
import torch
from transformers import AutoModel, AutoTokenizer
model_name = 'Snowflake/snowflake-arctic-embed-l-v2.0'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name, add_pooling_layer=False)
model.eval()
query_prefix = 'query: '
queries = ['what is snowflake?', 'Where can I get the best tacos?']
queries_with_prefix = ["{}{}".format(query_prefix, i) for i in queries]
query_tokens = tokenizer(queries_with_prefix, padding=True, truncation=True, return_tensors='pt', max_length=8192)
documents = ['The Data Cloud!', 'Mexico City of Course!']
document_tokens = tokenizer(documents, padding=True, truncation=True, return_tensors='pt', max_length=8192)
# Compute token embeddings
with torch.no_grad():
query_embeddings = model(**query_tokens)[0][:, 0]
document_embeddings = model(**document_tokens)[0][:, 0]
# normalize embeddings
query_embeddings = torch.nn.functional.normalize(query_embeddings, p=2, dim=1)
document_embeddings = torch.nn.functional.normalize(document_embeddings, p=2, dim=1)
scores = torch.mm(query_embeddings, document_embeddings.transpose(0, 1))
for query, query_scores in zip(queries, scores):
doc_score_pairs = list(zip(documents, query_scores))
doc_score_pairs = sorted(doc_score_pairs, key=lambda x: x[1], reverse=True)
#Output passages & scores
print("Query:", query)
for document, score in doc_score_pairs:
print(score, document)
```
This should produce the following scores
```
Query: what is snowflake?
tensor(0.2715) The Data Cloud!
tensor(0.0661) Mexico City of Course!
Query: Where can I get the best tacos?
tensor(0.2797) Mexico City of Course!
tensor(0.1250) The Data Cloud!
```
### Using Huggingface Transformers.js
If you haven't already, you can install the [Transformers.js](https://huggingface.co/docs/transformers.js) JavaScript library from [NPM](https://www.npmjs.com/package/@huggingface/transformers) using:
```bash
npm i @huggingface/transformers
```
You can then use the model for retrieval, as follows:
```js
import { pipeline, dot } from '@huggingface/transformers';
// Create feature extraction pipeline
const extractor = await pipeline('feature-extraction', 'Snowflake/snowflake-arctic-embed-m-v2.0', {
dtype: 'q8',
});
// Generate sentence embeddings
const sentences = [
'query: what is snowflake?',
'The Data Cloud!',
'Mexico City of Course!',
]
const output = await extractor(sentences, { normalize: true, pooling: 'cls' });
// Compute similarity scores
const [source_embeddings, ...document_embeddings ] = output.tolist();
const similarities = document_embeddings.map(x => dot(source_embeddings, x));
console.log(similarities); // [0.24783534471401417, 0.05313122704326892]
```
## Contact
Feel free to open an issue or pull request if you have any questions or suggestions about this project.
You can also email Daniel Campos ([email protected]).
## License
Arctic is licensed under the [Apache-2](https://www.apache.org/licenses/LICENSE-2.0). The released models can be used for commercial purposes free of charge. |
sanchit-gandhi/distil-zephyr-1.5b-ssft-ultrachat-200k | sanchit-gandhi | "2024-04-26T14:35:59Z" | 138 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"mistral",
"text-generation",
"alignment-handbook",
"trl",
"sft",
"generated_from_trainer",
"conversational",
"dataset:HuggingFaceH4/ultrachat_200k",
"base_model:sanchit-gandhi/distil-zephyr-1.5b-ssft-ultrachat",
"base_model:finetune:sanchit-gandhi/distil-zephyr-1.5b-ssft-ultrachat",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-04-26T13:56:34Z" | ---
base_model: sanchit-gandhi/distil-zephyr-1.5b-ssft-ultrachat
tags:
- alignment-handbook
- trl
- sft
- generated_from_trainer
- trl
- sft
- alignment-handbook
- generated_from_trainer
datasets:
- HuggingFaceH4/ultrachat_200k
model-index:
- name: sanchit-gandhi/distil-zephyr-1.5b-ssft-ultrachat
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sanchit-gandhi/distil-zephyr-1.5b-ssft-ultrachat
This model is a fine-tuned version of [sanchit-gandhi/distil-zephyr-1.5b-ssft-ultrachat](https://huggingface.co/sanchit-gandhi/distil-zephyr-1.5b-ssft-ultrachat) on the HuggingFaceH4/ultrachat_200k dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1555
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 256
- total_eval_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.1709 | 1.0 | 545 | 1.1553 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.2+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
|
teven/cross_all-mpnet-base-v2_finetuned_WebNLG2017 | teven | "2022-09-20T12:53:00Z" | 3 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | sentence-similarity | "2022-09-20T12:52:53Z" | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# teven/cross_all-mpnet-base-v2_finetuned_WebNLG2017
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('teven/cross_all-mpnet-base-v2_finetuned_WebNLG2017')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('teven/cross_all-mpnet-base-v2_finetuned_WebNLG2017')
model = AutoModel.from_pretrained('teven/cross_all-mpnet-base-v2_finetuned_WebNLG2017')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=teven/cross_all-mpnet-base-v2_finetuned_WebNLG2017)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
kaitchup/Llama-2-7B-oasstguanaco-adapter-1e | kaitchup | "2024-04-18T19:27:39Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-04-18T19:27:31Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
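In the meantime, the repository name suggests a LoRA adapter for Llama-2-7B, so a plausible starting point is attaching it to the base model with `peft` (a sketch built on assumptions: the base model is inferred from the name, and the Meta weights are gated):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Llama-2-7b-hf"  # inferred from the repo name (an assumption)
base = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, "kaitchup/Llama-2-7B-oasstguanaco-adapter-1e")
tokenizer = AutoTokenizer.from_pretrained(base_id)
```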
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ChrisKalahiki/ppo-LunarLander-v2 | ChrisKalahiki | "2024-01-11T20:01:30Z" | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2024-01-11T20:01:07Z" | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 240.31 +/- 12.07
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename follows the usual `huggingface_sb3` naming convention and is an assumption; check the repo's Files tab):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub, then restore the PPO policy
checkpoint = load_from_hub(repo_id="ChrisKalahiki/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
Laurie/qlora-v1 | Laurie | "2023-06-15T04:10:06Z" | 0 | 0 | null | [
"en",
"dataset:tatsu-lab/alpaca",
"license:apache-2.0",
"region:us"
] | null | "2023-06-01T10:15:29Z" | ---
license: apache-2.0
datasets:
- tatsu-lab/alpaca
language:
- en
---
- model_name_or_path = yahma/llama-13b-hf
- learning_rate = 1e-4
- max_steps = 10000
- epoch = 3.08
- train_loss = 0.7169
- train_runtime = 1 day, 7:23:32.67
- train_samples_per_second = 1.416
- train_steps_per_second = 0.088 |
vjkrish/taxi-v3 | vjkrish | "2023-01-06T15:46:22Z" | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2023-01-06T15:46:16Z" | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.74
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gymnasium as gym  # or the classic `gym` package, depending on your setup
# `load_from_hub` is the pickle-loading helper from the Hugging Face Deep RL course
model = load_from_hub(repo_id="vjkrish/taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
apple/MobileCLIP-S2 | apple | "2025-02-28T18:39:27Z" | 42 | 6 | mobileclip | [
"mobileclip",
"arxiv:2311.17049",
"arxiv:2103.00020",
"arxiv:2303.15343",
"arxiv:2309.17425",
"license:apple-amlr",
"region:us"
] | null | "2024-03-06T17:14:03Z" | ---
license: apple-amlr
license_name: apple-ascl
license_link: https://github.com/apple/ml-mobileclip/blob/main/LICENSE_weights_data
library_name: mobileclip
---
# MobileCLIP: Fast Image-Text Models through Multi-Modal Reinforced Training
MobileCLIP was introduced in [MobileCLIP: Fast Image-Text Models through Multi-Modal Reinforced Training
](https://arxiv.org/pdf/2311.17049.pdf) (CVPR 2024), by Pavan Kumar Anasosalu Vasu, Hadi Pouransari, Fartash Faghri, Raviteja Vemulapalli, Oncel Tuzel.
This repository contains the **MobileCLIP-S2** checkpoint.

### Highlights
* Our smallest variant `MobileCLIP-S0` obtains similar zero-shot performance as [OpenAI](https://arxiv.org/abs/2103.00020)'s ViT-B/16 model while being 4.8x faster and 2.8x smaller.
* `MobileCLIP-S2` obtains better avg zero-shot performance than [SigLIP](https://arxiv.org/abs/2303.15343)'s ViT-B/16 model while being 2.3x faster and 2.1x smaller, and trained with 3x less seen samples.
* `MobileCLIP-B`(LT) attains zero-shot ImageNet performance of **77.2%** which is significantly better than recent works like [DFN](https://arxiv.org/abs/2309.17425) and [SigLIP](https://arxiv.org/abs/2303.15343) with similar architectures or even [OpenAI's ViT-L/14@336](https://arxiv.org/abs/2103.00020).
## Checkpoints
| Model | # Seen <BR>Samples (B) | # Params (M) <BR> (img + txt) | Latency (ms) <BR> (img + txt) | IN-1k Zero-Shot <BR> Top-1 Acc. (%) | Avg. Perf. (%) <BR> on 38 datasets |
|:----------------------------------------------------------|:----------------------:|:-----------------------------:|:-----------------------------:|:-----------------------------------:|:----------------------------------:|
| [MobileCLIP-S0](https://hf.co/pcuenq/MobileCLIP-S0) | 13 | 11.4 + 42.4 | 1.5 + 1.6 | 67.8 | 58.1 |
| [MobileCLIP-S1](https://hf.co/pcuenq/MobileCLIP-S1) | 13 | 21.5 + 63.4 | 2.5 + 3.3 | 72.6 | 61.3 |
| [MobileCLIP-S2](https://hf.co/pcuenq/MobileCLIP-S2) | 13 | 35.7 + 63.4 | 3.6 + 3.3 | 74.4 | 63.7 |
| [MobileCLIP-B](https://hf.co/pcuenq/MobileCLIP-B) | 13 | 86.3 + 63.4 | 10.4 + 3.3 | 76.8 | 65.2 |
| [MobileCLIP-B (LT)](https://hf.co/pcuenq/MobileCLIP-B-LT) | 36 | 86.3 + 63.4 | 10.4 + 3.3 | 77.2 | 65.8 |
## How to Use
First, download the desired checkpoint by visiting one of the links in the table above: click the `Files and versions` tab and download the PyTorch checkpoint.
For programmatic downloading, if you have `huggingface_hub` installed, you can also run:
```
huggingface-cli download pcuenq/MobileCLIP-S2
```
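Alternatively, the file can be fetched from Python (the filename matches the inference snippet below and is otherwise an assumption):
```py
from huggingface_hub import hf_hub_download

ckpt_path = hf_hub_download("pcuenq/MobileCLIP-S2", "mobileclip_s2.pt")
```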
Then, install [`ml-mobileclip`](https://github.com/apple/ml-mobileclip) by following the instructions in the repo. It uses an API similar to [`open_clip`'s](https://github.com/mlfoundations/open_clip).
You can run inference with a code snippet like the following:
```py
import torch
from PIL import Image
import mobileclip
model, _, preprocess = mobileclip.create_model_and_transforms('mobileclip_s2', pretrained='/path/to/mobileclip_s2.pt')
tokenizer = mobileclip.get_tokenizer('mobileclip_s2')
image = preprocess(Image.open("docs/fig_accuracy_latency.png").convert('RGB')).unsqueeze(0)
text = tokenizer(["a diagram", "a dog", "a cat"])
with torch.no_grad(), torch.cuda.amp.autocast():
image_features = model.encode_image(image)
text_features = model.encode_text(text)
image_features /= image_features.norm(dim=-1, keepdim=True)
text_features /= text_features.norm(dim=-1, keepdim=True)
text_probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)
print("Label probs:", text_probs)
```
|
etri-vilab/Ko-LLaVA-13b | etri-vilab | "2024-01-25T02:44:44Z" | 23 | 6 | transformers | [
"transformers",
"pytorch",
"llava",
"text-generation",
"LLaVA",
"KOALA",
"Ko-LLaVA",
"ko",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-01-24T02:30:36Z" | ---
language:
- ko
pipeline_tag: text-generation
tags:
- LLaVA
- KOALA
- Ko-LLaVA
---
# Ko-LLaVA-13b : Korean Large Language and Vision Assistant for image description
<div align="center">
<img src="https://dl.dropboxusercontent.com/scl/fi/lhf2gtwmtxhrtbupj40ra/img-kollava-i2t.png?rlkey=f3ezc1e4ka58t8qla1zls4qyn&dl=1" width="1024px" />
</div>
<!--
<div align="center">
<img src="https://dl.dropboxusercontent.com/scl/fi/5j6m5xhyays58acvr8r1e/img-kollava-v2t.png?rlkey=j03b9t6g7gejcdes68t1lspjr&dl=1" width="1024px" />
</div>
-->
## License
Llama 2 is licensed under the LLAMA 2 Community License,
Copyright (c) Meta Platforms, Inc. All Rights Reserved.
## Acknowledgement
This work was supported by the Institute of Information & Communications Technology Planning & Evaluation (IITP) grants funded by the Korea government (MSIT) (No. 2022-0-00871, Development of AI Autonomy and Knowledge Enhancement for AI Agent Collaboration) and (No. RS-2022-00187238, Development of Large Korean Language Model Technology for Efficient Pre-training).
## Contact
Yong-Ju Lee([email protected]) |
LucileFavero/AM_model_AAEC_segm_1 | LucileFavero | "2025-01-15T06:22:49Z" | 24 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2025-01-15T06:21:48Z" | ---
base_model: unsloth/meta-llama-3.1-8b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** LucileFavero
- **License:** apache-2.0
- **Finetuned from model :** unsloth/meta-llama-3.1-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
luaqi/phi_03011 | luaqi | "2025-03-01T12:47:08Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-03-01T12:29:26Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
RapidOrc121/BERT_sentiment_analysis | RapidOrc121 | "2024-02-08T08:43:36Z" | 10 | 2 | bertopic | [
"bertopic",
"safetensors",
"distilbert",
"text-classification",
"en",
"dataset:carblacac/twitter-sentiment-analysis",
"region:us"
] | text-classification | "2024-01-26T17:56:23Z" | ---
datasets:
- carblacac/twitter-sentiment-analysis
language:
- en
library_name: bertopic
pipeline_tag: text-classification
---
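A minimal inference sketch (predicted labels map to the emotions listed below):
```python
from transformers import pipeline

clf = pipeline("text-classification", model="RapidOrc121/BERT_sentiment_analysis")
print(clf("I can't stop smiling today!"))  # e.g. [{'label': 'LABEL_1', 'score': ...}] -> joy
```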
- LABEL_0 = "sadness"
- LABEL_1 = "joy"
- LABEL_2 = "love"
- LABEL_3 = "anger"
- LABEL_4 = "fear"
- LABEL_5 = "surprise" |
debussyman/whisper-small-hi | debussyman | "2024-04-24T22:34:34Z" | 77 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"en",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-04-24T22:02:19Z" | ---
language:
- en
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: Whisper Small En - Test Run
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small En - Test Run
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the My Voice Test Run dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0000
- Wer: 0.0
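For transcription, a minimal sketch (the audio path is a placeholder):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="debussyman/whisper-small-hi")
print(asr("path/to/audio.wav")["text"])
```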
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:---:|
| 0.0 | 1000.0 | 1000 | 0.0000 | 0.0 |
| 0.0 | 2000.0 | 2000 | 0.0000 | 0.0 |
| 0.0 | 3000.0 | 3000 | 0.0000 | 0.0 |
| 0.0 | 4000.0 | 4000 | 0.0000 | 0.0 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
|
huggingtweets/bbcqos | huggingtweets | "2021-08-17T10:52:33Z" | 4 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2022-03-02T23:29:05Z" | ---
language: en
thumbnail: https://www.huggingtweets.com/bbcqos/1629197549972/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1073647682487410688/2yrbD4RY_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">♠Jenny Summers♠</div>
<div style="text-align: center; font-size: 14px;">@bbcqos</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from ♠Jenny Summers♠.
| Data | ♠Jenny Summers♠ |
| --- | --- |
| Tweets downloaded | 882 |
| Retweets | 55 |
| Short tweets | 483 |
| Tweets kept | 344 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2uwts9v5/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @bbcqos's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1shy0ous) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1shy0ous/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/bbcqos')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
google/t5-efficient-small-el4 | google | "2023-01-24T16:49:01Z" | 6 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"t5",
"text2text-generation",
"deep-narrow",
"en",
"dataset:c4",
"arxiv:2109.10686",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | "2022-03-02T23:29:05Z" | ---
language:
- en
datasets:
- c4
tags:
- deep-narrow
inference: false
license: apache-2.0
---
# T5-Efficient-SMALL-EL4 (Deep-Narrow version)
T5-Efficient-SMALL-EL4 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5).
It is a *pretrained-only* checkpoint and was released with the
paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)**
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures
of similar parameter count.
To quote the paper:
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
## Details model architecture
This model checkpoint - **t5-efficient-small-el4** - is of model type **Small** with the following variations:
- **el** is **4**
It has **54.23** million parameters and thus requires *ca.* **216.9 MB** of memory in full precision (*fp32*)
or **108.45 MB** of memory in half precision (*fp16* or *bf16*).
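As a quick sanity check, these figures are just the parameter count multiplied by the bytes per parameter:

$$
54.23 \times 10^6 \,\text{params} \times 4\,\text{bytes} \approx 216.9\,\text{MB (fp32)}, \qquad
54.23 \times 10^6 \,\text{params} \times 2\,\text{bytes} \approx 108.45\,\text{MB (fp16/bf16)}
$$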
A summary of the *original* T5 model architectures can be seen here:
| Model | nl (el/dl) | ff | dm | kv | nh | #Params|
| ----| ---- | ---- | ---- | ---- | ---- | ----|
| Tiny | 4/4 | 1024 | 256 | 32 | 4 | 16M|
| Mini | 4/4 | 1536 | 384 | 32 | 8 | 31M|
| Small | 6/6 | 2048 | 512 | 32 | 8 | 60M|
| Base | 12/12 | 3072 | 768 | 64 | 12 | 220M|
| Large | 24/24 | 4096 | 1024 | 64 | 16 | 738M|
| Xl | 24/24 | 16384 | 1024 | 128 | 32 | 3B|
| XXl | 24/24 | 65536 | 1024 | 128 | 128 | 11B|
whereas the following abbreviations are used:
| Abbreviation | Definition |
| ----| ---- |
| nl | Number of transformer blocks (depth) |
| dm | Dimension of embedding vector (output vector of transformers block) |
| kv | Dimension of key/value projection matrix |
| nh | Number of attention heads |
| ff | Dimension of intermediate vector within transformer block (size of feed-forward projection matrix) |
| el | Number of transformer blocks in the encoder (encoder depth) |
| dl | Number of transformer blocks in the decoder (decoder depth) |
| sh | Signifies that attention heads are shared |
| skv | Signifies that key-values projection matrices are tied |
If a model checkpoint has no specific *el* or *dl*, then both the number of encoder and decoder layers correspond to *nl*.
## Pre-Training
The checkpoint was pretrained on the [Colossal, Cleaned version of Common Crawl (C4)](https://huggingface.co/datasets/c4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
## Fine-Tuning
**Note**: This model is a **pretrained** checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
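As a minimal starting point before fine-tuning, the checkpoint loads like any other T5 model (a short sketch; tokenizer files are assumed to ship with the repo):
```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("google/t5-efficient-small-el4")
model = T5ForConditionalGeneration.from_pretrained("google/t5-efficient-small-el4")
```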
You can follow one of the following examples on how to fine-tune the model:
*PyTorch*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization)
- [Question Answering](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_seq2seq_qa.py)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/flax/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/flax/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
## Downstream Performance
TODO: Add table if available
## Computational Complexity
TODO: Add table if available
## More information
We strongly recommend the reader to go carefully through the original paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** to get a more nuanced understanding of this model checkpoint.
As explained in the following [issue](https://github.com/google-research/google-research/issues/986#issuecomment-1035051145), checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers as they are probably of limited practical usage and are lacking a more detailed description. Those checkpoints are kept [here](https://huggingface.co/NewT5SharedHeadsSharedKeyValues) as they might be ported potentially in the future. |
emilykang/Gemma_medquad-symptoms_lora | emilykang | "2024-05-16T14:33:33Z" | 9 | 0 | peft | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:google/gemma-2b",
"base_model:adapter:google/gemma-2b",
"license:gemma",
"region:us"
] | null | "2024-05-16T11:56:39Z" | ---
license: gemma
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: google/gemma-2b
datasets:
- generator
model-index:
- name: Gemma_medquad-symptoms_lora
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Gemma_medquad-symptoms_lora
This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/google/gemma-2b) on the generator dataset.
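Because this repository holds a PEFT LoRA adapter rather than full model weights, a minimal loading sketch looks like this:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("google/gemma-2b")
model = PeftModel.from_pretrained(base, "emilykang/Gemma_medquad-symptoms_lora")
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b")
```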
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 10
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.1
- Pytorch 2.0.1+cu117
- Datasets 2.19.0
- Tokenizers 0.19.1 |
DKYoon/gemma-2-2b-it-seahorse4dpo | DKYoon | "2024-11-21T10:14:32Z" | 93 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-11-21T10:10:06Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Zoyd/DavidAU_Psyonic-Cetacean-V1-20B-Ultra-Quality-Float32-8_0bpw_exl2 | Zoyd | "2024-06-04T13:49:39Z" | 8 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"32 bit upscale",
"full 32 bit precision",
"master files",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"exl2",
"region:us"
] | text-generation | "2024-06-04T13:35:47Z" | ---
license: apache-2.0
language:
- en
tags:
- 32 bit upscale
- full 32 bit precision
- master files
---
**Exllamav2** quant (**exl2** / **8.0 bpw**) made with ExLlamaV2 v0.1.3
Other EXL2 quants:
| **Quant** | **Model Size** | **lm_head** |
| ----- | ---------- | ------- |
|<center>**[2.2](https://huggingface.co/Zoyd/DavidAU_Psyonic-Cetacean-V1-20B-Ultra-Quality-Float32-2_2bpw_exl2)**</center> | <center>5594 MB</center> | <center>6</center> |
|<center>**[2.5](https://huggingface.co/Zoyd/DavidAU_Psyonic-Cetacean-V1-20B-Ultra-Quality-Float32-2_5bpw_exl2)**</center> | <center>6297 MB</center> | <center>6</center> |
|<center>**[3.0](https://huggingface.co/Zoyd/DavidAU_Psyonic-Cetacean-V1-20B-Ultra-Quality-Float32-3_0bpw_exl2)**</center> | <center>7470 MB</center> | <center>6</center> |
|<center>**[3.5](https://huggingface.co/Zoyd/DavidAU_Psyonic-Cetacean-V1-20B-Ultra-Quality-Float32-3_5bpw_exl2)**</center> | <center>8640 MB</center> | <center>6</center> |
|<center>**[3.75](https://huggingface.co/Zoyd/DavidAU_Psyonic-Cetacean-V1-20B-Ultra-Quality-Float32-3_75bpw_exl2)**</center> | <center>9228 MB</center> | <center>6</center> |
|<center>**[4.0](https://huggingface.co/Zoyd/DavidAU_Psyonic-Cetacean-V1-20B-Ultra-Quality-Float32-4_0bpw_exl2)**</center> | <center>9813 MB</center> | <center>6</center> |
|<center>**[4.25](https://huggingface.co/Zoyd/DavidAU_Psyonic-Cetacean-V1-20B-Ultra-Quality-Float32-4_25bpw_exl2)**</center> | <center>10398 MB</center> | <center>6</center> |
|<center>**[5.0](https://huggingface.co/Zoyd/DavidAU_Psyonic-Cetacean-V1-20B-Ultra-Quality-Float32-5_0bpw_exl2)**</center> | <center>12155 MB</center> | <center>6</center> |
|<center>**[6.0](https://huggingface.co/Zoyd/DavidAU_Psyonic-Cetacean-V1-20B-Ultra-Quality-Float32-6_0bpw_exl2)**</center> | <center>14506 MB</center> | <center>8</center> |
|<center>**[6.5](https://huggingface.co/Zoyd/DavidAU_Psyonic-Cetacean-V1-20B-Ultra-Quality-Float32-6_5bpw_exl2)**</center> | <center>15688 MB</center> | <center>8</center> |
|<center>**[8.0](https://huggingface.co/Zoyd/DavidAU_Psyonic-Cetacean-V1-20B-Ultra-Quality-Float32-8_0bpw_exl2)**</center> | <center>16737 MB</center> | <center>8</center> |
<h3> Master Files for Ultra High Quality Remasters of "Psyonic-Cetacean" 20B </h3>
<img src="science-mad.jpg" width=300px height=300px style="float:right; padding:5px;">
May "Space Whale" swim in the oceans of the universe forever!
This repo contains the full precision (32 bit) master files for 32 bit upscales created by "DavidAU" of:
https://huggingface.co/DavidAU/Psyonic-Cetacean-Ultra-Quality-20b-GGUF-imatrix
And
https://huggingface.co/DavidAU/Psyonic-Cetacean-Ultra-Quality-20b-GGUF
Please view either repo for details on the remaster's results and other important information.
<font color="red"><B>IMPORTANT NOTES For Maximum Results: </b></font>
These are "final" result files of the full precision rebuild (including end result merge(s)) minus
GGUF and Imatrix level upscaling / adjustments which occuring during "GGUFing" processes.
If you use these to create your own GGUFs, please use "outfile" at F32 for best results. If
you use F16 this will reduce the quality by a factor of 2 or higher.
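A sketch of that conversion step with llama.cpp's converter (the script name varies across llama.cpp versions, and the paths/filenames are placeholders):
```
python convert-hf-to-gguf.py /path/to/master-files --outtype f32 --outfile psyonic-cetacean-20b-f32.gguf
```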
Imatrix processes should use a stable dataset(s) of at least 500 "chunks" or more.
If smaller dataset(s) are used this may corrupt or reduce the quality of the Imatrix builds.
Due to the precision remaster there will be "greater" distance between each quant - both
non imatrix and imatrix.
IE: The jump in quality, instruction following, "ai brainpower", nuance and output
between Q4 and Q5 and likewise Q5 and Q6 will be larger than normal.
Same applies to "Imatrix" quants.
In addition there will also be differences between exact Imatrix and non-imatrix quants
especially in terms of "creative uses" and/or uses where there is no "right answer".
Finally, in terms of prompts:
You may find longer prompts are no longer required and/or you may need to reduce the size
of prompts in usage. This is a factor due to the precision upscale.
Doing this will ensure the quality of the upscale is maximized in the GGUFs.
/* GPTQers:
Suggest 4bit-Act32 TRUE for best results.
/* EXL2ers:
Suggest Min 4.5 BPW or higher ; 6 BPW and up is especially potent.
Strongly suggest you do not reduce layer bit count, as this will affect depth and nuance.
The more BPW the better.
Happy GGUFing, EXL2ing, GPTQing, AWQing, HQQing and of course "Merging".
<b>LONG LIVE OPEN SOURCE!</B>
<I>DavidAU</I>
/* Drop me a note when up, and I will link the masters to your repos. |
mradermacher/llama-3-wissenschaft-8B-i1-GGUF | mradermacher | "2025-01-18T07:24:06Z" | 625 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:nbeerbower/llama-3-wissenschaft-8B",
"base_model:quantized:nbeerbower/llama-3-wissenschaft-8B",
"license:other",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | "2025-01-18T06:26:16Z" | ---
base_model: nbeerbower/llama-3-wissenschaft-8B
language:
- en
library_name: transformers
license: other
license_name: llama3
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/nbeerbower/llama-3-wissenschaft-8B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/llama-3-wissenschaft-8B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
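For example, a single quant from this repo can be fetched with the filename shown in the table below:
```
huggingface-cli download mradermacher/llama-3-wissenschaft-8B-i1-GGUF llama-3-wissenschaft-8B.i1-Q4_K_M.gguf --local-dir .
```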
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/llama-3-wissenschaft-8B-i1-GGUF/resolve/main/llama-3-wissenschaft-8B.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/llama-3-wissenschaft-8B-i1-GGUF/resolve/main/llama-3-wissenschaft-8B.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/llama-3-wissenschaft-8B-i1-GGUF/resolve/main/llama-3-wissenschaft-8B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-wissenschaft-8B-i1-GGUF/resolve/main/llama-3-wissenschaft-8B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-wissenschaft-8B-i1-GGUF/resolve/main/llama-3-wissenschaft-8B.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-wissenschaft-8B-i1-GGUF/resolve/main/llama-3-wissenschaft-8B.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-wissenschaft-8B-i1-GGUF/resolve/main/llama-3-wissenschaft-8B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 3.1 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/llama-3-wissenschaft-8B-i1-GGUF/resolve/main/llama-3-wissenschaft-8B.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/llama-3-wissenschaft-8B-i1-GGUF/resolve/main/llama-3-wissenschaft-8B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/llama-3-wissenschaft-8B-i1-GGUF/resolve/main/llama-3-wissenschaft-8B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-wissenschaft-8B-i1-GGUF/resolve/main/llama-3-wissenschaft-8B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/llama-3-wissenschaft-8B-i1-GGUF/resolve/main/llama-3-wissenschaft-8B.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/llama-3-wissenschaft-8B-i1-GGUF/resolve/main/llama-3-wissenschaft-8B.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-wissenschaft-8B-i1-GGUF/resolve/main/llama-3-wissenschaft-8B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/llama-3-wissenschaft-8B-i1-GGUF/resolve/main/llama-3-wissenschaft-8B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/llama-3-wissenschaft-8B-i1-GGUF/resolve/main/llama-3-wissenschaft-8B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-wissenschaft-8B-i1-GGUF/resolve/main/llama-3-wissenschaft-8B.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/llama-3-wissenschaft-8B-i1-GGUF/resolve/main/llama-3-wissenschaft-8B.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.8 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/llama-3-wissenschaft-8B-i1-GGUF/resolve/main/llama-3-wissenschaft-8B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/llama-3-wissenschaft-8B-i1-GGUF/resolve/main/llama-3-wissenschaft-8B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/llama-3-wissenschaft-8B-i1-GGUF/resolve/main/llama-3-wissenschaft-8B.i1-Q4_1.gguf) | i1-Q4_1 | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-wissenschaft-8B-i1-GGUF/resolve/main/llama-3-wissenschaft-8B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-wissenschaft-8B-i1-GGUF/resolve/main/llama-3-wissenschaft-8B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-wissenschaft-8B-i1-GGUF/resolve/main/llama-3-wissenschaft-8B.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
serkanBurakOrs/rl_course_vizdoom_health_gathering_supreme | serkanBurakOrs | "2023-05-22T12:54:02Z" | 0 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2023-05-22T12:53:55Z" | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 10.02 +/- 4.52
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r serkanBurakOrs/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
# enjoy script for ViZDoom envs, per the Sample-Factory docs
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
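For example (a sketch; the `--hf_repository` flag follows the Sample-Factory docs):
```
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --push_to_hub --hf_repository=<your-username>/rl_course_vizdoom_health_gathering_supreme
```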
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
# train script for ViZDoom envs, per the Sample-Factory docs
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
TFOCUS/The-Immovable-Wisdom-Kin_10 | TFOCUS | "2025-02-11T18:52:26Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-02-11T17:08:38Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
2010b9/whisper-tiny-finetuned-gtzan | 2010b9 | "2024-01-08T14:35:12Z" | 89 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | audio-classification | "2024-01-08T14:09:55Z" | ---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: whisper-tiny-finetuned-gtzan
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: GTZAN
type: marsyas/gtzan
config: all
split: train
args: all
metrics:
- name: Accuracy
type: accuracy
value: 0.91
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-tiny-finetuned-gtzan
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6410
- Accuracy: 0.91
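A minimal classification sketch (the clip path is a placeholder):
```python
from transformers import pipeline

classifier = pipeline("audio-classification", model="2010b9/whisper-tiny-finetuned-gtzan")
print(classifier("path/to/clip.wav"))  # top genre predictions with scores
```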
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.4155 | 1.0 | 225 | 1.1018 | 0.72 |
| 0.7952 | 2.0 | 450 | 1.0331 | 0.68 |
| 0.5194 | 3.0 | 675 | 0.5951 | 0.82 |
| 0.7917 | 4.0 | 900 | 0.5434 | 0.87 |
| 0.0147 | 5.0 | 1125 | 0.6317 | 0.86 |
| 0.006 | 6.0 | 1350 | 0.8828 | 0.85 |
| 0.0015 | 7.0 | 1575 | 0.7362 | 0.87 |
| 0.0009 | 8.0 | 1800 | 0.6188 | 0.91 |
| 0.0008 | 9.0 | 2025 | 0.6527 | 0.91 |
| 0.0007 | 10.0 | 2250 | 0.6410 | 0.91 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
yujiepan/FLUX.1-dev-tiny-random | yujiepan | "2024-12-01T18:29:25Z" | 30 | 0 | diffusers | [
"diffusers",
"safetensors",
"endpoints_compatible",
"diffusers:FluxPipeline",
"region:us"
] | text-to-image | "2024-12-01T18:23:42Z" | ---
library_name: diffusers
---
# yujiepan/FLUX.1-dev-tiny-random
This pipeline is intended for debugging. It is adapted from [black-forest-labs/FLUX.1-dev](https://huggingface.co/black-forest-labs/FLUX.1-dev) with smaller size and randomly initialized parameters.
## Usage
```python
import torch
from diffusers import FluxPipeline
pipe = FluxPipeline.from_pretrained("yujiepan/FLUX.1-dev-tiny-random", torch_dtype=torch.bfloat16)
pipe.enable_model_cpu_offload() #save some VRAM by offloading the model to CPU. Remove this if you have enough GPU power
prompt = "A cat holding a sign that says hello world"
image = pipe(
prompt,
height=1024,
width=1024,
guidance_scale=3.5,
num_inference_steps=50,
max_sequence_length=512,
generator=torch.Generator("cpu").manual_seed(0)
).images[0]
# image.save("flux-dev.png")
```
## Codes
```python
import importlib
import torch
import transformers
import diffusers
import rich
def get_original_model_configs(
pipeline_cls: type[diffusers.FluxPipeline],
pipeline_id: str
):
pipeline_config: dict[str, list[str]] = \
pipeline_cls.load_config(pipeline_id)
model_configs = {}
for subfolder, import_strings in pipeline_config.items():
if subfolder.startswith("_"):
continue
module = importlib.import_module(".".join(import_strings[:-1]))
cls = getattr(module, import_strings[-1])
if issubclass(cls, transformers.PreTrainedModel):
config_class: transformers.PretrainedConfig = cls.config_class
config = config_class.from_pretrained(
pipeline_id, subfolder=subfolder)
model_configs[subfolder] = config
elif issubclass(cls, diffusers.ModelMixin) and issubclass(cls, diffusers.ConfigMixin):
config = cls.load_config(pipeline_id, subfolder=subfolder)
model_configs[subfolder] = config
elif subfolder in ['scheduler', 'tokenizer', 'tokenizer_2', 'tokenizer_3']:
pass
else:
raise NotImplementedError(f"unknown {subfolder}: {import_strings}")
return model_configs
def load_pipeline(pipeline_cls: type[diffusers.DiffusionPipeline], pipeline_id: str, model_configs: dict[str, dict]):
    pipeline_config: dict[str, list[str]] = pipeline_cls.load_config(pipeline_id)
components = {}
for subfolder, import_strings in pipeline_config.items():
if subfolder.startswith("_"):
continue
module = importlib.import_module(".".join(import_strings[:-1]))
cls = getattr(module, import_strings[-1])
print(f"Loading:", ".".join(import_strings))
if issubclass(cls, transformers.PreTrainedModel):
config = model_configs[subfolder]
component = cls(config)
elif issubclass(cls, transformers.PreTrainedTokenizerBase):
component = cls.from_pretrained(pipeline_id, subfolder=subfolder)
elif issubclass(cls, diffusers.ModelMixin) and issubclass(cls, diffusers.ConfigMixin):
config = model_configs[subfolder]
component = cls.from_config(config)
elif issubclass(cls, diffusers.SchedulerMixin) and issubclass(cls, diffusers.ConfigMixin):
component = cls.from_pretrained(pipeline_id, subfolder=subfolder)
else:
raise (f"unknown {subfolder}: {import_strings}")
components[subfolder] = component
if 'transformer' in component.__class__.__name__.lower():
print(component)
pipeline = pipeline_cls(**components)
return pipeline
def get_pipeline():
torch.manual_seed(42)
pipeline_id = "black-forest-labs/FLUX.1-dev"
pipeline_cls = diffusers.FluxPipeline
model_configs = get_original_model_configs(pipeline_cls, pipeline_id)
HIDDEN_SIZE = 8
model_configs["text_encoder"].hidden_size = HIDDEN_SIZE
model_configs["text_encoder"].intermediate_size = HIDDEN_SIZE * 2
model_configs["text_encoder"].num_attention_heads = 2
model_configs["text_encoder"].num_hidden_layers = 2
model_configs["text_encoder"].projection_dim = HIDDEN_SIZE
model_configs["text_encoder_2"].d_model = HIDDEN_SIZE
model_configs["text_encoder_2"].d_ff = HIDDEN_SIZE * 2
model_configs["text_encoder_2"].d_kv = HIDDEN_SIZE // 2
model_configs["text_encoder_2"].num_heads = 2
model_configs["text_encoder_2"].num_layers = 2
model_configs["transformer"]["num_layers"] = 2
model_configs["transformer"]["num_single_layers"] = 4
model_configs["transformer"]["num_attention_heads"] = 2
model_configs["transformer"]["attention_head_dim"] = HIDDEN_SIZE
model_configs["transformer"]["pooled_projection_dim"] = HIDDEN_SIZE
model_configs["transformer"]["joint_attention_dim"] = HIDDEN_SIZE
model_configs["transformer"]["axes_dims_rope"] = (4, 2, 2)
# model_configs["transformer"]["caption_projection_dim"] = HIDDEN_SIZE
model_configs["vae"]["layers_per_block"] = 1
model_configs["vae"]["block_out_channels"] = [HIDDEN_SIZE] * 4
model_configs["vae"]["norm_num_groups"] = 2
model_configs["vae"]["latent_channels"] = 16
pipeline = load_pipeline(pipeline_cls, pipeline_id, model_configs)
return pipeline
pipe = get_pipeline()
pipe = pipe.to(torch.bfloat16)

from pathlib import Path

save_folder = '/tmp/yujiepan/FLUX.1-dev-tiny-random'
Path(save_folder).mkdir(parents=True, exist_ok=True)
pipe.save_pretrained(save_folder)

pipe = diffusers.FluxPipeline.from_pretrained(save_folder, torch_dtype=torch.bfloat16)
pipe.enable_model_cpu_offload()

prompt = "A cat holding a sign that says hello world"
image = pipe(
    prompt,
    height=1024,
    width=1024,
    guidance_scale=3.5,
    num_inference_steps=50,
    max_sequence_length=512,
    generator=torch.Generator("cpu").manual_seed(0),
).images[0]

configs = get_original_model_configs(diffusers.FluxPipeline, save_folder)
rich.print(configs)

pipe.push_to_hub(save_folder.removeprefix('/tmp/'))
```
|
Triangle104/Bellatrix-Tiny-3B-R1-Q5_K_M-GGUF | Triangle104 | "2025-02-02T04:07:31Z" | 24 | 0 | transformers | [
"transformers",
"gguf",
"trl",
"llama3.2",
"Reinforcement learning",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:prithivMLmods/Bellatrix-Tiny-3B-R1",
"base_model:quantized:prithivMLmods/Bellatrix-Tiny-3B-R1",
"license:llama3.2",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-02-02T04:06:18Z" | ---
license: llama3.2
language:
- en
base_model: prithivMLmods/Bellatrix-Tiny-3B-R1
library_name: transformers
tags:
- trl
- llama3.2
- Reinforcement learning
- llama-cpp
- gguf-my-repo
---
# Triangle104/Bellatrix-Tiny-3B-R1-Q5_K_M-GGUF
This model was converted to GGUF format from [`prithivMLmods/Bellatrix-Tiny-3B-R1`](https://huggingface.co/prithivMLmods/Bellatrix-Tiny-3B-R1) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/prithivMLmods/Bellatrix-Tiny-3B-R1) for more details on the model.
---
Bellatrix is a reasoning-focused model trained on DeepSeek-R1 synthetic dataset entries. The pipeline's instruction-tuned, text-only models are optimized for multilingual dialogue use cases, including agentic retrieval and summarization tasks, and outperform many of the available open-source options. Bellatrix is an auto-regressive language model that uses an optimized transformer architecture; the tuned versions apply supervised fine-tuning (SFT) and reinforcement learning from human feedback (RLHF).
## Use with transformers
Starting with `transformers` >= 4.43.0, you can run conversational inference using the Transformers pipeline abstraction or by leveraging the Auto classes with the `generate()` function.
Make sure to update your transformers installation:
```bash
pip install --upgrade transformers
```
```python
import torch
from transformers import pipeline

model_id = "prithivMLmods/Bellatrix-Tiny-3B-R1"
pipe = pipeline(
    "text-generation",
    model=model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]
outputs = pipe(
    messages,
    max_new_tokens=256,
)
print(outputs[0]["generated_text"][-1])
```
Note: You can also find detailed recipes on how to use the model locally, with torch.compile(), assisted generations, quantization, and more at huggingface-llama-recipes.
## Intended Use
Bellatrix is designed for applications that require advanced reasoning and multilingual dialogue capabilities. It is particularly suitable for:
- **Agentic Retrieval:** Enabling intelligent retrieval of relevant information in a dialogue or query-response system.
- **Summarization Tasks:** Condensing large bodies of text into concise summaries for easier comprehension.
- **Multilingual Use Cases:** Supporting conversations in multiple languages with high accuracy and coherence.
- **Instruction-Based Applications:** Following complex, context-aware instructions to generate precise outputs in a variety of scenarios.
## Limitations
Despite its capabilities, Bellatrix has some limitations:
- **Domain Specificity:** While it performs well on general tasks, its performance may degrade with highly specialized or niche datasets.
- **Dependence on Training Data:** It is only as good as the quality and diversity of its training data, which may lead to biases or inaccuracies.
- **Computational Resources:** The model's optimized transformer architecture can be resource-intensive, requiring significant computational power for fine-tuning and inference.
- **Language Coverage:** While multilingual, some languages or dialects may have limited support or lower performance compared to widely used ones.
- **Real-World Contexts:** It may struggle with understanding nuanced or ambiguous real-world scenarios not covered during training.
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Bellatrix-Tiny-3B-R1-Q5_K_M-GGUF --hf-file bellatrix-tiny-3b-r1-q5_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Bellatrix-Tiny-3B-R1-Q5_K_M-GGUF --hf-file bellatrix-tiny-3b-r1-q5_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Bellatrix-Tiny-3B-R1-Q5_K_M-GGUF --hf-file bellatrix-tiny-3b-r1-q5_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Bellatrix-Tiny-3B-R1-Q5_K_M-GGUF --hf-file bellatrix-tiny-3b-r1-q5_k_m.gguf -c 2048
```
|
unsloth/granite-3.2-8b-instruct-unsloth-bnb-4bit | unsloth | "2025-03-05T03:06:31Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"granite",
"text-generation",
"language",
"granite-3.2",
"ganite",
"conversational",
"arxiv:0000.00000",
"base_model:ibm-granite/granite-3.2-8b-instruct",
"base_model:quantized:ibm-granite/granite-3.2-8b-instruct",
"license:apache-2.0",
"autotrain_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2025-03-04T08:48:39Z" | ---
pipeline_tag: text-generation
inference: false
license: apache-2.0
library_name: transformers
tags:
- language
- granite-3.2
- ganite
base_model:
- ibm-granite/granite-3.2-8b-instruct
---
# Granite-3.2-8B-Instruct
**Model Summary:**
Granite-3.2-8B-Instruct is an 8-billion-parameter, long-context AI model fine-tuned for thinking capabilities. Built on top of [Granite-3.1-8B-Instruct](https://huggingface.co/ibm-granite/granite-3.1-8b-instruct), it has been trained using a mix of permissively licensed open-source datasets and internally generated synthetic data designed for reasoning tasks. The model allows controllability of its thinking capability, ensuring it is applied only when required.
- **Developers:** Granite Team, IBM
- **Website**: [Granite Docs](https://www.ibm.com/granite/docs/)
- **Release Date**: February 26th, 2025
- **License:** [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0)
**Supported Languages:**
English, German, Spanish, French, Japanese, Portuguese, Arabic, Czech, Italian, Korean, Dutch, and Chinese. However, users may finetune this Granite model for languages beyond these 12 languages.
**Intended Use:**
This model is designed to handle general instruction-following tasks and can be integrated into AI assistants across various domains, including business applications.
**Capabilities**
* **Thinking**
* Summarization
* Text classification
* Text extraction
* Question-answering
* Retrieval Augmented Generation (RAG)
* Code related tasks
* Function-calling tasks
* Multilingual dialog use cases
* Long-context tasks including long document/meeting summarization, long document QA, etc.
**Generation:**
This is a simple example of how to use Granite-3.2-8B-Instruct model.
Install the following libraries:
```shell
pip install torch torchvision torchaudio
pip install accelerate
pip install transformers
```
Then, copy the snippet from the section that is relevant for your use case.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, set_seed
import torch

model_path = "ibm-granite/granite-3.2-8b-instruct"
device = "cuda"
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    device_map=device,
    torch_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained(
    model_path
)

conv = [{"role": "user", "content": "You have 10 liters of a 30% acid solution. How many liters of a 70% acid solution must be added to achieve a 50% acid mixture?"}]

input_ids = tokenizer.apply_chat_template(conv, return_tensors="pt", thinking=True, return_dict=True, add_generation_prompt=True).to(device)

set_seed(42)
output = model.generate(
    **input_ids,
    max_new_tokens=8192,
)

prediction = tokenizer.decode(output[0, input_ids["input_ids"].shape[1]:], skip_special_tokens=True)
print(prediction)
```
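To get a direct answer without the intermediate reasoning trace, the same chat template can be rendered with `thinking=False` — a minimal sketch reusing the variables from the snippet above:
```python
# Assumes `tokenizer`, `conv`, and `device` from the previous snippet.
input_ids = tokenizer.apply_chat_template(
    conv, return_tensors="pt", thinking=False, return_dict=True, add_generation_prompt=True
).to(device)
```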
**Example Outputs**
- thinking=True
```md
Here is my thought process:
1. We know we start with 10 liters of a 30% acid solution, so there's 3 liters of pure acid in it (10 liters * 0.3).
2. We need to determine how much of the 70% acid solution to add such that the final mixture is 50% acid.
3. Let's say x liters of the 70% acid solution are added. This will contribute 0.7x liters of pure acid.
4. The total volume of the new mixture will be 10 + x liters, and the total amount of pure acid will be 3 + 0.7x liters.
5. We need to set up an equation where (3 + 0.7x) / (10 + x) = 0.5 (since we want a 50% acid mixture).
6. Solving this equation for x will give us the volume of 70% solution needed.
Here is my response:
To solve this, we need to set up an equation based on the concentration of acid before and after mixing.
1. **Initial Acid Content**:
- We start with 10 liters of a 30% acid solution.
- Pure acid in this = 10 * 0.3 = 3 liters.
2. **Variable for Addition**:
- Let x be the liters of 70% acid solution to be added.
- Pure acid in x liters of 70% solution = 0.7x liters.
3. **Final Mixture Concentration**:
- Total volume of mixture = 10 + x liters.
- Total pure acid in the mixture = 3 (from initial solution) + 0.7x (from added solution).
4. **Setting up the equation for 50% acid concentration**:
- We require that the ratio of pure acid to total volume is 0.5 (50%).
- Therefore, the equation is:
\[
\frac{3 + 0.7x}{10 + x} = 0.5
\]
5. **Solving the equation**:
Multiply both sides by (10 + x) to clear the fraction:
\[
3 + 0.7x = 0.5(10 + x)
\]
Expanding the right side:
\[
3 + 0.7x = 5 + 0.5x
\]
Subtract 0.5x from both sides:
\[
3 + 0.2x = 5
\]
Subtract 3 from both sides:
\[
0.2x = 2
\]
Divide by 0.2:
\[
x = 10
\]
So, 10 liters of the 70% acid solution must be added to the initial 10 liters of 30% solution to get a 50% acid mixture.
Let's verify:
- Initial acid: 3 liters
- Acid from added solution: 0.7 * 10 = 7 liters
- Total acid: 3 + 7 = 10 liters
- Total volume: 10 + 10 = 20 liters
- Acid percentage: \(\frac{10}{20} = 0.5 = 50\%\)
This confirms our solution.
Final answer: 10 liters of 70% acid solution should be added.
```
- thinking=False
```md
Let's denote the amount of 70% acid solution to be added as x liters.
We start with 10 liters of a 30% acid solution, which means there are 0.3 * 10 = 3 liters of pure acid in it.
We want to end up with a 50% acid solution after adding x liters of 70% acid solution. The total volume of the mixture will be 10 + x liters, and we want half of this volume to be pure acid.
So, the amount of pure acid in the final mixture should be 0.5 * (10 + x) liters.
The amount of pure acid in the final mixture comes from two sources: the original 30% solution and the added 70% solution. So, we can set up the following equation:
3 (from the original solution) + 0.7x (from the added solution) = 0.5 * (10 + x)
Now, let's solve for x:
3 + 0.7x = 5 + 0.5x
0.7x - 0.5x = 5 - 3
0.2x = 2
x = 2 / 0.2
x = 10
So, you need to add 10 liters of a 70% acid solution to the 10 liters of a 30% acid solution to get a 50% acid mixture.
```
**Evaluation Results:**
<table>
<thead>
<tr>
<th style="text-align:left; background-color: #001d6c; color: white;">Models</th>
<th style="text-align:center; background-color: #001d6c; color: white;">ArenaHard</th>
<th style="text-align:center; background-color: #001d6c; color: white;">Alpaca-Eval-2</th>
<th style="text-align:center; background-color: #001d6c; color: white;">MMLU</th>
<th style="text-align:center; background-color: #001d6c; color: white;">PopQA</th>
<th style="text-align:center; background-color: #001d6c; color: white;">TruthfulQA</th>
<th style="text-align:center; background-color: #001d6c; color: white;">BigBenchHard</th>
<th style="text-align:center; background-color: #001d6c; color: white;">DROP</th>
<th style="text-align:center; background-color: #001d6c; color: white;">GSM8K</th>
<th style="text-align:center; background-color: #001d6c; color: white;">HumanEval</th>
<th style="text-align:center; background-color: #001d6c; color: white;">HumanEval+</th>
<th style="text-align:center; background-color: #001d6c; color: white;">IFEval</th>
<th style="text-align:center; background-color: #001d6c; color: white;">AttaQ</th>
</tr></thead>
<tbody>
<tr>
<td style="text-align:left; background-color: #DAE8FF; color: black;">Llama-3.1-8B-Instruct</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">36.43</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">27.22</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">69.15</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">28.79</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">52.79</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">72.66</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">61.48</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">83.24</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">85.32</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">80.15</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">79.10</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">83.43</td>
</tr>
<tr>
<td style="text-align:left; background-color: #DAE8FF; color: black;">DeepSeek-R1-Distill-Llama-8B</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">17.17</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">21.85</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">45.80</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">13.25</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">47.43</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">65.71</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">44.46</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">72.18</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">67.54</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">62.91</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">66.50</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">42.87</td>
</tr>
<tr>
<td style="text-align:left; background-color: #DAE8FF; color: black;">Qwen-2.5-7B-Instruct</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">25.44</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">30.34</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">74.30</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">18.12</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">63.06</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">70.40</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">54.71</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">84.46</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">93.35</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">89.91</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">74.90</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">81.90</td>
</tr>
<tr>
<td style="text-align:left; background-color: #DAE8FF; color: black;">DeepSeek-R1-Distill-Qwen-7B</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">10.36</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">15.35</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">50.72</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">9.94</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">47.14</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">65.04</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">42.76</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">78.47</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">79.89</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">78.43</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">59.10</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">42.45</td>
</tr>
<tr>
<td style="text-align:left; background-color: #DAE8FF; color: black;">Granite-3.1-8B-Instruct</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">37.58</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">30.34</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">66.77</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">28.7</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">65.84</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">68.55</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">50.78</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">79.15</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">89.63</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">85.79</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">73.20</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">85.73</td>
</tr>
<tr>
<td style="text-align:left; background-color: #DAE8FF; color: black;">Granite-3.1-2B-Instruct</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">23.3</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">27.17</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">57.11</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">20.55</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">59.79</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">54.46</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">18.68</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">67.55</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">79.45</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">75.26</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">63.59</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">84.7</td>
</tr>
<tr>
<td style="text-align:left; background-color: #DAE8FF; color: black;">Granite-3.2-2B-Instruct</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">24.86</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">34.51</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">57.18</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">20.56</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">59.8</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">52.27</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">21.12</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">67.02</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">80.13</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">73.39</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">61.55</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">83.23</td>
</tr>
<tr>
<td style="text-align:left; background-color: #DAE8FF; color: black;"><b>Granite-3.2-8B-Instruct</b></td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">55.25</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">61.19</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">66.79</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">28.04</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">66.92</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">64.77</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">50.95</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">81.65</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">89.35</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">85.72</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">74.31</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">85.42</td>
</tr>
</tbody></table>
**Training Data:**
Overall, our training data is largely composed of two key sources: (1) publicly available datasets with permissive licenses, and (2) internally generated synthetic data targeted at enhancing reasoning capabilities.
<!-- A detailed attribution of datasets can be found in [Granite 3.2 Technical Report (coming soon)](#), and [Accompanying Author List](https://github.com/ibm-granite/granite-3.0-language-models/blob/main/author-ack.pdf). -->
**Infrastructure:**
We train Granite-3.2-8B-Instruct using IBM's super computing cluster, Blue Vela, which is outfitted with NVIDIA H100 GPUs. This cluster provides a scalable and efficient infrastructure for training our models over thousands of GPUs.
**Ethical Considerations and Limitations:**
Granite-3.2-8B-Instruct builds upon Granite-3.1-8B-Instruct, leveraging both permissively licensed open-source and select proprietary data for enhanced performance. Since it inherits its foundation from the previous model, all ethical considerations and limitations applicable to [Granite-3.1-8B-Instruct](https://huggingface.co/ibm-granite/granite-3.1-8b-instruct) remain relevant.
**Resources**
- ⭐️ Learn about the latest updates with Granite: https://www.ibm.com/granite
- 📄 Get started with tutorials, best practices, and prompt engineering advice: https://www.ibm.com/granite/docs/
- 💡 Learn about the latest Granite learning resources: https://ibm.biz/granite-learning-resources
<!-- ## Citation
```
@misc{granite-models,
author = {author 1, author2, ...},
title = {},
journal = {},
volume = {},
year = {2024},
url = {https://arxiv.org/abs/0000.00000},
}
``` --> |
SteveImmanuel/ViTMAE-muc-streetview | SteveImmanuel | "2024-04-26T06:03:43Z" | 61 | 0 | transformers | [
"transformers",
"safetensors",
"vit_mae",
"pretraining",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-04-26T06:02:39Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
bullerwins/QVQ-72B-Preview-exl2_6.0bpw | bullerwins | "2024-12-26T09:00:50Z" | 13 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2_vl",
"image-text-to-text",
"chat",
"conversational",
"en",
"arxiv:2409.12191",
"base_model:Qwen/QVQ-72B-Preview",
"base_model:quantized:Qwen/QVQ-72B-Preview",
"license:other",
"text-generation-inference",
"endpoints_compatible",
"6-bit",
"exl2",
"region:us"
] | image-text-to-text | "2024-12-26T08:45:51Z" | ---
license: other
license_name: qwen
license_link: https://huggingface.co/Qwen/QVQ-72B-Preview/blob/main/LICENSE
language:
- en
pipeline_tag: image-text-to-text
base_model: Qwen/QVQ-72B-Preview
tags:
- chat
library_name: transformers
---
# QVQ-72B-Preview
## Introduction
**QVQ-72B-Preview** is an experimental research model developed by the Qwen team, focusing on enhancing visual reasoning capabilities.
## Performance
| | **QVQ-72B-Preview** | o1-2024-12-17 | gpt-4o-2024-05-13 | Claude3.5 Sonnet-20241022 | Qwen2VL-72B |
|----------------|-----------------|---------------|-------------------|----------------------------|-------------|
| MMMU(val) | 70.3 | 77.3 | 69.1 | 70.4 | 64.5 |
| MathVista(mini) | 71.4 | 71.0 | 63.8 | 65.3 | 70.5 |
| MathVision(full) | 35.9 | – | 30.4 | 35.6 | 25.9 |
| OlympiadBench | 20.4 | – | 25.9 | – | 11.2 |
**QVQ-72B-Preview** has achieved remarkable performance on various benchmarks. It scored a remarkable 70.3% on the Multimodal Massive Multi-task Understanding (MMMU) benchmark, showcasing QVQ's powerful ability in multidisciplinary understanding and reasoning. Furthermore, the significant improvements on MathVision highlight the model's progress in mathematical reasoning tasks. OlympiadBench also demonstrates the model's enhanced ability to tackle challenging problems.
***But It's Not All Perfect: Acknowledging the Limitations***
While **QVQ-72B-Preview** exhibits promising performance that surpasses expectations, it’s important to acknowledge several limitations:
1. **Language Mixing and Code-Switching:** The model might occasionally mix different languages or unexpectedly switch between them, potentially affecting the clarity of its responses.
2. **Recursive Reasoning Loops:** There's a risk of the model getting caught in recursive reasoning loops, leading to lengthy responses that may not even arrive at a final answer.
3. **Safety and Ethical Considerations:** Robust safety measures are needed to ensure reliable and safe performance. Users should exercise caution when deploying this model.
4. **Performance and Benchmark Limitations:** Despite the improvements in visual reasoning, QVQ doesn’t entirely replace the capabilities of Qwen2-VL-72B. During multi-step visual reasoning, the model might gradually lose focus on the image content, leading to hallucinations. Moreover, QVQ doesn’t show significant improvement over Qwen2-VL-72B in basic recognition tasks like identifying people, animals, or plants.
Note: Currently, the model only supports single-round dialogues and image inputs; it does not support video inputs.
## Quickstart
We offer a toolkit to help you handle various types of visual input more conveniently. This includes base64, URLs, and interleaved images and videos. You can install it using the following command:
```bash
pip install qwen-vl-utils
```
Here we show a code snippet to show you how to use the chat model with `transformers` and `qwen_vl_utils`:
```python
from transformers import Qwen2VLForConditionalGeneration, AutoTokenizer, AutoProcessor
from qwen_vl_utils import process_vision_info

# default: Load the model on the available device(s)
model = Qwen2VLForConditionalGeneration.from_pretrained(
    "Qwen/QVQ-72B-Preview", torch_dtype="auto", device_map="auto"
)

# default processor
processor = AutoProcessor.from_pretrained("Qwen/QVQ-72B-Preview")

# The default range for the number of visual tokens per image in the model is 4-16384. You can set min_pixels and max_pixels according to your needs, such as a token count range of 256-1280, to balance speed and memory usage.
# min_pixels = 256*28*28
# max_pixels = 1280*28*28
# processor = AutoProcessor.from_pretrained("Qwen/QVQ-72B-Preview", min_pixels=min_pixels, max_pixels=max_pixels)

messages = [
    {
        "role": "system",
        "content": [
            {"type": "text", "text": "You are a helpful and harmless assistant. You are Qwen developed by Alibaba. You should think step-by-step."}
        ],
    },
    {
        "role": "user",
        "content": [
            {
                "type": "image",
                "image": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/QVQ/demo.png",
            },
            {"type": "text", "text": "What value should be filled in the blank space?"},
        ],
    }
]

# Preparation for inference
text = processor.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text],
    images=image_inputs,
    videos=video_inputs,
    padding=True,
    return_tensors="pt",
)
inputs = inputs.to("cuda")

# Inference: Generation of the output
generated_ids = model.generate(**inputs, max_new_tokens=8192)
generated_ids_trimmed = [
    out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
    generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
```
## Citation
If you find our work helpful, feel free to give us a cite.
```
@misc{qvq-72b-preview,
title = {QVQ: To See the World with Wisdom},
url = {https://qwenlm.github.io/blog/qvq-72b-preview/},
author = {Qwen Team},
month = {December},
year = {2024}
}
@article{Qwen2VL,
title={Qwen2-VL: Enhancing Vision-Language Model's Perception of the World at Any Resolution},
author={Wang, Peng and Bai, Shuai and Tan, Sinan and Wang, Shijie and Fan, Zhihao and Bai, Jinze and Chen, Keqin and Liu, Xuejing and Wang, Jialin and Ge, Wenbin and Fan, Yang and Dang, Kai and Du, Mengfei and Ren, Xuancheng and Men, Rui and Liu, Dayiheng and Zhou, Chang and Zhou, Jingren and Lin, Junyang},
journal={arXiv preprint arXiv:2409.12191},
year={2024}
}
``` |
e-n-v-y/envy-zoom-slider-xl-01 | e-n-v-y | "2023-11-26T21:52:45Z" | 57 | 2 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"anime",
"concept",
"full body",
"portrait",
"close-up",
"photo",
"woman",
"man",
"realistic",
"closeup",
"close up",
"fullbody",
"zoom",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:other",
"region:us"
] | text-to-image | "2023-11-26T21:52:44Z" | ---
license: other
license_name: bespoke-lora-trained-license
license_link: https://multimodal.art/civitai-licenses?allowNoCredit=True&allowCommercialUse=Sell&allowDerivatives=True&allowDifferentLicense=True
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
- anime
- concept
- full body
- portrait
- close-up
- photo
- woman
- man
- realistic
- closeup
- close up
- fullbody
- zoom
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt:
widget:
- text: 'rough digital painting, (full body:1.2), 1boy, man, handsome, solo, [:fantastical costume design,:0.2] warlord, chinese, pink hair, (solid:1) build, natural sunlight, interior, in a Palewave Amazonian tribal village'
output:
url: >-
3949708.jpeg
---
# Envy Zoom Slider XL 01
<Gallery />
## Model description
<p>Zooms in and out on character portraits. I've tried to balance it so that 1.0 is zoomed in close to the character's face, and -1.0 is a full body shot, however, your results may vary depending on your subject and what checkpoint you're using. I've had generally good luck up to +/-1.25 and even more in some cases.</p><p>This works with both anime and photorealistic generations. It was trained with humans in mind, and has not been tested with animals or scenery, so YMMV.</p>
## Download model
Weights for this model are available in Safetensors format.
[Download](/e-n-v-y/envy-zoom-slider-xl-01/tree/main) them in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('e-n-v-y/envy-zoom-slider-xl-01', weight_name='EnvyZoomSliderXL01.safetensors')
image = pipeline('rough digital painting, (full body:1.2), 1boy, man, handsome, solo, [:fantastical costume design,:0.2] warlord, chinese, pink hair, (solid:1) build, natural sunlight, interior, in a Palewave Amazonian tribal village').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
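The zoom strength itself is steered by the LoRA weight. A minimal sketch, assuming the `set_adapters` API of recent diffusers releases (the adapter name is illustrative; the +/-1.0 range follows the model description above):
```py
# Load the slider under an explicit adapter name, then scale it.
pipeline.load_lora_weights('e-n-v-y/envy-zoom-slider-xl-01', weight_name='EnvyZoomSliderXL01.safetensors', adapter_name='zoom')
pipeline.set_adapters(['zoom'], adapter_weights=[-1.0])  # ~full body; +1.0 zooms in close to the face
```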
|
nateraw/bert-base-uncased-ag-news | nateraw | "2021-09-22T09:28:21Z" | 26 | 3 | transformers | [
"transformers",
"pytorch",
"jax",
"bert",
"text-classification",
"ag_news",
"en",
"dataset:ag_news",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2022-03-02T23:29:05Z" | ---
language:
- en
thumbnail: https://avatars3.githubusercontent.com/u/32437151?s=460&u=4ec59abc8d21d5feea3dab323d23a5860e6996a4&v=4
tags:
- text-classification
- ag_news
- pytorch
license: mit
datasets:
- ag_news
metrics:
- accuracy
---
# bert-base-uncased-ag-news
## Model description
`bert-base-uncased` finetuned on the AG News dataset using PyTorch Lightning. Sequence length 128, learning rate 2e-5, batch size 32, 4 T4 GPUs, 4 epochs. [The code can be found here](https://github.com/nateraw/hf-text-classification)
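A minimal usage sketch with the 🤗 `pipeline` API (the example sentence is illustrative):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="nateraw/bert-base-uncased-ag-news")
print(classifier("Stocks rallied after the central bank signaled a pause in rate hikes."))
```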
#### Limitations and bias
- Not the best model...
## Training data
Data came from HuggingFace's `datasets` package. The data can be viewed [on nlp viewer](https://huggingface.co/nlp/viewer/?dataset=ag_news).
## Training procedure
...
## Eval results
... |
di-RECT-Sophie-Rain-Spiderman-Update-Leaks/Sophie.Rain.Spider-Man.Official.New.Video.Tutorial | di-RECT-Sophie-Rain-Spiderman-Update-Leaks | "2025-03-23T18:16:20Z" | 0 | 0 | null | [
"region:us"
] | null | "2025-03-23T18:16:08Z" | <div>
<p><a rel="nofollow" href="https://leaked-videos.com/?v=Sophie+Rain+Spiderman">🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐖𝐚𝐭𝐜𝐡 𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨)</a></p>
<p><a rel="nofollow" href="https://leaked-videos.com/?v=Sophie+Rain+Spiderman">🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨 𝐋𝐢𝐧𝐤 )</a></p>
<p><a rel="nofollow" href="https://leaked-videos.com/?v=Sophie+Rain+Spiderman"><img src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif" alt="fsd"></a></p>
</div> |
sumo43/zephyr-7b-sft-full | sumo43 | "2024-03-23T12:37:09Z" | 7 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bit_llama",
"text-generation",
"trl",
"sft",
"generated_from_trainer",
"conversational",
"dataset:generator",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-03-21T00:24:07Z" | ---
tags:
- trl
- sft
- generated_from_trainer
datasets:
- generator
model-index:
- name: zephyr-7b-sft-full
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# zephyr-7b-sft-full
This model is a fine-tuned version of [](https://huggingface.co/) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2164
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
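As a rough sketch, these settings map onto the 🤗 `TrainingArguments` API as follows (argument names are assumed from the Trainer API, not taken from the original training script):
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="zephyr-7b-sft-full",  # assumed name
    learning_rate=1e-3,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="constant",
    warmup_ratio=0.1,
    num_train_epochs=1,
)
```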
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.249 | 1.0 | 8969 | 2.2164 |
### Framework versions
- Transformers 4.39.0.dev0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.15.2
|
DanielVelaJ/bert-base-uncased-issues-128 | DanielVelaJ | "2023-03-22T02:32:46Z" | 4 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2023-03-22T01:18:50Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-base-uncased-issues-128
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-issues-128
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2456
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 16
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.0986 | 1.0 | 291 | 1.6929 |
| 1.6401 | 2.0 | 582 | 1.4304 |
| 1.4881 | 3.0 | 873 | 1.3916 |
| 1.4 | 4.0 | 1164 | 1.3796 |
| 1.3416 | 5.0 | 1455 | 1.2012 |
| 1.2807 | 6.0 | 1746 | 1.2733 |
| 1.2396 | 7.0 | 2037 | 1.2646 |
| 1.1993 | 8.0 | 2328 | 1.2098 |
| 1.1661 | 9.0 | 2619 | 1.1862 |
| 1.1406 | 10.0 | 2910 | 1.2223 |
| 1.1294 | 11.0 | 3201 | 1.2056 |
| 1.1042 | 12.0 | 3492 | 1.1655 |
| 1.0827 | 13.0 | 3783 | 1.2525 |
| 1.0738 | 14.0 | 4074 | 1.1685 |
| 1.0626 | 15.0 | 4365 | 1.1182 |
| 1.0629 | 16.0 | 4656 | 1.2456 |
### Framework versions
- Transformers 4.27.2
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
|
dwitidibyajyoti/layoutmlv3_thursday_sep7_v5 | dwitidibyajyoti | "2023-09-07T10:06:18Z" | 78 | 0 | transformers | [
"transformers",
"pytorch",
"layoutlmv3",
"token-classification",
"generated_from_trainer",
"base_model:microsoft/layoutlmv3-base",
"base_model:finetune:microsoft/layoutlmv3-base",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2023-09-07T10:05:58Z" | ---
license: cc-by-nc-sa-4.0
base_model: microsoft/layoutlmv3-base
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: layoutmlv3_thursday_sep7_v5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutmlv3_thursday_sep7_v5
This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1416
- Precision: 0.5517
- Recall: 0.9412
- F1: 0.6957
- Accuracy: 0.9822
## Model description
More information needed
## Intended uses & limitations
More information needed
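For orientation, a minimal token-classification sketch (assumes the processor is bundled with the checkpoint — otherwise load it from `microsoft/layoutlmv3-base` — and uses a hypothetical local document image; the default processor applies Tesseract OCR, so `pytesseract` must be installed):
```python
from PIL import Image
from transformers import AutoProcessor, LayoutLMv3ForTokenClassification

model_id = "dwitidibyajyoti/layoutmlv3_thursday_sep7_v5"
processor = AutoProcessor.from_pretrained(model_id, apply_ocr=True)
model = LayoutLMv3ForTokenClassification.from_pretrained(model_id)

image = Image.open("document.png").convert("RGB")  # hypothetical scan
encoding = processor(image, return_tensors="pt")
predictions = model(**encoding).logits.argmax(-1)
```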
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 1000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 8.33 | 100 | 0.3243 | 0.5556 | 0.8824 | 0.6818 | 0.9485 |
| No log | 16.67 | 200 | 0.1584 | 0.6087 | 0.8235 | 0.7 | 0.9734 |
| No log | 25.0 | 300 | 0.1682 | 0.5517 | 0.9412 | 0.6957 | 0.9769 |
| No log | 33.33 | 400 | 0.1773 | 0.4545 | 0.8824 | 0.6 | 0.9734 |
| 0.2633 | 41.67 | 500 | 0.1631 | 0.4375 | 0.8235 | 0.5714 | 0.9751 |
| 0.2633 | 50.0 | 600 | 0.1526 | 0.5517 | 0.9412 | 0.6957 | 0.9769 |
| 0.2633 | 58.33 | 700 | 0.1430 | 0.5517 | 0.9412 | 0.6957 | 0.9840 |
| 0.2633 | 66.67 | 800 | 0.1497 | 0.5517 | 0.9412 | 0.6957 | 0.9822 |
| 0.2633 | 75.0 | 900 | 0.1418 | 0.5517 | 0.9412 | 0.6957 | 0.9805 |
| 0.0111 | 83.33 | 1000 | 0.1416 | 0.5517 | 0.9412 | 0.6957 | 0.9822 |
### Framework versions
- Transformers 4.34.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
nlokam99/ada_sample_2 | nlokam99 | "2022-06-12T17:40:42Z" | 105 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2022-06-12T17:38:56Z" | ---
thumbnail: https://huggingface.co/front/thumbnails/dialogpt.png
tags:
- conversational
license: mit
--- |
kanze2/my_awesome_model | kanze2 | "2025-03-30T15:19:32Z" | 0 | 0 | null | [
"region:us"
] | null | "2025-03-30T15:19:32Z" |  |
startanalytics/autotrain-melanoma-vit-v1 | startanalytics | "2025-03-28T15:20:29Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"autotrain",
"base_model:UnipaPolitoUnimore/vit-large-patch32-384-melanoma",
"base_model:finetune:UnipaPolitoUnimore/vit-large-patch32-384-melanoma",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2025-03-28T12:54:40Z" |
---
tags:
- autotrain
- transformers
- image-classification
base_model: UnipaPolitoUnimore/vit-large-patch32-384-melanoma
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
# Model Trained Using AutoTrain
- Problem type: Image Classification
## Validation Metrics
loss: 0.4700036942958832
f1_macro: 0.8440663933220411
f1_micro: 0.9007633587786259
f1_weighted: 0.8993592040241694
precision_macro: 0.8985460933094256
precision_micro: 0.9007633587786259
precision_weighted: 0.9004997984022535
recall_macro: 0.8042197414881518
recall_micro: 0.9007633587786259
recall_weighted: 0.9007633587786259
accuracy: 0.9007633587786259
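A minimal inference sketch with the 🤗 `pipeline` API (the image path is illustrative):
```python
from transformers import pipeline

clf = pipeline("image-classification", model="startanalytics/autotrain-melanoma-vit-v1")
print(clf("lesion.jpg"))  # hypothetical dermoscopy image
```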
|
shaneali2k04/videomae-base-finetuned-ucf101-subset | shaneali2k04 | "2024-11-30T08:34:27Z" | 66 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"videomae",
"video-classification",
"generated_from_trainer",
"base_model:MCG-NJU/videomae-base",
"base_model:finetune:MCG-NJU/videomae-base",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | video-classification | "2024-11-27T21:09:36Z" | ---
library_name: transformers
license: cc-by-nc-4.0
base_model: MCG-NJU/videomae-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: videomae-base-finetuned-ucf101-subset
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# videomae-base-finetuned-ucf101-subset
This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3532
- Accuracy: 0.8903
## Model description
More information needed
## Intended uses & limitations
More information needed
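For orientation, a minimal video-classification sketch (the clip path is illustrative; the pipeline needs a video backend such as `decord` or `av` installed):
```python
from transformers import pipeline

video_cls = pipeline("video-classification", model="shaneali2k04/videomae-base-finetuned-ucf101-subset")
print(video_cls("archery.mp4"))  # hypothetical local clip
```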
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: AdamW (`adamw_torch`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 600
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.2945 | 0.25 | 150 | 1.1367 | 0.5143 |
| 0.8621 | 1.25 | 300 | 0.7921 | 0.7857 |
| 0.2108 | 2.25 | 450 | 0.5836 | 0.7857 |
| 0.0447 | 3.25 | 600 | 0.3807 | 0.8714 |
### Framework versions
- Transformers 4.46.2
- Pytorch 2.0.1+cu118
- Datasets 3.1.0
- Tokenizers 0.20.3
|
ccore/opt-1.3b-open-data-understanding | ccore | "2023-09-28T22:26:36Z" | 143 | 1 | transformers | [
"transformers",
"pytorch",
"opt",
"text-generation",
"generated_from_trainer",
"qa",
"open data",
"opt-1.3b",
"dataset:ccore/open_data_understanding",
"base_model:facebook/opt-1.3b",
"base_model:finetune:facebook/opt-1.3b",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-09-27T12:00:17Z" | ---
license: other
base_model: facebook/opt-1.3b
tags:
- generated_from_trainer
- qa
- open data
- opt
- opt-1.3b
metrics:
- accuracy
widget:
- text: |-
# [PAPER]
Pope John Paul II (Latin: Ioannes Paulus II; Italian: Giovanni Paolo II; Polish: Jan Paweł II; born Karol Józef Wojtyła [ˈkarɔl ˈjuzɛv vɔjˈtɨwa];[b] 18 May 1920 – 2 April 2005) was head of the Catholic Church and sovereign of the Vatican City State from 1978 until his death in 2005. He was later canonised as Pope Saint John Paul II. In his youth, Wojtyła dabbled in stage acting. He graduated with excellent grades from an all-boys high school in Wadowice, Poland, shortly before the start of World War II in 1938. During the war, to avoid being kidnapped and sent off to a German slave labor camp, he signed up for work in harsh conditions in a quarry. Wojtyła eventually took up acting and developed a love for the profession and participated at a local theater. The linguistically skilled Wojtyła wanted to study Polish at university. Encouraged by a conversation with Adam Stefan Sapieha, he decided to study theology and become a priest. Eventually, Wojtyła rose to the position of Archbishop of Kraków and then a cardinal, both positions held by his mentor. Wojtyła was elected pope on the third day of the second papal conclave of 1978 (becoming one of the youngest popes in history), which was called after John Paul I, who had been elected in the first papal conclave of 1978 earlier in August to succeed Pope Paul VI, died after 33 days. Wojtyła adopted the name of his predecessor in tribute to him.[20] John Paul II was the first non-Italian pope since Adrian VI in the 16th century, as well as the third-longest-serving pope in history after Pius IX and St. Peter. John Paul II attempted to improve the Catholic Church's relations with Judaism, Islam, and the Eastern Orthodox Church in the spirit of ecumenism, holding atheism as the greatest threat. He maintained the Church's previous positions on such matters as abortion, artificial contraception, the ordination of women, and a celibate clergy, and although he supported the reforms of the Second Vatican Council, he was seen as generally conservative in their interpretation.[21][22] He put emphasis on family and identity, while questioning consumerism, hedonism and the pursuit of wealth. He was one of the most travelled world leaders in history, visiting 129 countries during his pontificate. As part of his special emphasis on the universal call to holiness, he beatified 1,344,[23] and also canonised 483 people, more than the combined tally of his predecessors during the preceding five centuries. By the time of his death, he had named most of the College of Cardinals, consecrated or co-consecrated many of the world's bishops, and ordained many priests.[24] He has been credited with fighting against dictatorships for democracy and with helping to end Communist rule in his native Poland and the rest of Europe.[25] Under John Paul II, the Catholic Church greatly expanded its influence in Africa and Latin America, and retained its influence in Europe and the rest of the world.
## [UNDERSTANDING]
This section presents a brief account
datasets:
- ccore/open_data_understanding
pipeline_tag: text-generation
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# OPT_1.3b_open_data_understanding
## Description
This model has been trained to understand and respond to any content inserted after the `[PAPER]` tag. It uses advanced language modeling techniques to understand the context, structure, and underlying goals of the input text.
## How to use
To interact with this model, place your text after the `[PAPER]` tag. The model will process the text and respond accordingly. For example:
```
[PAPER]
Your text here...
```
## Example
```
[PAPER]
We present a scalable method to build a high-quality instruction-following language model...
```
The model will understand and respond to your text according to its context and content.
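A minimal generation sketch following this prompt format (the `max_new_tokens` value is illustrative, and the `## [UNDERSTANDING]` continuation mirrors the widget example above):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="ccore/opt-1.3b-open-data-understanding")
prompt = "# [PAPER]\nWe present a scalable method to build a high-quality instruction-following language model...\n\n## [UNDERSTANDING]\n"
print(generator(prompt, max_new_tokens=200)[0]["generated_text"])
```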
## Comprehension Sections
### [UNDERSTANDING]
This section provides a detailed analysis and decomposition of the inserted text, facilitating the understanding of the content.
### [QUESTIONS AND ANSWERS]
This section addresses questions and answers that could arise based on the text provided.
### [OBJECTION AND REPLY]
This section addresses any objections and responses that could arise from analysis of the text.
## Warnings
- This model was trained on a diverse corpus, but may still have bias or limitations.
- Continuous validation of the model and its output is essential.
## Contact and Support
For more information, visit [Hugging Face](https://huggingface.co/). |
glif-loradex-trainer/chrysolite_Neoclassiclite | glif-loradex-trainer | "2024-11-02T03:36:51Z" | 20 | 1 | diffusers | [
"diffusers",
"text-to-image",
"template:sd-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:finetune:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us",
"flux",
"lora",
"base_model:adapter:black-forest-labs/FLUX.1-dev"
] | text-to-image | "2024-11-02T03:36:19Z" | ---
tags:
- diffusers
- text-to-image
- template:sd-lora
- base_model:black-forest-labs/FLUX.1-dev
- base_model:finetune:black-forest-labs/FLUX.1-dev
- license:other
- region:us
- flux
- lora
widget:
- output:
url: samples/1730518521704__000001500_0.jpg
text: Waterfall, Neoclassical, Neoclassicism, oil painting, Neoclassiclite
- output:
url: samples/1730518546249__000001500_1.jpg
text: Mountain, Landscape, Neoclassical, Neoclassicism, oil painting, Neoclassiclite
- output:
url: samples/1730518570788__000001500_2.jpg
text: Mountain, lake, landscape, Neoclassical, Neoclassicism, oil painting, Neoclassiclite
base_model: black-forest-labs/FLUX.1-dev
trigger: Neoclassical, Neoclassicism, oil painting, Neoclassiclite
instance_prompt: Neoclassical, Neoclassicism, oil painting, Neoclassiclite
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# Neoclassiclite
Model trained with [AI Toolkit by Ostris](https://github.com/ostris/ai-toolkit) under the [Glif Loradex program](https://huggingface.co/glif-loradex-trainer) by [Glif](https://glif.app) user `chrysolite`.
<Gallery />
## Trigger words
You should use `Neoclassical, Neoclassicism, oil painting, Neoclassiclite` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/glif-loradex-trainer/chrysolite_Neoclassiclite/tree/main) them in the Files & versions tab.
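For programmatic use, here is a minimal `diffusers` sketch; the pipeline setup is an assumption based on the FLUX.1-dev base model and the trigger words listed on this card, with the LoRA weight file resolved automatically by `load_lora_weights`:

```python
import torch
from diffusers import FluxPipeline

# Load the FLUX.1-dev base model, then attach this repo's LoRA weights.
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("glif-loradex-trainer/chrysolite_Neoclassiclite")

# Include the trigger words from this card in the prompt.
image = pipe(
    "Waterfall, Neoclassical, Neoclassicism, oil painting, Neoclassiclite",
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("neoclassiclite.png")
```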
## License
This model is licensed under the [flux-1-dev-non-commercial-license](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md).
|
Luongdzung/bloomVN-0.5B-ppo-sft-order2-geo-his-lit-lora | Luongdzung | "2025-03-04T02:51:34Z" | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:Luongdzung/bloomVN-0.5B-ppo-sft-order2-geo-his-lora-ALL-WEIGHT",
"base_model:adapter:Luongdzung/bloomVN-0.5B-ppo-sft-order2-geo-his-lora-ALL-WEIGHT",
"region:us"
] | null | "2025-03-04T02:51:31Z" | ---
library_name: peft
base_model: Luongdzung/bloomVN-0.5B-ppo-sft-order2-geo-his-lora-ALL-WEIGHT
tags:
- generated_from_trainer
model-index:
- name: bloomVN-0.5B-ppo-sft-order2-geo-his-lit-lora
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bloomVN-0.5B-ppo-sft-order2-geo-his-lit-lora
This model is a fine-tuned version of [Luongdzung/bloomVN-0.5B-ppo-sft-order2-geo-his-lora-ALL-WEIGHT](https://huggingface.co/Luongdzung/bloomVN-0.5B-ppo-sft-order2-geo-his-lora-ALL-WEIGHT) on an unknown dataset.
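Since this repo holds a PEFT (LoRA) adapter rather than a full checkpoint, a minimal loading sketch looks like the following; it assumes the base repo contains a complete causal-LM checkpoint:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "Luongdzung/bloomVN-0.5B-ppo-sft-order2-geo-his-lora-ALL-WEIGHT"
adapter_id = "Luongdzung/bloomVN-0.5B-ppo-sft-order2-geo-his-lit-lora"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id)

# Attach the LoRA adapter from this repo on top of the base weights.
model = PeftModel.from_pretrained(base_model, adapter_id)
```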
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
### Framework versions
- PEFT 0.14.0
- Transformers 4.47.0
- Pytorch 2.4.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0 |
monshinawatra/llama-3.1-8b-wangchanX-instruct | monshinawatra | "2024-07-27T17:20:45Z" | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"base_model:unsloth/Meta-Llama-3.1-8B",
"base_model:finetune:unsloth/Meta-Llama-3.1-8B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-07-27T16:54:49Z" | ---
base_model: unsloth/Meta-Llama-3.1-8B
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
---
# Uploaded model
- **Developed by:** monshinawatra
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Meta-Llama-3.1-8B
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
maxseats/SungBeom-whisper-small-ko-set31 | maxseats | "2024-07-03T18:05:42Z" | 7 | 0 | transformers | [
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"speech-recognition",
"ko",
"dataset:maxseats/aihub-464-preprocessed-680GB-set-31",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-07-03T18:05:20Z" |
---
language: ko
tags:
- whisper
- speech-recognition
datasets:
- maxseats/aihub-464-preprocessed-680GB-set-31
metrics:
- cer
---
# Model Name : maxseats/SungBeom-whisper-small-ko-set31
# Description
- Fine-tuning dataset: maxseats/aihub-464-preprocessed-680GB-set-31
# Explanation
- This model is being trained on AI Hub's domain-specific meeting speech dataset.
- It was built by loading the checkpoint fine-tuned on the set_0–set_30 portion (310 GB of the full 680 GB) and further training it on the set_31 data (10 GB).
- Link: https://huggingface.co/datasets/maxseats/aihub-464-preprocessed-680GB-set-31
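A minimal transcription sketch with the 🤗 `transformers` ASR pipeline (the audio file path is a placeholder):

```python
from transformers import pipeline

# Korean meeting-speech transcription with this fine-tuned Whisper checkpoint.
asr = pipeline(
    "automatic-speech-recognition",
    model="maxseats/SungBeom-whisper-small-ko-set31",
)
print(asr("meeting_audio.wav")["text"])  # replace with your own audio file
```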
|
BroAlanTaps/GPT2-large-4-66000steps | BroAlanTaps | "2024-10-18T15:16:38Z" | 138 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-10-18T15:14:53Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
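The card leaves this section empty; as a placeholder, here is a minimal sketch assuming standard GPT-2-style text generation (the repo id is taken from the card title, and the intended prompting interface is undocumented):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repo id from the card title; plain GPT-2-style generation is an assumption.
repo = "BroAlanTaps/GPT2-large-4-66000steps"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)

inputs = tokenizer("Hello, world!", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```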
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
GeorgeDaDude/jb_sytem_bin_judge_base_wmdo | GeorgeDaDude | "2024-05-27T09:21:42Z" | 163 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-05-27T08:25:32Z" | ---
license: mit
base_model: roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: jb_sytem_bin_judge_base_wmdo
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# jb_sytem_bin_judge_base_wmdo
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4177
- Accuracy: 0.9052
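A minimal inference sketch with the `transformers` text-classification pipeline — note that the label names and the exact input format expected by this judge are not documented on the card:

```python
from transformers import pipeline

# Binary judge classifier fine-tuned from roberta-base.
judge = pipeline(
    "text-classification",
    model="GeorgeDaDude/jb_sytem_bin_judge_base_wmdo",
)
print(judge("Example system response to evaluate."))
```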
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4508 | 1.0 | 1281 | 0.4957 | 0.8718 |
| 0.3523 | 2.0 | 2562 | 0.3817 | 0.8894 |
| 0.2394 | 3.0 | 3843 | 0.3685 | 0.8903 |
| 0.1632 | 4.0 | 5124 | 0.4015 | 0.9034 |
| 0.411 | 5.0 | 6405 | 0.4177 | 0.9052 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
aarashfeizi/jean-francois-godbout-batch2-repeats3-rank16-snrNone | aarashfeizi | "2024-04-29T18:02:20Z" | 1 | 0 | diffusers | [
"diffusers",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"diffusers-training",
"text-to-image",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | "2024-04-29T17:15:06Z" | ---
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- diffusers-training
- text-to-image
- diffusers
- lora
- template:sd-lora
widget:
- text: 'A photo of <s0><s1> giving a speech'
output:
url:
"image_0.png"
- text: 'A photo of <s0><s1> giving a speech'
output:
url:
"image_1.png"
- text: 'A photo of <s0><s1> giving a speech'
output:
url:
"image_2.png"
- text: 'A photo of <s0><s1> giving a speech'
output:
url:
"image_3.png"
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: A photo of <s0><s1>
license: openrail++
---
# SDXL LoRA DreamBooth - aarashfeizi/jean-francois-godbout-batch2-repeats3-rank16-snrNone
<Gallery />
## Model description
### These are aarashfeizi/jean-francois-godbout-batch2-repeats3-rank16-snrNone LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
## Download model
### Use it with UIs such as AUTOMATIC1111, Comfy UI, SD.Next, Invoke
- **LoRA**: download **[`/home/mila/f/feiziaar/scratch/dreambooth-outputs/jean-francois-godbout-batch2-repeats3-rank16-snrNone.safetensors` here 💾](/aarashfeizi/jean-francois-godbout-batch2-repeats3-rank16-snrNone/blob/main//home/mila/f/feiziaar/scratch/dreambooth-outputs/jean-francois-godbout-batch2-repeats3-rank16-snrNone.safetensors)**.
    - Place it in your `models/Lora` folder.
- On AUTOMATIC1111, load the LoRA by adding `<lora:/home/mila/f/feiziaar/scratch/dreambooth-outputs/jean-francois-godbout-batch2-repeats3-rank16-snrNone:1>` to your prompt. On ComfyUI just [load it as a regular LoRA](https://comfyanonymous.github.io/ComfyUI_examples/lora/).
- *Embeddings*: download **[`/home/mila/f/feiziaar/scratch/dreambooth-outputs/jean-francois-godbout-batch2-repeats3-rank16-snrNone_emb.safetensors` here 💾](/aarashfeizi/jean-francois-godbout-batch2-repeats3-rank16-snrNone/blob/main//home/mila/f/feiziaar/scratch/dreambooth-outputs/jean-francois-godbout-batch2-repeats3-rank16-snrNone_emb.safetensors)**.
    - Place it in your `embeddings` folder
- Use it by adding `/home/mila/f/feiziaar/scratch/dreambooth-outputs/jean-francois-godbout-batch2-repeats3-rank16-snrNone_emb` to your prompt. For example, `A photo of /home/mila/f/feiziaar/scratch/dreambooth-outputs/jean-francois-godbout-batch2-repeats3-rank16-snrNone_emb`
(you need both the LoRA and the embeddings as they were trained together for this LoRA)
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file

# Load the SDXL base pipeline and attach this repo's LoRA weights.
pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('aarashfeizi/jean-francois-godbout-batch2-repeats3-rank16-snrNone', weight_name='pytorch_lora_weights.safetensors')

# Download the pivotal-tuning embeddings and register the new <s0>/<s1> tokens
# with both SDXL text encoders.
embedding_path = hf_hub_download(repo_id='aarashfeizi/jean-francois-godbout-batch2-repeats3-rank16-snrNone', filename='/home/mila/f/feiziaar/scratch/dreambooth-outputs/jean-francois-godbout-batch2-repeats3-rank16-snrNone_emb.safetensors', repo_type="model")
state_dict = load_file(embedding_path)
pipeline.load_textual_inversion(state_dict["clip_l"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder, tokenizer=pipeline.tokenizer)
pipeline.load_textual_inversion(state_dict["clip_g"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder_2, tokenizer=pipeline.tokenizer_2)

image = pipeline('A photo of <s0><s1> giving a speech').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Trigger words
To trigger image generation of the trained concept (or concepts), replace each concept identifier in your prompt with the newly inserted tokens:
to trigger concept `TOK` → use `<s0><s1>` in your prompt
## Details
All [Files & versions](/aarashfeizi/jean-francois-godbout-batch2-repeats3-rank16-snrNone/tree/main).
The weights were trained using [🧨 diffusers Advanced Dreambooth Training Script](https://github.com/huggingface/diffusers/blob/main/examples/advanced_diffusion_training/train_dreambooth_lora_sdxl_advanced.py).
LoRA for the text encoder was enabled: False.
Pivotal tuning was enabled: True.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
|
maxwellwe/hf_llama_8b_fine_tuned_1000steps | maxwellwe | "2025-03-17T13:48:20Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-03-17T13:44:16Z" | ---
base_model: unsloth/meta-llama-3.1-8b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** maxwellwe
- **License:** apache-2.0
- **Finetuned from model :** unsloth/meta-llama-3.1-8b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Sophie-Rain-Spiderman-Leaks-Video-Free/Sophie.Rain.Spiderman.Video.Instagram | Sophie-Rain-Spiderman-Leaks-Video-Free | "2025-03-07T18:00:59Z" | 0 | 0 | null | [
"region:us"
] | null | "2025-03-07T18:00:53Z" | <!-- HTML_TAG_END --><div>
<p><a rel="nofollow" href="https://leaked-videos.com/?v=Sophie+Rain+Spiderman">🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐖𝐚𝐭𝐜𝐡 𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨)</a></p>
<p><a rel="nofollow" href="https://leaked-videos.com/?v=Sophie+Rain+Spiderman">🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨 𝐋𝐢𝐧𝐤 )</a></p>
<p><a rel="nofollow" href="https://leaked-videos.com/?v=Sophie+Rain+Spiderman"><img src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif" alt="fsd"></a></p>
<!-- HTML_TAG_END --></div> |
KingNish/test-flux-lora | KingNish | "2024-09-10T14:20:11Z" | 452 | 0 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] | text-to-image | "2024-09-10T14:20:00Z" | ---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: '-'
parameters:
negative_prompt: ' '
output:
url: images/modiji -1.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: null
---
# Test
<Gallery />
## Model description
.
## Download model
Weights for this model are available in Safetensors format.
[Download](/KingNish/test-flux-lora/tree/main) them in the Files & versions tab.
|
t3PbMvBN6SXv/dqn-SpaceInvadersNoFrameskip-v4 | t3PbMvBN6SXv | "2023-06-24T18:39:47Z" | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2023-06-24T18:39:31Z" | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 575.50 +/- 129.04
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga t3PbMvBN6SXv -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga t3PbMvBN6SXv -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
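Alternatively, a minimal Python sketch using `huggingface_sb3` — the checkpoint filename below follows the usual RL Zoo naming convention and is an assumption:

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN

# Download the checkpoint (filename is an assumption based on RL Zoo conventions).
checkpoint = load_from_hub(
    repo_id="t3PbMvBN6SXv/dqn-SpaceInvadersNoFrameskip-v4",
    filename="dqn-SpaceInvadersNoFrameskip-v4.zip",
)
model = DQN.load(checkpoint)
# Evaluation requires the same Atari wrappers and frame stacking used in training.
```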
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga t3PbMvBN6SXv
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
jordyvl/EElayoutlmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-07-06_g075 | jordyvl | "2023-07-06T16:04:38Z" | 102 | 0 | transformers | [
"transformers",
"pytorch",
"layoutlmv3",
"text-classification",
"generated_from_trainer",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-07-06T13:56:22Z" | ---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: EElayoutlmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-07-06_g075
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# EElayoutlmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-07-06_g075
This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2759
- Accuracy: 0.6825
- Exit 0 Accuracy: 0.11
- Exit 1 Accuracy: 0.155
- Exit 2 Accuracy: 0.345
- Exit 3 Accuracy: 0.425
- Exit 4 Accuracy: 0.5225
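A minimal sketch for the backbone classifier is below; note this is an assumption — the early-exit heads reported above almost certainly require the authors' custom code, and the processor's built-in OCR needs `pytesseract` installed:

```python
from PIL import Image
from transformers import AutoProcessor, LayoutLMv3ForSequenceClassification

repo = "jordyvl/EElayoutlmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-07-06_g075"
processor = AutoProcessor.from_pretrained("microsoft/layoutlmv3-base", apply_ocr=True)
model = LayoutLMv3ForSequenceClassification.from_pretrained(repo)

image = Image.open("document.png").convert("RGB")  # placeholder document scan
inputs = processor(image, return_tensors="pt")
pred = model(**inputs).logits.argmax(-1).item()
print(model.config.id2label[pred])
```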
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 12
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 24
- total_train_batch_size: 288
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 60
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Exit 0 Accuracy | Exit 1 Accuracy | Exit 2 Accuracy | Exit 3 Accuracy | Exit 4 Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------------:|:---------------:|:---------------:|:---------------:|:---------------:|
| No log | 0.72 | 2 | 2.7601 | 0.1075 | 0.0825 | 0.0675 | 0.1025 | 0.0625 | 0.0625 |
| No log | 1.72 | 4 | 2.7328 | 0.1125 | 0.07 | 0.065 | 0.1225 | 0.0625 | 0.0625 |
| No log | 2.72 | 6 | 2.6968 | 0.13 | 0.075 | 0.06 | 0.1325 | 0.0625 | 0.0625 |
| No log | 3.72 | 8 | 2.6594 | 0.18 | 0.075 | 0.06 | 0.1175 | 0.0625 | 0.0625 |
| No log | 4.72 | 10 | 2.6206 | 0.1925 | 0.085 | 0.0575 | 0.11 | 0.0625 | 0.0625 |
| No log | 5.72 | 12 | 2.5710 | 0.2125 | 0.09 | 0.08 | 0.115 | 0.0625 | 0.0625 |
| No log | 6.72 | 14 | 2.5334 | 0.2275 | 0.095 | 0.08 | 0.12 | 0.0575 | 0.0625 |
| No log | 7.72 | 16 | 2.5094 | 0.245 | 0.095 | 0.095 | 0.135 | 0.0725 | 0.0625 |
| No log | 8.72 | 18 | 2.4631 | 0.2825 | 0.095 | 0.0975 | 0.17 | 0.0925 | 0.065 |
| No log | 9.72 | 20 | 2.4152 | 0.3025 | 0.1 | 0.1275 | 0.205 | 0.1075 | 0.0625 |
| No log | 10.72 | 22 | 2.3737 | 0.325 | 0.1075 | 0.1225 | 0.24 | 0.12 | 0.065 |
| No log | 11.72 | 24 | 2.3302 | 0.3175 | 0.1125 | 0.1175 | 0.2375 | 0.1475 | 0.0675 |
| No log | 12.72 | 26 | 2.2746 | 0.34 | 0.1125 | 0.125 | 0.255 | 0.155 | 0.095 |
| No log | 13.72 | 28 | 2.2527 | 0.35 | 0.1125 | 0.125 | 0.2625 | 0.175 | 0.095 |
| No log | 14.72 | 30 | 2.2101 | 0.3425 | 0.1075 | 0.13 | 0.27 | 0.2125 | 0.095 |
| No log | 15.72 | 32 | 2.1811 | 0.355 | 0.1075 | 0.14 | 0.29 | 0.24 | 0.095 |
| No log | 16.72 | 34 | 2.1368 | 0.38 | 0.105 | 0.145 | 0.305 | 0.245 | 0.0925 |
| No log | 17.72 | 36 | 2.0855 | 0.395 | 0.1075 | 0.145 | 0.3175 | 0.2475 | 0.095 |
| No log | 18.72 | 38 | 2.0559 | 0.4 | 0.1125 | 0.145 | 0.305 | 0.255 | 0.1025 |
| No log | 19.72 | 40 | 2.0277 | 0.41 | 0.115 | 0.145 | 0.295 | 0.28 | 0.105 |
| No log | 20.72 | 42 | 1.9746 | 0.445 | 0.12 | 0.145 | 0.28 | 0.2875 | 0.1025 |
| No log | 21.72 | 44 | 1.9346 | 0.4525 | 0.12 | 0.145 | 0.265 | 0.25 | 0.1025 |
| No log | 22.72 | 46 | 1.8926 | 0.4925 | 0.12 | 0.145 | 0.255 | 0.25 | 0.11 |
| No log | 23.72 | 48 | 1.8581 | 0.5025 | 0.115 | 0.1475 | 0.26 | 0.3 | 0.1075 |
| No log | 24.72 | 50 | 1.8403 | 0.4975 | 0.11 | 0.1475 | 0.2725 | 0.3325 | 0.11 |
| No log | 25.72 | 52 | 1.8162 | 0.5 | 0.1125 | 0.1475 | 0.2875 | 0.3575 | 0.1125 |
| No log | 26.72 | 54 | 1.7562 | 0.5475 | 0.115 | 0.1475 | 0.295 | 0.3575 | 0.115 |
| No log | 27.72 | 56 | 1.7205 | 0.5725 | 0.1175 | 0.15 | 0.295 | 0.37 | 0.115 |
| No log | 28.72 | 58 | 1.7041 | 0.555 | 0.1175 | 0.15 | 0.295 | 0.36 | 0.11 |
| No log | 29.72 | 60 | 1.7018 | 0.5525 | 0.12 | 0.15 | 0.305 | 0.36 | 0.1125 |
| No log | 30.72 | 62 | 1.6532 | 0.58 | 0.12 | 0.15 | 0.3025 | 0.3725 | 0.1125 |
| No log | 31.72 | 64 | 1.6218 | 0.58 | 0.12 | 0.15 | 0.3125 | 0.3725 | 0.1175 |
| No log | 32.72 | 66 | 1.5888 | 0.59 | 0.115 | 0.1475 | 0.32 | 0.38 | 0.1325 |
| No log | 33.72 | 68 | 1.5778 | 0.6 | 0.115 | 0.1475 | 0.315 | 0.3875 | 0.1425 |
| No log | 34.72 | 70 | 1.5500 | 0.59 | 0.1225 | 0.15 | 0.315 | 0.3875 | 0.155 |
| No log | 35.72 | 72 | 1.5216 | 0.61 | 0.13 | 0.15 | 0.31 | 0.3875 | 0.17 |
| No log | 36.72 | 74 | 1.5024 | 0.6175 | 0.1275 | 0.15 | 0.3075 | 0.4125 | 0.1675 |
| No log | 37.72 | 76 | 1.4787 | 0.615 | 0.12 | 0.1525 | 0.32 | 0.4025 | 0.165 |
| No log | 38.72 | 78 | 1.4635 | 0.6175 | 0.1175 | 0.1525 | 0.325 | 0.4125 | 0.1625 |
| No log | 39.72 | 80 | 1.4455 | 0.6225 | 0.12 | 0.155 | 0.3225 | 0.4225 | 0.165 |
| No log | 40.72 | 82 | 1.4304 | 0.625 | 0.12 | 0.155 | 0.33 | 0.425 | 0.1675 |
| No log | 41.72 | 84 | 1.4170 | 0.6425 | 0.1175 | 0.155 | 0.3325 | 0.425 | 0.21 |
| No log | 42.72 | 86 | 1.4052 | 0.64 | 0.1175 | 0.155 | 0.335 | 0.4275 | 0.245 |
| No log | 43.72 | 88 | 1.3965 | 0.6425 | 0.1125 | 0.155 | 0.34 | 0.4125 | 0.2775 |
| No log | 44.72 | 90 | 1.3766 | 0.645 | 0.1125 | 0.155 | 0.3425 | 0.4075 | 0.315 |
| No log | 45.72 | 92 | 1.3611 | 0.6575 | 0.11 | 0.155 | 0.345 | 0.41 | 0.33 |
| No log | 46.72 | 94 | 1.3513 | 0.6575 | 0.11 | 0.155 | 0.3425 | 0.4175 | 0.34 |
| No log | 47.72 | 96 | 1.3520 | 0.665 | 0.11 | 0.155 | 0.3425 | 0.4275 | 0.36 |
| No log | 48.72 | 98 | 1.3373 | 0.67 | 0.11 | 0.155 | 0.3425 | 0.425 | 0.3875 |
| No log | 49.72 | 100 | 1.3213 | 0.6775 | 0.11 | 0.155 | 0.3425 | 0.4175 | 0.405 |
| No log | 50.72 | 102 | 1.3124 | 0.6825 | 0.11 | 0.155 | 0.3425 | 0.41 | 0.445 |
| No log | 51.72 | 104 | 1.3080 | 0.68 | 0.1075 | 0.155 | 0.34 | 0.41 | 0.455 |
| No log | 52.72 | 106 | 1.3037 | 0.675 | 0.105 | 0.1575 | 0.3425 | 0.4175 | 0.4775 |
| No log | 53.72 | 108 | 1.2987 | 0.6825 | 0.11 | 0.1575 | 0.345 | 0.425 | 0.4875 |
| No log | 54.72 | 110 | 1.2943 | 0.6775 | 0.1075 | 0.1575 | 0.3475 | 0.425 | 0.5025 |
| No log | 55.72 | 112 | 1.2889 | 0.68 | 0.1075 | 0.1575 | 0.3475 | 0.425 | 0.51 |
| No log | 56.72 | 114 | 1.2829 | 0.68 | 0.1075 | 0.16 | 0.3475 | 0.4225 | 0.5175 |
| No log | 57.72 | 116 | 1.2793 | 0.68 | 0.1075 | 0.155 | 0.3475 | 0.4225 | 0.5225 |
| No log | 58.72 | 118 | 1.2769 | 0.68 | 0.11 | 0.155 | 0.345 | 0.4225 | 0.5225 |
| No log | 59.72 | 120 | 1.2759 | 0.6825 | 0.11 | 0.155 | 0.345 | 0.425 | 0.5225 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1.post200
- Datasets 2.9.0
- Tokenizers 0.13.2
|
Newvel/face_detection_model_output | Newvel | "2024-12-05T11:06:09Z" | 198 | 0 | transformers | [
"transformers",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"base_model:WinKawaks/vit-tiny-patch16-224",
"base_model:finetune:WinKawaks/vit-tiny-patch16-224",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2024-12-05T11:05:43Z" | ---
library_name: transformers
license: apache-2.0
base_model: WinKawaks/vit-tiny-patch16-224
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: face_detection_model_output
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# face_detection_model_output
This model is a fine-tuned version of [WinKawaks/vit-tiny-patch16-224](https://huggingface.co/WinKawaks/vit-tiny-patch16-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0000
- Accuracy: 1.0
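A minimal inference sketch with the image-classification pipeline (the class labels are not documented on this card):

```python
from transformers import pipeline

# Image classifier fine-tuned from vit-tiny-patch16-224.
classifier = pipeline(
    "image-classification",
    model="Newvel/face_detection_model_output",
)
print(classifier("example.jpg"))  # replace with your own image
```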
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0 | 1.0 | 352 | 0.0000 | 1.0 |
| 0.0 | 2.0 | 704 | 0.0000 | 1.0 |
| 0.0 | 3.0 | 1056 | 0.0000 | 1.0 |
| 0.0 | 4.0 | 1408 | 0.0000 | 1.0 |
| 0.0 | 5.0 | 1760 | 0.0000 | 1.0 |
| 0.0 | 6.0 | 2112 | 0.0000 | 1.0 |
### Framework versions
- Transformers 4.46.3
- Pytorch 2.4.0
- Datasets 3.1.0
- Tokenizers 0.20.3
|
sonyashijin/tinyllama_100_hippo_20k_seed_0.2 | sonyashijin | "2024-12-15T18:39:42Z" | 138 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"en",
"base_model:unsloth/tinyllama-bnb-4bit",
"base_model:finetune:unsloth/tinyllama-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-12-15T18:39:03Z" | ---
base_model: unsloth/tinyllama-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** sonyashijin
- **License:** apache-2.0
- **Finetuned from model :** unsloth/tinyllama-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
recogna-nlp/phi-bode-2-ultraalpaca | recogna-nlp | "2025-03-07T20:28:02Z" | 442 | 3 | transformers | [
"transformers",
"pytorch",
"safetensors",
"phi",
"text-generation",
"custom_code",
"license:mit",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-04-09T03:49:10Z" | ---
license: mit
model-index:
- name: phi-bode-2-ultraalpaca
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: ENEM Challenge (No Images)
type: eduagarcia/enem_challenge
split: train
args:
num_few_shot: 3
metrics:
- type: acc
value: 38.35
name: accuracy
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=recogna-nlp/phi-bode-2-ultraalpaca
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BLUEX (No Images)
type: eduagarcia-temp/BLUEX_without_images
split: train
args:
num_few_shot: 3
metrics:
- type: acc
value: 25.17
name: accuracy
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=recogna-nlp/phi-bode-2-ultraalpaca
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: OAB Exams
type: eduagarcia/oab_exams
split: train
args:
num_few_shot: 3
metrics:
- type: acc
value: 29.61
name: accuracy
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=recogna-nlp/phi-bode-2-ultraalpaca
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Assin2 RTE
type: assin2
split: test
args:
num_few_shot: 15
metrics:
- type: f1_macro
value: 45.39
name: f1-macro
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=recogna-nlp/phi-bode-2-ultraalpaca
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Assin2 STS
type: eduagarcia/portuguese_benchmark
split: test
args:
num_few_shot: 15
metrics:
- type: pearson
value: 24.43
name: pearson
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=recogna-nlp/phi-bode-2-ultraalpaca
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: FaQuAD NLI
type: ruanchaves/faquad-nli
split: test
args:
num_few_shot: 15
metrics:
- type: f1_macro
value: 43.97
name: f1-macro
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=recogna-nlp/phi-bode-2-ultraalpaca
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HateBR Binary
type: ruanchaves/hatebr
split: test
args:
num_few_shot: 25
metrics:
- type: f1_macro
value: 54.15
name: f1-macro
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=recogna-nlp/phi-bode-2-ultraalpaca
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: PT Hate Speech Binary
type: hate_speech_portuguese
split: test
args:
num_few_shot: 25
metrics:
- type: f1_macro
value: 54.59
name: f1-macro
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=recogna-nlp/phi-bode-2-ultraalpaca
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: tweetSentBR
type: eduagarcia-temp/tweetsentbr
split: test
args:
num_few_shot: 25
metrics:
- type: f1_macro
value: 43.34
name: f1-macro
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=recogna-nlp/phi-bode-2-ultraalpaca
name: Open Portuguese LLM Leaderboard
---
# Phi-Bode
<!--- PROJECT LOGO -->
<p align="center">
<img src="https://huggingface.co/recogna-nlp/Phi-Bode/resolve/main/phi-bode.jpg" alt="Phi-Bode Logo" width="400" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
</p>
Phi-Bode is a language model adapted for Portuguese, built on the Phi-2B base model released by [Microsoft](https://huggingface.co/microsoft/phi-2). It was refined through fine-tuning on the UltraAlpaca dataset. The main goal of this model is to be usable by people who lack the computational resources usually required for LLMs (Large Language Models). Note that this is a work in progress and the model still has issues when generating Portuguese text.
## Key Features
- **Base model:** Phi-2B, created by Microsoft, with 2.7 billion parameters.
- **Fine-tuning dataset:** [UltraAlpaca](https://huggingface.co/datasets/recogna-nlp/ultra-alpaca-ptbr)
- **Training:** Training was carried out via full fine-tuning of phi-2.
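A minimal loading sketch (the repo ships custom model code, hence `trust_remote_code=True`; the plain-prompt format is an assumption, since the card does not document an instruction template):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "recogna-nlp/phi-bode-2-ultraalpaca"
tokenizer = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    repo, torch_dtype=torch.float16, trust_remote_code=True
)

# Plain prompt; the exact instruction template is an assumption.
prompt = "Explique brevemente o que é aprendizado de máquina."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```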
# [Open Portuguese LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/eduagarcia-temp/llm_pt_leaderboard_raw_results/tree/main/recogna-nlp/phi-bode-2-ultraalpaca)
| Metric | Value |
|--------------------------|---------|
|Average |**39.89**|
|ENEM Challenge (No Images)| 38.35|
|BLUEX (No Images) | 25.17|
|OAB Exams | 29.61|
|Assin2 RTE | 45.39|
|Assin2 STS | 24.43|
|FaQuAD NLI | 43.97|
|HateBR Binary | 54.15|
|PT Hate Speech Binary | 54.59|
|tweetSentBR | 43.34|
|
p1gm1/bert-finetuned-squad | p1gm1 | "2023-11-21T17:45:53Z" | 6 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"question-answering",
"generated_from_trainer",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | "2023-11-17T18:14:21Z" | ---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
model-index:
- name: bert-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset.
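Given the repo name, extractive question answering is the natural interface; a minimal sketch:

```python
from transformers import pipeline

qa = pipeline("question-answering", model="p1gm1/bert-finetuned-squad")
result = qa(
    question="What was the model fine-tuned on?",
    context="This BERT model was fine-tuned on SQuAD-style data to extract answer spans.",
)
print(result["answer"], result["score"])
```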
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
skzxjus/Qwen2.5-7B-1m-Open-R1-Distill | skzxjus | "2025-02-06T03:16:27Z" | 8 | 2 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"sft",
"conversational",
"dataset:HuggingFaceH4/Bespoke-Stratos-17k",
"base_model:Qwen/Qwen2.5-7B-Instruct-1M",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct-1M",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-02-03T21:06:42Z" | ---
base_model: Qwen/Qwen2.5-7B-Instruct-1M
datasets: HuggingFaceH4/Bespoke-Stratos-17k
library_name: transformers
model_name: Qwen2.5-7B-1m-Open-R1-Distill
tags:
- generated_from_trainer
- open-r1
- trl
- sft
licence: license
---
# Model Card for Qwen2.5-7B-1m-Open-R1-Distill
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct-1M](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct-1M) on the [HuggingFaceH4/Bespoke-Stratos-17k](https://huggingface.co/datasets/HuggingFaceH4/Bespoke-Stratos-17k) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="skzxjus/Qwen2.5-7B-1m-Open-R1-Distill", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/llmsft/huggingface/runs/njo2ftcp)
This model was trained with SFT.
### Framework versions
- TRL: 0.15.0.dev0
- Transformers: 4.49.0.dev0
- Pytorch: 2.5.1+cu121
- Datasets: 3.2.0
- Tokenizers: 0.21.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Mazin100/q-FrozenLake-v1-4x4-noSlippery | Mazin100 | "2023-11-11T22:16:25Z" | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2023-11-11T22:16:22Z" | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym  # or gymnasium, depending on your setup

# `load_from_hub` is the helper from the Hugging Face Deep RL course notebooks;
# it downloads and unpickles the Q-table dictionary from the Hub.
model = load_from_hub(repo_id="Mazin100/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
daniel40/12afc1b4-2ce6-40a3-ad27-07f31e27ad9c | daniel40 | "2025-01-23T11:49:55Z" | 9 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen1.5-14B-Chat",
"base_model:adapter:Qwen/Qwen1.5-14B-Chat",
"license:other",
"region:us"
] | null | "2025-01-23T11:33:23Z" | ---
library_name: peft
license: other
base_model: Qwen/Qwen1.5-14B-Chat
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 12afc1b4-2ce6-40a3-ad27-07f31e27ad9c
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen1.5-14B-Chat
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- f70ddae1849231d5_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/f70ddae1849231d5_train_data.json
type:
field_instruction: instruction
field_output: response
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: daniel40/12afc1b4-2ce6-40a3-ad27-07f31e27ad9c
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/f70ddae1849231d5_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 626c7685-3b25-4cd2-a8a4-a8e58ec0f209
wandb_project: Birthday-SN56-27-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 626c7685-3b25-4cd2-a8a4-a8e58ec0f209
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 12afc1b4-2ce6-40a3-ad27-07f31e27ad9c
This model is a fine-tuned version of [Qwen/Qwen1.5-14B-Chat](https://huggingface.co/Qwen/Qwen1.5-14B-Chat) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5005
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.6643 | 0.0002 | 1 | 0.5939 |
| 0.6398 | 0.0005 | 3 | 0.5925 |
| 0.6221 | 0.0010 | 6 | 0.5751 |
| 0.5978 | 0.0015 | 9 | 0.5005 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
tensorblock/Qwen2.5-Coder-14B-Instruct-abliterated-GGUF | tensorblock | "2025-01-01T04:16:01Z" | 984 | 0 | transformers | [
"transformers",
"gguf",
"code",
"codeqwen",
"chat",
"qwen",
"qwen-coder",
"abliterated",
"uncensored",
"TensorBlock",
"GGUF",
"text-generation",
"en",
"base_model:huihui-ai/Qwen2.5-Coder-14B-Instruct-abliterated",
"base_model:quantized:huihui-ai/Qwen2.5-Coder-14B-Instruct-abliterated",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | "2025-01-01T02:51:05Z" | ---
license: apache-2.0
license_link: https://huggingface.co/huihui-ai/Qwen2.5-Coder-14B-Instruct-abliterate/blob/main/LICENSE
language:
- en
base_model: huihui-ai/Qwen2.5-Coder-14B-Instruct-abliterated
pipeline_tag: text-generation
library_name: transformers
tags:
- code
- codeqwen
- chat
- qwen
- qwen-coder
- abliterated
- uncensored
- TensorBlock
- GGUF
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## huihui-ai/Qwen2.5-Coder-14B-Instruct-abliterated - GGUF
This repo contains GGUF format model files for [huihui-ai/Qwen2.5-Coder-14B-Instruct-abliterated](https://huggingface.co/huihui-ai/Qwen2.5-Coder-14B-Instruct-abliterated).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
<div style="text-align: left; margin: 20px 0;">
<a href="https://tensorblock.co/waitlist/client" style="display: inline-block; padding: 10px 20px; background-color: #007bff; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;">
Run them on the TensorBlock client using your local machine ↗
</a>
</div>
## Prompt template
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Qwen2.5-Coder-14B-Instruct-abliterated-Q2_K.gguf](https://huggingface.co/tensorblock/Qwen2.5-Coder-14B-Instruct-abliterated-GGUF/blob/main/Qwen2.5-Coder-14B-Instruct-abliterated-Q2_K.gguf) | Q2_K | 5.770 GB | smallest, significant quality loss - not recommended for most purposes |
| [Qwen2.5-Coder-14B-Instruct-abliterated-Q3_K_S.gguf](https://huggingface.co/tensorblock/Qwen2.5-Coder-14B-Instruct-abliterated-GGUF/blob/main/Qwen2.5-Coder-14B-Instruct-abliterated-Q3_K_S.gguf) | Q3_K_S | 6.660 GB | very small, high quality loss |
| [Qwen2.5-Coder-14B-Instruct-abliterated-Q3_K_M.gguf](https://huggingface.co/tensorblock/Qwen2.5-Coder-14B-Instruct-abliterated-GGUF/blob/main/Qwen2.5-Coder-14B-Instruct-abliterated-Q3_K_M.gguf) | Q3_K_M | 7.339 GB | very small, high quality loss |
| [Qwen2.5-Coder-14B-Instruct-abliterated-Q3_K_L.gguf](https://huggingface.co/tensorblock/Qwen2.5-Coder-14B-Instruct-abliterated-GGUF/blob/main/Qwen2.5-Coder-14B-Instruct-abliterated-Q3_K_L.gguf) | Q3_K_L | 7.925 GB | small, substantial quality loss |
| [Qwen2.5-Coder-14B-Instruct-abliterated-Q4_0.gguf](https://huggingface.co/tensorblock/Qwen2.5-Coder-14B-Instruct-abliterated-GGUF/blob/main/Qwen2.5-Coder-14B-Instruct-abliterated-Q4_0.gguf) | Q4_0 | 8.518 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Qwen2.5-Coder-14B-Instruct-abliterated-Q4_K_S.gguf](https://huggingface.co/tensorblock/Qwen2.5-Coder-14B-Instruct-abliterated-GGUF/blob/main/Qwen2.5-Coder-14B-Instruct-abliterated-Q4_K_S.gguf) | Q4_K_S | 8.573 GB | small, greater quality loss |
| [Qwen2.5-Coder-14B-Instruct-abliterated-Q4_K_M.gguf](https://huggingface.co/tensorblock/Qwen2.5-Coder-14B-Instruct-abliterated-GGUF/blob/main/Qwen2.5-Coder-14B-Instruct-abliterated-Q4_K_M.gguf) | Q4_K_M | 8.988 GB | medium, balanced quality - recommended |
| [Qwen2.5-Coder-14B-Instruct-abliterated-Q5_0.gguf](https://huggingface.co/tensorblock/Qwen2.5-Coder-14B-Instruct-abliterated-GGUF/blob/main/Qwen2.5-Coder-14B-Instruct-abliterated-Q5_0.gguf) | Q5_0 | 10.267 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Qwen2.5-Coder-14B-Instruct-abliterated-Q5_K_S.gguf](https://huggingface.co/tensorblock/Qwen2.5-Coder-14B-Instruct-abliterated-GGUF/blob/main/Qwen2.5-Coder-14B-Instruct-abliterated-Q5_K_S.gguf) | Q5_K_S | 10.267 GB | large, low quality loss - recommended |
| [Qwen2.5-Coder-14B-Instruct-abliterated-Q5_K_M.gguf](https://huggingface.co/tensorblock/Qwen2.5-Coder-14B-Instruct-abliterated-GGUF/blob/main/Qwen2.5-Coder-14B-Instruct-abliterated-Q5_K_M.gguf) | Q5_K_M | 10.509 GB | large, very low quality loss - recommended |
| [Qwen2.5-Coder-14B-Instruct-abliterated-Q6_K.gguf](https://huggingface.co/tensorblock/Qwen2.5-Coder-14B-Instruct-abliterated-GGUF/blob/main/Qwen2.5-Coder-14B-Instruct-abliterated-Q6_K.gguf) | Q6_K | 12.125 GB | very large, extremely low quality loss |
| [Qwen2.5-Coder-14B-Instruct-abliterated-Q8_0.gguf](https://huggingface.co/tensorblock/Qwen2.5-Coder-14B-Instruct-abliterated-GGUF/blob/main/Qwen2.5-Coder-14B-Instruct-abliterated-Q8_0.gguf) | Q8_0 | 15.702 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face Hub command-line client:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/Qwen2.5-Coder-14B-Instruct-abliterated-GGUF --include "Qwen2.5-Coder-14B-Instruct-abliterated-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/Qwen2.5-Coder-14B-Instruct-abliterated-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
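### Python usage
For loading a downloaded file programmatically, a minimal `llama-cpp-python` sketch (using the Q4_K_M file from the table above as an example):

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch one quant from the table above, then load it with llama-cpp-python.
path = hf_hub_download(
    repo_id="tensorblock/Qwen2.5-Coder-14B-Instruct-abliterated-GGUF",
    filename="Qwen2.5-Coder-14B-Instruct-abliterated-Q4_K_M.gguf",
)
llm = Llama(model_path=path, n_ctx=4096)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a Python function that reverses a string."}]
)
print(out["choices"][0]["message"]["content"])
```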
|
sail-rvc/chanyeol | sail-rvc | "2023-07-14T07:36:25Z" | 3 | 0 | transformers | [
"transformers",
"rvc",
"sail-rvc",
"audio-to-audio",
"endpoints_compatible",
"region:us"
] | audio-to-audio | "2023-07-14T07:35:54Z" |
---
pipeline_tag: audio-to-audio
tags:
- rvc
- sail-rvc
---
# chanyeol
## RVC Model

This model repo was automatically generated.
Date: 2023-07-14 07:36:24
Bot Name: juuxnscrap
Model Type: RVC
Source: https://huggingface.co/juuxn/RVCModels/
Reason: Converting into loadable format for https://github.com/chavinlo/rvc-runpod
|
MaziyarPanahi/ruadapt_qwen2.5_3B_ext_u48_instruct_v4-GGUF | MaziyarPanahi | "2024-10-30T01:38:33Z" | 69 | 0 | null | [
"gguf",
"mistral",
"quantized",
"2-bit",
"3-bit",
"4-bit",
"5-bit",
"6-bit",
"8-bit",
"GGUF",
"text-generation",
"base_model:RefalMachine/ruadapt_qwen2.5_3B_ext_u48_instruct_v4",
"base_model:quantized:RefalMachine/ruadapt_qwen2.5_3B_ext_u48_instruct_v4",
"region:us",
"conversational"
] | text-generation | "2024-10-30T01:29:00Z" | ---
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- text-generation
- text-generation
model_name: ruadapt_qwen2.5_3B_ext_u48_instruct_v4-GGUF
base_model: RefalMachine/ruadapt_qwen2.5_3B_ext_u48_instruct_v4
inference: false
model_creator: RefalMachine
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# [MaziyarPanahi/ruadapt_qwen2.5_3B_ext_u48_instruct_v4-GGUF](https://huggingface.co/MaziyarPanahi/ruadapt_qwen2.5_3B_ext_u48_instruct_v4-GGUF)
- Model creator: [RefalMachine](https://huggingface.co/RefalMachine)
- Original model: [RefalMachine/ruadapt_qwen2.5_3B_ext_u48_instruct_v4](https://huggingface.co/RefalMachine/ruadapt_qwen2.5_3B_ext_u48_instruct_v4)
## Description
[MaziyarPanahi/ruadapt_qwen2.5_3B_ext_u48_instruct_v4-GGUF](https://huggingface.co/MaziyarPanahi/ruadapt_qwen2.5_3B_ext_u48_instruct_v4-GGUF) contains GGUF format model files for [RefalMachine/ruadapt_qwen2.5_3B_ext_u48_instruct_v4](https://huggingface.co/RefalMachine/ruadapt_qwen2.5_3B_ext_u48_instruct_v4).
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
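As a quick illustration of the llama-cpp-python route listed above, a minimal loading sketch (the quant filename follows this repo's naming pattern and is an assumption; context size and prompt are illustrative):
```python
from llama_cpp import Llama

# Load a locally downloaded quant from this repo (filename assumed).
llm = Llama(model_path="ruadapt_qwen2.5_3B_ext_u48_instruct_v4.Q4_K_M.gguf", n_ctx=2048)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello! Who are you?"}]
)
print(out["choices"][0]["message"]["content"])
```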
## Special thanks
🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible. |
eramth/flux-lora-collection | eramth | "2025-03-23T09:03:54Z" | 0 | 0 | null | [
"text-to-image",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:finetune:black-forest-labs/FLUX.1-dev",
"license:mit",
"region:us"
] | text-to-image | "2025-02-17T14:08:35Z" | ---
base_model:
- black-forest-labs/FLUX.1-dev
pipeline_tag: text-to-image
license: mit
---
# Poison
- Character, Protrait
- No Triggers
 |
hafidikhsan/Wav2vec2-large-robust-Pronounciation-Evaluation | hafidikhsan | "2023-06-26T16:15:45Z" | 297 | 5 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | audio-classification | "2023-06-26T16:11:43Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: Wav2vec2-large-robust-Pronounciation-Evaluation
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Wav2vec2-large-robust-Pronounciation-Evaluation
This model is a fine-tuned version of [facebook/wav2vec2-large-robust](https://huggingface.co/facebook/wav2vec2-large-robust) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7846
- Accuracy: 0.72
- F1: 0.72
- Precision: 0.72
- Recall: 0.72
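The card does not yet document usage, so here is a hedged inference sketch using the standard `transformers` audio-classification pipeline (the expected sampling rate and the label meanings are not documented in this card):
```python
from transformers import pipeline

# 16 kHz mono audio is a common wav2vec2 assumption; verify for this checkpoint.
classifier = pipeline(
    "audio-classification",
    model="hafidikhsan/Wav2vec2-large-robust-Pronounciation-Evaluation",
)
print(classifier("sample.wav"))
```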
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- lr_scheduler_warmup_steps: 100
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-----:|:---------:|:------:|
| 0.7468 | 1.0 | 500 | 0.9762 | 0.616 | 0.616 | 0.616 | 0.616 |
| 0.492 | 2.0 | 1000 | 1.1308 | 0.536 | 0.536 | 0.536 | 0.536 |
| 0.619 | 3.0 | 1500 | 0.7913 | 0.688 | 0.688 | 0.688 | 0.688 |
| 0.56 | 4.0 | 2000 | 0.8142 | 0.67 | 0.67 | 0.67 | 0.67 |
| 0.4561 | 5.0 | 2500 | 0.7452 | 0.708 | 0.708 | 0.708 | 0.708 |
| 0.5474 | 6.0 | 3000 | 0.7846 | 0.72 | 0.72 | 0.72 | 0.72 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
chimpster/results | chimpster | "2024-12-18T14:44:50Z" | 108 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"en",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-12-18T14:05:51Z" | ---
library_name: transformers
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: results
results: []
language:
- en
base_model:
- FacebookAI/roberta-base
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0060
- Accuracy: 0.9541
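Since usage is not documented yet, a hedged sketch with the standard text-classification pipeline (the label meanings are not described in this card):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="chimpster/results")
print(classifier("This is an example sentence."))
```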
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 224
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 113 | 0.0060 | 0.9531 |
| No log | 2.0 | 226 | 0.0060 | 0.9527 |
| No log | 3.0 | 339 | 0.0061 | 0.9504 |
| No log | 4.0 | 452 | 0.0060 | 0.9541 |
| 0.0007 | 5.0 | 565 | 0.0062 | 0.9541 |
| 0.0007 | 6.0 | 678 | 0.0060 | 0.9547 |
| 0.0007 | 7.0 | 791 | 0.0061 | 0.9541 |
| 0.0007 | 8.0 | 904 | 0.0061 | 0.9537 |
### Framework versions
- Transformers 4.46.3
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.20.3 |
abhayesian/LLama2_HarmBench_LAT | abhayesian | "2024-05-17T02:54:38Z" | 0 | 1 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-05-17T02:54:26Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
s-himmi/dqn-SpaceInvadersNoFrameskip-v4-classic | s-himmi | "2023-02-06T21:11:38Z" | 7 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2023-02-06T21:10:55Z" | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 621.50 +/- 255.71
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga s-himmi -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga s-himmi -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga s-himmi
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
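For use outside the zoo scripts, a minimal Python loading sketch (the checkpoint filename follows the usual huggingface_sb3 naming convention and is an assumption):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN

checkpoint = load_from_hub(
    repo_id="s-himmi/dqn-SpaceInvadersNoFrameskip-v4-classic",
    filename="dqn-SpaceInvadersNoFrameskip-v4.zip",  # assumed filename
)
model = DQN.load(checkpoint)
```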
|
ayoolaolafenwa/Masked-Language-Model | ayoolaolafenwa | "2022-10-28T19:43:20Z" | 10 | 4 | transformers | [
"transformers",
"pytorch",
"distilbert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2022-10-24T06:52:39Z" | This is a masked language model trained on the IMDB dataset by fine-tuning DistilBERT.
### Note:
I published a tutorial explaining how transformers work and how to train a masked language model with a transformer: https://olafenwaayoola.medium.com/the-concept-of-transformers-and-training-a-transformers-model-45a09ae7fb50
# REST API Code for Testing the Masked Language Model
Inference API Python code for testing the masked language model:
``` python
import requests
API_URL = "https://api-inference.huggingface.co/models/ayoolaolafenwa/Masked-Language-Model"
headers = {"Authorization": "Bearer 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"}
def query(payload):
response = requests.post(API_URL, headers=headers, json=payload)
return response.json()
output = query({
"inputs": "Washington DC is the [MASK] of USA.",
})
print(output[0]["sequence"])
```
Output
```
washington dc is the capital of usa.
```
It produces the correct output, *washington dc is the capital of usa.*
## Load the Masked Language Model with Transformers
You can easily load the Language model with transformers using this code.
``` python
from transformers import AutoTokenizer, AutoModelForMaskedLM
import torch
tokenizer = AutoTokenizer.from_pretrained("ayoolaolafenwa/Masked-Language-Model")
model = AutoModelForMaskedLM.from_pretrained("ayoolaolafenwa/Masked-Language-Model")
inputs = tokenizer("The internet [MASK] amazing.", return_tensors="pt")
with torch.no_grad():
logits = model(**inputs).logits
# retrieve index of [MASK]
mask_token_index = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0]
predicted_token_id = logits[0, mask_token_index].argmax(axis=-1)
output = tokenizer.decode(predicted_token_id)
print(output)
```
Output
```
is
```
It prints out the predicted masked word *is*. |
ShynBui/my_awesome_model | ShynBui | "2023-11-29T13:35:38Z" | 64 | 0 | transformers | [
"transformers",
"tf",
"roberta",
"text-classification",
"generated_from_keras_callback",
"base_model:vinai/phobert-base-v2",
"base_model:finetune:vinai/phobert-base-v2",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-11-27T18:36:35Z" | ---
base_model: vinai/phobert-base-v2
tags:
- generated_from_keras_callback
model-index:
- name: ShynBui/my_awesome_model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# ShynBui/my_awesome_model
This model is a fine-tuned version of [vinai/phobert-base-v2](https://huggingface.co/vinai/phobert-base-v2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0041
- Validation Loss: 0.0044
- Train Accuracy: 0.9984
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 20544, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.0408 | 0.0113 | 0.9961 | 0 |
| 0.0113 | 0.0124 | 0.9965 | 1 |
| 0.0041 | 0.0044 | 0.9984 | 2 |
### Framework versions
- Transformers 4.35.2
- TensorFlow 2.14.0
- Datasets 2.15.0
- Tokenizers 0.15.0
|
MosRat/Gex_V1 | MosRat | "2025-03-28T08:12:13Z" | 2 | 0 | transformers | [
"transformers",
"safetensors",
"gex",
"text-generation",
"conversational",
"custom_code",
"zh",
"en",
"dataset:MosRat2333/ZhEn-latex-ocr",
"dataset:MixTex/Pseudo-Latex-ZhEn-1",
"dataset:linxy/LaTeX_OCR",
"dataset:unsloth/LaTeX_OCR",
"arxiv:1910.09700",
"base_model:Qwen/Qwen2.5-1.5B",
"base_model:finetune:Qwen/Qwen2.5-1.5B",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | text-generation | "2025-03-28T07:54:42Z" | ---
library_name: transformers
license: apache-2.0
datasets:
- MosRat2333/ZhEn-latex-ocr
- MixTex/Pseudo-Latex-ZhEn-1
- linxy/LaTeX_OCR
- unsloth/LaTeX_OCR
language:
- zh
- en
base_model:
- stepfun-ai/GOT-OCR2_0
- Qwen/Qwen2.5-1.5B
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Triangle104/DeepHermes-3-Llama-3-8B-Preview-Q4_K_S-GGUF | Triangle104 | "2025-02-14T11:01:08Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"Llama-3",
"instruct",
"finetune",
"chatml",
"gpt4",
"synthetic data",
"distillation",
"function calling",
"json mode",
"axolotl",
"roleplaying",
"chat",
"reasoning",
"r1",
"vllm",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:NousResearch/DeepHermes-3-Llama-3-8B-Preview",
"base_model:quantized:NousResearch/DeepHermes-3-Llama-3-8B-Preview",
"license:llama3",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-02-14T10:57:17Z" | ---
language:
- en
license: llama3
tags:
- Llama-3
- instruct
- finetune
- chatml
- gpt4
- synthetic data
- distillation
- function calling
- json mode
- axolotl
- roleplaying
- chat
- reasoning
- r1
- vllm
- llama-cpp
- gguf-my-repo
base_model: NousResearch/DeepHermes-3-Llama-3-8B-Preview
widget:
- example_title: Hermes 3
messages:
- role: system
content: You are a sentient, superintelligent artificial general intelligence,
here to teach and assist me.
- role: user
content: What is the meaning of life?
library_name: transformers
model-index:
- name: DeepHermes-3-Llama-3.1-8B
results: []
---
# Triangle104/DeepHermes-3-Llama-3-8B-Preview-Q4_K_S-GGUF
This model was converted to GGUF format from [`NousResearch/DeepHermes-3-Llama-3-8B-Preview`](https://huggingface.co/NousResearch/DeepHermes-3-Llama-3-8B-Preview) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/NousResearch/DeepHermes-3-Llama-3-8B-Preview) for more details on the model.
---
DeepHermes 3 Preview is the latest version of our flagship Hermes series of LLMs by Nous Research, and one of the first models in the world to unify Reasoning (long chains of thought that improve answer accuracy) and normal LLM response modes into one model. We have also improved LLM annotation, judgement, and function calling.
DeepHermes 3 Preview is one of the first LLMs to unify both "intuitive", traditional-mode responses and long chain-of-thought reasoning responses into a single model, toggled by a system prompt.
Hermes 3, the predecessor of DeepHermes 3, is a generalist language model with many improvements over Hermes 2, including advanced agentic capabilities, much better roleplaying, reasoning, multi-turn conversation, long context coherence, and improvements across the board.
The ethos of the Hermes series of models is focused on aligning LLMs to the user, with powerful steering capabilities and control given to the end user.
This is a preview Hermes with early reasoning capabilities, distilled from R1 across a variety of tasks that benefit from reasoning and objectivity. Some quirks may be discovered! Please let us know any interesting findings or issues you discover!
Note: To toggle REASONING ON, you must use the following system prompt:
You are a deep thinking AI, you may use extremely long chains of thought to deeply consider the problem and deliberate with yourself via systematic reasoning processes to help come to a correct solution prior to answering. You should enclose your thoughts and internal monologue inside <think> </think> tags, and then provide your solution or response to the problem.
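For example, with llama-cpp-python the toggle looks roughly like this (a sketch; the quant filename matches the CLI examples below, while context size and the question are illustrative):
```python
from llama_cpp import Llama

# Verbatim reasoning-mode system prompt from this card.
REASONING_PROMPT = (
    "You are a deep thinking AI, you may use extremely long chains of thought "
    "to deeply consider the problem and deliberate with yourself via systematic "
    "reasoning processes to help come to a correct solution prior to answering. "
    "You should enclose your thoughts and internal monologue inside <think> "
    "</think> tags, and then provide your solution or response to the problem."
)

llm = Llama(model_path="deephermes-3-llama-3-8b-preview-q4_k_s.gguf", n_ctx=4096)
out = llm.create_chat_completion(messages=[
    {"role": "system", "content": REASONING_PROMPT},
    {"role": "user", "content": "Which is larger, 9.9 or 9.11?"},
])
print(out["choices"][0]["message"]["content"])
```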
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/DeepHermes-3-Llama-3-8B-Preview-Q4_K_S-GGUF --hf-file deephermes-3-llama-3-8b-preview-q4_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/DeepHermes-3-Llama-3-8B-Preview-Q4_K_S-GGUF --hf-file deephermes-3-llama-3-8b-preview-q4_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with other hardware-specific flags (e.g., LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/DeepHermes-3-Llama-3-8B-Preview-Q4_K_S-GGUF --hf-file deephermes-3-llama-3-8b-preview-q4_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/DeepHermes-3-Llama-3-8B-Preview-Q4_K_S-GGUF --hf-file deephermes-3-llama-3-8b-preview-q4_k_s.gguf -c 2048
```
|
llmware/bling-stablelm-3b-gguf | llmware | "2024-03-01T09:46:47Z" | 26 | 6 | transformers | [
"transformers",
"gguf",
"stablelm_epoch",
"license:cc-by-sa-4.0",
"endpoints_compatible",
"region:us"
] | null | "2024-02-26T09:25:09Z" | ---
license: cc-by-sa-4.0
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
**bling-stablelm-3b-gguf** is a quantized version of BLING Stable-LM 3B, with 4_K_M GGUF quantization, providing a fast, small inference implementation for use on CPUs.
[**bling-stablelm-3b**](https://huggingface.co/llmware/bling-stable-lm-3b-4e1t-v0) is a fact-based question-answering model, optimized for complex business documents.
To pull the model via API:
```python
from huggingface_hub import snapshot_download

snapshot_download("llmware/bling-stablelm-3b-gguf", local_dir="/path/on/your/machine/", local_dir_use_symlinks=False)
```
Load in your favorite GGUF inference engine, or try with llmware as follows:
```python
from llmware.models import ModelCatalog

model = ModelCatalog().load_model("llmware/bling-stablelm-3b-gguf")
# query is your question string; text_sample is the source passage to ground the answer
response = model.inference(query, add_context=text_sample, add_prompt_engineering="default_with_context")
```
Note: please review [**config.json**](https://huggingface.co/llmware/bling-stablelm-3b-gguf/blob/main/config.json) in the repository for prompt wrapping information, details on the model, and full test set.
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** llmware
- **Model type:** GGUF
- **Language(s) (NLP):** English
- **License:** CC-BY-SA-4.0
- **Quantized from model:** [llmware/bling-stablelm-3b](https://huggingface.co/llmware/bling-stable-lm-3b-4e1t-v0/)
## Model Card Contact
Darren Oberst & llmware team |
NITINNANNAPANENI/Ll | NITINNANNAPANENI | "2023-03-18T13:59:26Z" | 0 | 0 | null | [
"code",
"en",
"region:us"
] | null | "2023-03-17T22:37:06Z" | ---
language:
- en
tags:
- code
---
```python
# -*- coding: utf-8 -*-
"""model.ipynb
Automatically generated by Colaboratory.
Original file is located at
https://colab.research.google.com/drive/1hA9Xz_VjzMVj66qS_j3A5dKcGkAfScKM
"""
# Run in a notebook/shell cell first: pip install pycaret
from scipy import stats
import numpy as np
import pandas as pd
import pickle
import matplotlib.pyplot as plt
# create fictitious data set
simple_life_dataset = pd.DataFrame({'Age':[0, 60], 'Life Expectancy':[90, 30]})
simple_life_dataset.head()
import numpy as np
from scipy import stats
slope, intercept, r_value, p_value, std_err = stats.linregress(simple_life_dataset['Age'],simple_life_dataset['Life Expectancy'])
print('intercept: ', intercept)
print('slope: ', slope)
np.ceil(slope * 20 + intercept)
fig, axes = plt.subplots(figsize=(5,5))
x = [0,20,60]
y = [90, 70, 30]
axes.plot(x,y, color='blue', linestyle='--', marker='o')
fig.suptitle('Life Expectancy')
axes.set_xlabel('age')
axes.set_xlim([-5,100])
axes.set_ylabel('life_expectancy')
axes.set_ylim([0,100])
plt.grid()
plt.show()
# load WHO longevity data
# http://apps.who.int/gho/data/node.main.688
who_list = pd.read_csv('/content/drive/MyDrive/WHOSIS_000001,WHOSIS_000015.csv')
# save a local copy of the data set for our Flask prototype later on
who_list.to_csv('WHOSIS_000001,WHOSIS_000015.csv')
# Keep only useful features and fix the case display of country text
who_list = who_list[['GHO (DISPLAY)', 'YEAR (CODE)' , 'COUNTRY (DISPLAY)', 'SEX (DISPLAY)', 'Numeric']]
who_list['COUNTRY (DISPLAY)'] = [ctry.title() for ctry in who_list['COUNTRY (DISPLAY)'].values]
# print a few rows
who_list[who_list['COUNTRY (DISPLAY)']=='France'].head(10)
country = 'United States Of America'
sex = 'Male'
# pull latest entries for birth and 60 years for a country and gender
sub_set = who_list[who_list['COUNTRY (DISPLAY)'].str.startswith(country, na=False)]
sub_set = sub_set[sub_set['SEX (DISPLAY)'] == sex]
# sort by year in descending order to work with the latest read
sub_set = sub_set.sort_values('YEAR (CODE)', ascending=False)
sub_set_birth = sub_set[sub_set['GHO (DISPLAY)'] == 'Life expectancy at birth (years)']
sub_set_60 = sub_set[sub_set['GHO (DISPLAY)'] == 'Life expectancy at age 60 (years)']
print('sub_set_birth:')
print(sub_set_birth.head(5))
print('sub_set_60:')
print(sub_set_60.head(5))
# create data set with both points as shown in first example
lf_at_birth = sub_set_birth['Numeric'].values[0]
lf_at_60 = sub_set_60['Numeric'].values[0]
# let's organize our data and plot
age = [0,60]
life_expectancy = [lf_at_birth, lf_at_60]
fig, axes = plt.subplots(figsize=(5,5))
x = age
y = life_expectancy
axes.plot(x,y, color='blue', linestyle='--', marker='o')
fig.suptitle('Life Expectancy')
axes.set_xlabel('age')
axes.set_xlim([-5,100])
axes.set_ylabel('life expectancy')
axes.set_ylim([0,100])
plt.grid()
plt.show()
# model
slope, intercept, r_value, p_value, std_err = stats.linregress(age, life_expectancy)
print('intercept: ', intercept)
print('slope: ', slope)
# predict life expectancy for a 49-year-old male in the USA:
np.ceil(slope * 49 + intercept)
def get_life_expectancy(age, country, sex):
# pull latest entries for birth and 60 years
sub_set = who_list[who_list['COUNTRY (DISPLAY)'].str.startswith(country, na=False)]
sub_set = sub_set[sub_set['SEX (DISPLAY)'] == sex]
sub_set = sub_set.sort_values('YEAR (CODE)', ascending=False)
sub_set_birth = sub_set[sub_set['GHO (DISPLAY)'] == 'Life expectancy at birth (years)']
sub_set_60 = sub_set[sub_set['GHO (DISPLAY)'] == 'Life expectancy at age 60 (years)']
# not all combinations exist, so check that we have data for both
if len(sub_set_birth['Numeric']) > 0 and len(sub_set_60['Numeric']) > 0:
# create data set with both points as shown in first example
lf_at_birth = sub_set_birth['Numeric'].values[0]
lf_at_60 = sub_set_60['Numeric'].values[0]
# model
slope, intercept, r_value, p_value, std_err = stats.linregress([0,60],[lf_at_birth, lf_at_60])
# predict for the age variable
return(np.ceil(slope * age + intercept))
else:
return None
list(set(who_list['COUNTRY (DISPLAY)']))[0:10]
# test the function out using a 22-year-old Japanese female:
get_life_expectancy(22, 'Japan', 'Female')
get_life_expectancy(22,'Pakistan','Female')
get_life_expectancy(21,'India','Male')
missing_values_count = who_list.isnull().sum()
print(missing_values_count)
get_life_expectancy(80,'Pakistan','Female')
# Note: pickling a plain function stores it by reference; the loading
# environment must have the same function defined or importable.
pickle.dump(get_life_expectancy, open('model.pkl', 'wb'))
model = pickle.load(open('model.pkl', 'rb'))
```
|
areegtarek/patientcommunication-8bit | areegtarek | "2024-02-07T07:17:24Z" | 4 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2024-02-07T07:13:36Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
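Until the authors fill in this section, a hedged loading sketch based only on the repo tags (mistral causal LM stored with 8-bit bitsandbytes weights):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "areegtarek/patientcommunication-8bit"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# A serialized bitsandbytes quantization config, if present in the repo,
# is picked up automatically by from_pretrained.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Hello, doctor.", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=50)[0]))
```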
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
machinev/idefics-9b-LPU_model | machinev | "2024-05-27T08:51:56Z" | 10 | 0 | transformers | [
"transformers",
"safetensors",
"idefics",
"image-text-to-text",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | image-text-to-text | "2024-05-27T06:06:00Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
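In the absence of author-provided code, a hedged sketch following the generic IDEFICS usage pattern in `transformers` (the image URL, prompt format, and dtype are illustrative assumptions):
```python
import torch
from transformers import AutoProcessor, IdeficsForVisionText2Text

checkpoint = "machinev/idefics-9b-LPU_model"
processor = AutoProcessor.from_pretrained(checkpoint)
model = IdeficsForVisionText2Text.from_pretrained(
    checkpoint, torch_dtype=torch.bfloat16, device_map="auto"
)

# IDEFICS interleaves text and images in a single prompt list.
prompts = [[
    "User: What is in this image?",
    "https://example.com/image.jpg",  # replace with a real image URL
    "<end_of_utterance>\nAssistant:",
]]
inputs = processor(prompts, return_tensors="pt").to(model.device)
ids = model.generate(**inputs, max_new_tokens=64)
print(processor.batch_decode(ids, skip_special_tokens=True)[0])
```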
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Corbanp/bert-finetuned-ner | Corbanp | "2024-03-05T03:24:14Z" | 5 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2024-03-05T03:13:41Z" | ---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0633
- Precision: 0.9356
- Recall: 0.9512
- F1: 0.9433
- Accuracy: 0.9865
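For a quick check of the checkpoint, a hedged sketch with the standard token-classification pipeline (the aggregation strategy is a common default, not a recommendation from this card):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Corbanp/bert-finetuned-ner",
    aggregation_strategy="simple",
)
print(ner("My name is Clara and I live in Berkeley."))
```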
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0742 | 1.0 | 1756 | 0.0678 | 0.9049 | 0.9334 | 0.9189 | 0.9817 |
| 0.0353 | 2.0 | 3512 | 0.0629 | 0.9342 | 0.9468 | 0.9405 | 0.9855 |
| 0.0213 | 3.0 | 5268 | 0.0633 | 0.9356 | 0.9512 | 0.9433 | 0.9865 |
### Framework versions
- Transformers 4.38.1
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
Bluezealot/ppo-LunarLander-v2 | Bluezealot | "2024-04-27T04:32:50Z" | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2024-04-27T04:32:32Z" | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 261.16 +/- 23.40
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
AdamKasumovic/phi3-mini-4k-instruct-bactrian-x-af-25-percent-low-high-nv-embed | AdamKasumovic | "2024-06-20T02:25:31Z" | 4 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:unsloth/Phi-3-mini-4k-instruct-bnb-4bit",
"base_model:finetune:unsloth/Phi-3-mini-4k-instruct-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-06-20T02:22:41Z" | ---
base_model: unsloth/Phi-3-mini-4k-instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
---
# Uploaded model
- **Developed by:** AdamKasumovic
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Phi-3-mini-4k-instruct-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
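A hedged loading sketch via Unsloth, since the card credits it (sequence length and the 4-bit flag below are typical Unsloth settings, not values from this card):
```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="AdamKasumovic/phi3-mini-4k-instruct-bactrian-x-af-25-percent-low-high-nv-embed",
    max_seq_length=4096,  # assumption
    load_in_4bit=True,    # assumption, matching the 4-bit base model
)
FastLanguageModel.for_inference(model)  # enable fast inference mode
```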
|
FounderOfHuggingface/gpt2_gen_lora_r16_dbpedia_14_t75_e5_member_shadow38 | FounderOfHuggingface | "2024-01-10T06:35:10Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:openai-community/gpt2",
"base_model:adapter:openai-community/gpt2",
"region:us"
] | null | "2024-01-10T06:35:08Z" | ---
library_name: peft
base_model: gpt2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
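Given the PEFT metadata (gpt2 base model, LoRA adapter), a hedged loading sketch:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = PeftModel.from_pretrained(
    base,
    "FounderOfHuggingface/gpt2_gen_lora_r16_dbpedia_14_t75_e5_member_shadow38",
)
```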
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1 |
John6666/illustrioustoon-mix-v20-sdxl | John6666 | "2025-01-21T18:00:32Z" | 761 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"cartoon",
"toon",
"drawing",
"girls",
"illustrious",
"en",
"base_model:OnomaAIResearch/Illustrious-xl-early-release-v0",
"base_model:finetune:OnomaAIResearch/Illustrious-xl-early-release-v0",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | "2025-01-21T17:56:05Z" | ---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- cartoon
- toon
- drawing
- girls
- illustrious
base_model: OnomaAIResearch/Illustrious-xl-early-release-v0
---
Original model is [here](https://civitai.com/models/1074049/illustrioustoon-mix?modelVersionId=1303815).
This model created by [SpaceHamster](https://civitai.com/user/SpaceHamster).
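A hedged usage sketch with diffusers (dtype, step count, and guidance scale are illustrative, not the author's recommendations):
```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "John6666/illustrioustoon-mix-v20-sdxl", torch_dtype=torch.float16
).to("cuda")
image = pipe(
    "1girl, cartoon style, masterpiece, best quality",
    num_inference_steps=28, guidance_scale=7.0,
).images[0]
image.save("sample.png")
```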
|
mradermacher/Hypernova-GGUF | mradermacher | "2025-01-05T11:37:33Z" | 31 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:mergekit-community/Hypernova",
"base_model:quantized:mergekit-community/Hypernova",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-01-05T11:15:48Z" | ---
base_model: mergekit-community/Hypernova
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/mergekit-community/Hypernova
<!-- provided-files -->
Weighted/imatrix quants do not appear to be available (from me) at this time. If they do not show up within a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
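As a minimal sketch, one of the single-file quants from the table below can be downloaded and loaded with `llama-cpp-python` (assuming it is installed; the Q4_K_M choice and prompt are illustrative):

```python
# Hedged sketch: download one quant from this repo and run it locally.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="mradermacher/Hypernova-GGUF",
    filename="Hypernova.Q4_K_M.gguf",  # one of the files listed below
)
llm = Llama(model_path=gguf_path, n_ctx=4096)
out = llm("Q: What is a GGUF file?\nA:", max_tokens=64)
print(out["choices"][0]["text"])
```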
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Hypernova-GGUF/resolve/main/Hypernova.Q2_K.gguf) | Q2_K | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/Hypernova-GGUF/resolve/main/Hypernova.Q3_K_S.gguf) | Q3_K_S | 6.8 | |
| [GGUF](https://huggingface.co/mradermacher/Hypernova-GGUF/resolve/main/Hypernova.Q3_K_M.gguf) | Q3_K_M | 7.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Hypernova-GGUF/resolve/main/Hypernova.Q3_K_L.gguf) | Q3_K_L | 8.0 | |
| [GGUF](https://huggingface.co/mradermacher/Hypernova-GGUF/resolve/main/Hypernova.IQ4_XS.gguf) | IQ4_XS | 8.3 | |
| [GGUF](https://huggingface.co/mradermacher/Hypernova-GGUF/resolve/main/Hypernova.Q4_K_S.gguf) | Q4_K_S | 8.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Hypernova-GGUF/resolve/main/Hypernova.Q4_K_M.gguf) | Q4_K_M | 9.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Hypernova-GGUF/resolve/main/Hypernova.Q5_K_S.gguf) | Q5_K_S | 10.4 | |
| [GGUF](https://huggingface.co/mradermacher/Hypernova-GGUF/resolve/main/Hypernova.Q5_K_M.gguf) | Q5_K_M | 10.6 | |
| [GGUF](https://huggingface.co/mradermacher/Hypernova-GGUF/resolve/main/Hypernova.Q6_K.gguf) | Q6_K | 12.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Hypernova-GGUF/resolve/main/Hypernova.Q8_0.gguf) | Q8_0 | 15.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
jonas-luehrs/bert-base-uncased-MLP-scirepeval-chemistry-LARGE-textCLS-RHEOLOGY | jonas-luehrs | "2023-08-20T13:58:10Z" | 108 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:jonas-luehrs/bert-base-uncased-MLP-scirepeval-chemistry-LARGE",
"base_model:finetune:jonas-luehrs/bert-base-uncased-MLP-scirepeval-chemistry-LARGE",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-08-20T13:53:07Z" | ---
license: apache-2.0
base_model: jonas-luehrs/bert-base-uncased-MLP-scirepeval-chemistry-LARGE
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert-base-uncased-MLP-scirepeval-chemistry-LARGE-textCLS-RHEOLOGY
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-MLP-scirepeval-chemistry-LARGE-textCLS-RHEOLOGY
This model is a fine-tuned version of [jonas-luehrs/bert-base-uncased-MLP-scirepeval-chemistry-LARGE](https://huggingface.co/jonas-luehrs/bert-base-uncased-MLP-scirepeval-chemistry-LARGE) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5837
- Accuracy: 0.7702
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
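As a hedged sketch, the hyperparameters above map onto `transformers.TrainingArguments` roughly as follows; the `output_dir` is illustrative, and the listed Adam betas/epsilon are the Trainer defaults.

```python
# Hedged sketch of the training configuration described above.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="textCLS-RHEOLOGY",      # illustrative output directory
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 are the defaults.
)
```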
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.204 | 1.0 | 46 | 0.9131 | 0.6832 |
| 0.7606 | 2.0 | 92 | 0.6950 | 0.7391 |
| 0.5421 | 3.0 | 138 | 0.6210 | 0.7453 |
| 0.3908 | 4.0 | 184 | 0.5870 | 0.7578 |
| 0.3206 | 5.0 | 230 | 0.5837 | 0.7702 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
irzaevdev/Papito | irzaevdev | "2024-03-18T23:06:47Z" | 0 | 0 | fastai | [
"fastai",
"biology",
"text-to-image",
"ru",
"dataset:argilla/OpenHermesPreferences",
"license:openrail",
"region:us"
] | text-to-image | "2024-03-18T23:05:30Z" | ---
license: openrail
datasets:
- argilla/OpenHermesPreferences
language:
- ru
metrics:
- brier_score
- bertscore
library_name: fastai
pipeline_tag: text-to-image
tags:
- biology
--- |
Griffin88/phi-2 | Griffin88 | "2024-02-25T21:32:57Z" | 48 | 0 | transformers | [
"transformers",
"safetensors",
"phi",
"text-generation",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-02-25T21:29:29Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
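Since the card does not document usage, the following is a minimal, hedged sketch following the standard pattern for Phi-2-style checkpoints; `trust_remote_code=True` reflects the repo's `custom_code` tag, and the prompt is illustrative.

```python
# Hedged sketch: loading this checkpoint with transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Griffin88/phi-2"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, trust_remote_code=True
)

inputs = tokenizer("Write a haiku about autumn:", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```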
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
adjohn1313/wizard_sft_blackbox_rlhf_30kl | adjohn1313 | "2024-02-28T04:51:36Z" | 3 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"awq",
"region:us"
] | text-generation | "2024-02-27T19:26:41Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
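Since the card does not document usage, here is a hedged sketch for this 4-bit AWQ checkpoint: recent transformers versions can load AWQ weights directly, assuming `autoawq` and `accelerate` are installed. This is untested against this specific repo.

```python
# Hedged sketch: loading a 4-bit AWQ checkpoint with transformers
# (requires autoawq; device_map="auto" requires accelerate).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "adjohn1313/wizard_sft_blackbox_rlhf_30kl"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Explain RLHF in one sentence.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```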
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
atmikah/Reinforce-CartPole-v1 | atmikah | "2024-02-09T00:31:27Z" | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | "2024-02-09T00:31:18Z" | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
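Since the checkpoint format is not documented here, the following is a compact, hedged sketch of the REINFORCE algorithm itself (gymnasium + PyTorch); the course's implementation and this repo's weights may differ in details.

```python
# Hedged sketch: minimal REINFORCE (policy gradient) on CartPole-v1.
import gymnasium as gym
import torch
import torch.nn as nn

env = gym.make("CartPole-v1")
policy = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 2), nn.Softmax(dim=-1))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-2)
gamma = 0.99

for episode in range(500):
    obs, _ = env.reset()
    log_probs, rewards = [], []
    done = False
    while not done:
        probs = policy(torch.as_tensor(obs, dtype=torch.float32))
        dist = torch.distributions.Categorical(probs)
        action = dist.sample()
        log_probs.append(dist.log_prob(action))
        obs, reward, terminated, truncated, _ = env.step(action.item())
        rewards.append(reward)
        done = terminated or truncated
    # Discounted returns, computed backwards from the episode end.
    returns, g = [], 0.0
    for r in reversed(rewards):
        g = r + gamma * g
        returns.insert(0, g)
    returns = torch.as_tensor(returns)
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)
    loss = -(torch.stack(log_probs) * returns).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```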
|
felixbrock/reverse-unsloth-llama3-8b | felixbrock | "2024-04-24T19:01:44Z" | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-04-24T16:06:39Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** felixbrock
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
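A hedged sketch of loading this checkpoint for inference with Unsloth, mirroring how the 4-bit base model is typically used; the sequence length and prompt are illustrative.

```python
# Hedged sketch: inference with Unsloth's FastLanguageModel.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="felixbrock/reverse-unsloth-llama3-8b",
    max_seq_length=2048,   # illustrative
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enables Unsloth's faster inference mode

inputs = tokenizer("Hello, world!", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```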
|
lesso04/0bb052bf-e628-48cc-9694-00e9ce6d5143 | lesso04 | "2025-02-20T21:26:56Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Phi-3.5-mini-instruct",
"base_model:adapter:unsloth/Phi-3.5-mini-instruct",
"license:mit",
"region:us"
] | null | "2025-02-20T21:14:00Z" | ---
library_name: peft
license: mit
base_model: unsloth/Phi-3.5-mini-instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 0bb052bf-e628-48cc-9694-00e9ce6d5143
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<br>
# 0bb052bf-e628-48cc-9694-00e9ce6d5143
This model is a fine-tuned version of [unsloth/Phi-3.5-mini-instruct](https://huggingface.co/unsloth/Phi-3.5-mini-instruct) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 15.0754
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000204
- train_batch_size: 4
- eval_batch_size: 4
- seed: 40
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, via bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 50
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0002 | 1 | 7.2320 |
| 5.3868 | 0.0082 | 50 | 7.2946 |
| 6.4007 | 0.0164 | 100 | 8.0696 |
| 7.5671 | 0.0246 | 150 | 9.0704 |
| 11.9007 | 0.0329 | 200 | 15.0754 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Team-PIXEL/pixel-base-finetuned-squadv1 | Team-PIXEL | "2022-07-14T13:05:00Z" | 5 | 0 | transformers | [
"transformers",
"pytorch",
"pixel",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"endpoints_compatible",
"region:us"
] | question-answering | "2022-07-14T13:00:33Z" | ---
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: pixel-base-finetuned-squadv1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pixel-base-finetuned-squadv1
This model is a fine-tuned version of [Team-PIXEL/pixel-base](https://huggingface.co/Team-PIXEL/pixel-base) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 43
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 20000
- mixed_precision_training: Apex, opt level O1
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0
- Datasets 2.0.0
- Tokenizers 0.12.1
|