Dataset columns, with summary statistics recovered from the dataset viewer: modelId (string, 5 to 138 chars), author (string, 2 to 42 chars), last_modified (date, 2020-02-15 11:33:14 to 2025-04-15 06:29:46), downloads (int64, 0 to 223M), likes (int64, 0 to 11.7k), library_name (string, 426 classes), tags (sequence, 1 to 4.05k items), pipeline_tag (string, 54 classes), createdAt (date, 2022-03-02 23:29:04 to 2025-04-15 06:29:46), card (string, 11 chars to 1.01M chars).

| modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
|---|---|---|---|---|---|---|---|---|---|
erbacher/zephyr-7b-proimg-qlora-user | erbacher | "2024-02-29T13:43:43Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"alignment-handbook",
"generated_from_trainer",
"trl",
"sft",
"dataset:erbacher/proactive_image_generation_user",
"base_model:HuggingFaceH4/zephyr-7b-beta",
"base_model:adapter:HuggingFaceH4/zephyr-7b-beta",
"license:mit",
"4-bit",
"bitsandbytes",
"region:us"
] | null | "2024-02-29T09:43:53Z" | ---
license: mit
library_name: peft
tags:
- alignment-handbook
- generated_from_trainer
- trl
- sft
- generated_from_trainer
base_model: HuggingFaceH4/zephyr-7b-beta
datasets:
- erbacher/proactive_image_generation_user
model-index:
- name: zephyr-7b-proimg-qlora-user
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# zephyr-7b-proimg-qlora-user
This model is a fine-tuned version of [HuggingFaceH4/zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) on the erbacher/proactive_image_generation_user dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5822
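The usage sections below are stubs; as a starting point, here is a minimal, hedged sketch of attaching this QLoRA adapter to the 4-bit base model with `peft` and `bitsandbytes` (the prompt is only illustrative):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "HuggingFaceH4/zephyr-7b-beta"
adapter_id = "erbacher/zephyr-7b-proimg-qlora-user"

# Load the base model in 4-bit, matching the QLoRA training setup.
bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
base = AutoModelForCausalLM.from_pretrained(
    base_id, quantization_config=bnb_config, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(base_id)

# Attach the LoRA adapter weights from this repository.
model = PeftModel.from_pretrained(base, adapter_id)

inputs = tokenizer("Describe the image you want to generate:", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```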
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.5498 | 1.0 | 226 | 0.5578 |
| 0.4788 | 2.0 | 452 | 0.5577 |
| 0.3871 | 3.0 | 678 | 0.5822 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.2.1+cu118
- Datasets 2.14.6
- Tokenizers 0.15.2 |
NeginShams/cross_encoder_v2 | NeginShams | "2024-05-09T14:34:06Z" | 107 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"cross-encoder",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-05-09T14:33:26Z" | ---
library_name: transformers
tags:
- cross-encoder
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
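In the absence of an official snippet, a hedged sketch is given below; it assumes the repo exposes a standard XLM-RoBERTa sequence-classification head (suggested by the `xlm-roberta` and `text-classification` tags) and that the model scores a sentence pair cross-encoder style:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo = "NeginShams/cross_encoder_v2"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

# A cross-encoder scores a (query, passage) pair jointly in a single forward pass.
inputs = tokenizer("query text", "candidate passage", return_tensors="pt", truncation=True)
with torch.no_grad():
    print(model(**inputs).logits)
```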
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
alvdansen/haunted-linework | alvdansen | "2024-06-16T18:58:13Z" | 37 | 8 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | "2024-06-16T18:58:04Z" | ---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: A little boy with a baseball cap and a t-shirt
parameters:
negative_prompt: bad, messy, ugly
output:
url: images/ComfyUI_01897_.png
- text: A curious girl, exploring the backyard
parameters:
negative_prompt: bad, messy, ugly
output:
url: images/ComfyUI_01888_.png
- text: >-
A young princess with long, braided hair, wearing a simple dress and a
flower crown
parameters:
negative_prompt: bad, messy, ugly
output:
url: images/ComfyUI_01876_.png
- text: >-
a woman with blonde-brown hair and glasses, blue eyes, white background,
baggy band t-shirt
parameters:
negative_prompt: bad, messy, ugly
output:
url: images/ComfyUI_01454_.png
- text: A young man with a beard and a flannel shirt, holding a coffee
output:
url: images/ComfyUI_01903_.png
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: null
license: creativeml-openrail-m
---
# Haunted Linework
<Gallery />
## Model description
This is a first attempt at a somewhat difficult clean line/flat lay illustration style. This is definitely a model I plan to revisit, but for now enjoy!
This model is for research and fun; please contact me regarding commercial use.
## Download model
Weights for this model are available in Safetensors format.
[Download](/alvdansen/haunted-linework/tree/main) them in the Files & versions tab.
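The LoRA can also be loaded directly with `diffusers`; a minimal sketch on top of the SDXL base model (the prompt is taken from the widget examples above, and GPU settings are illustrative):
```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Load the LoRA weights from this repository on top of the SDXL base model.
pipe.load_lora_weights("alvdansen/haunted-linework")

image = pipe(
    "A curious girl, exploring the backyard",
    negative_prompt="bad, messy, ugly",
).images[0]
image.save("haunted_linework.png")
```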
|
mradermacher/Dendrite-L3-10B-GGUF | mradermacher | "2024-06-13T04:48:18Z" | 2 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Envoid/Dendrite-L3-10B",
"base_model:quantized:Envoid/Dendrite-L3-10B",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-06-13T02:42:24Z" | ---
base_model: Envoid/Dendrite-L3-10B
language:
- en
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Envoid/Dendrite-L3-10B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Dendrite-L3-10B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
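For a quick local test, recent versions of `llama-cpp-python` can pull a quant straight from the Hub; a small sketch, with the file name taken from the table below and the other parameters purely illustrative:
```python
from llama_cpp import Llama

# Download one of the quants listed below directly from the Hub and load it.
llm = Llama.from_pretrained(
    repo_id="mradermacher/Dendrite-L3-10B-GGUF",
    filename="Dendrite-L3-10B.Q4_K_S.gguf",  # "fast, recommended" in the table below
    n_ctx=4096,
)
out = llm("Write a haiku about dendrites.", max_tokens=64)
print(out["choices"][0]["text"])
```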
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Dendrite-L3-10B-GGUF/resolve/main/Dendrite-L3-10B.Q2_K.gguf) | Q2_K | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Dendrite-L3-10B-GGUF/resolve/main/Dendrite-L3-10B.IQ3_XS.gguf) | IQ3_XS | 4.3 | |
| [GGUF](https://huggingface.co/mradermacher/Dendrite-L3-10B-GGUF/resolve/main/Dendrite-L3-10B.Q3_K_S.gguf) | Q3_K_S | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/Dendrite-L3-10B-GGUF/resolve/main/Dendrite-L3-10B.IQ3_S.gguf) | IQ3_S | 4.5 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Dendrite-L3-10B-GGUF/resolve/main/Dendrite-L3-10B.IQ3_M.gguf) | IQ3_M | 4.7 | |
| [GGUF](https://huggingface.co/mradermacher/Dendrite-L3-10B-GGUF/resolve/main/Dendrite-L3-10B.Q3_K_M.gguf) | Q3_K_M | 5.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Dendrite-L3-10B-GGUF/resolve/main/Dendrite-L3-10B.Q3_K_L.gguf) | Q3_K_L | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/Dendrite-L3-10B-GGUF/resolve/main/Dendrite-L3-10B.IQ4_XS.gguf) | IQ4_XS | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Dendrite-L3-10B-GGUF/resolve/main/Dendrite-L3-10B.Q4_K_S.gguf) | Q4_K_S | 5.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Dendrite-L3-10B-GGUF/resolve/main/Dendrite-L3-10B.Q4_K_M.gguf) | Q4_K_M | 6.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Dendrite-L3-10B-GGUF/resolve/main/Dendrite-L3-10B.Q5_K_S.gguf) | Q5_K_S | 6.9 | |
| [GGUF](https://huggingface.co/mradermacher/Dendrite-L3-10B-GGUF/resolve/main/Dendrite-L3-10B.Q5_K_M.gguf) | Q5_K_M | 7.1 | |
| [GGUF](https://huggingface.co/mradermacher/Dendrite-L3-10B-GGUF/resolve/main/Dendrite-L3-10B.Q6_K.gguf) | Q6_K | 8.1 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Dendrite-L3-10B-GGUF/resolve/main/Dendrite-L3-10B.Q8_0.gguf) | Q8_0 | 10.5 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
genki10/Version19ASAP_FineTuningBERT_AugV19_k10_task1_organization_k10_k10_fold1 | genki10 | "2025-03-09T18:56:04Z" | 0 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2025-03-09T18:44:46Z" | ---
library_name: transformers
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: Version19ASAP_FineTuningBERT_AugV19_k10_task1_organization_k10_k10_fold1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Version19ASAP_FineTuningBERT_AugV19_k10_task1_organization_k10_k10_fold1
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7782
- Qwk: 0.6039
- Mse: 0.7781
- Rmse: 0.8821
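The Qwk/MSE/RMSE metrics suggest a single-output regression head; assuming the standard sequence-classification API with one label, scoring an essay would look roughly like this (a sketch, not an official snippet):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo = "genki10/Version19ASAP_FineTuningBERT_AugV19_k10_task1_organization_k10_k10_fold1"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

inputs = tokenizer("An essay to score...", return_tensors="pt", truncation=True)
with torch.no_grad():
    # A regression head returns a single logit: the predicted score.
    print(model(**inputs).logits.item())
```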
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:------:|
| No log | 1.0 | 3 | 8.3478 | -0.0002 | 8.3452 | 2.8888 |
| No log | 2.0 | 6 | 5.6976 | -0.0150 | 5.6955 | 2.3865 |
| No log | 3.0 | 9 | 4.3287 | 0.0238 | 4.3266 | 2.0800 |
| No log | 4.0 | 12 | 3.1515 | 0.0 | 3.1496 | 1.7747 |
| No log | 5.0 | 15 | 2.2591 | 0.1001 | 2.2574 | 1.5025 |
| No log | 6.0 | 18 | 1.7762 | 0.1531 | 1.7745 | 1.3321 |
| No log | 7.0 | 21 | 1.3074 | 0.0315 | 1.3059 | 1.1428 |
| No log | 8.0 | 24 | 0.9152 | 0.0106 | 0.9140 | 0.9560 |
| No log | 9.0 | 27 | 0.9255 | 0.0514 | 0.9242 | 0.9614 |
| No log | 10.0 | 30 | 0.7047 | 0.3215 | 0.7037 | 0.8389 |
| No log | 11.0 | 33 | 0.7508 | 0.2041 | 0.7499 | 0.8660 |
| No log | 12.0 | 36 | 1.0751 | 0.0802 | 1.0743 | 1.0365 |
| No log | 13.0 | 39 | 0.8153 | 0.3131 | 0.8147 | 0.9026 |
| No log | 14.0 | 42 | 1.4936 | 0.2166 | 1.4932 | 1.2220 |
| No log | 15.0 | 45 | 1.1947 | 0.3352 | 1.1944 | 1.0929 |
| No log | 16.0 | 48 | 0.9237 | 0.4671 | 0.9235 | 0.9610 |
| No log | 17.0 | 51 | 0.5517 | 0.6337 | 0.5515 | 0.7427 |
| No log | 18.0 | 54 | 0.6612 | 0.6515 | 0.6613 | 0.8132 |
| No log | 19.0 | 57 | 0.4687 | 0.7224 | 0.4686 | 0.6845 |
| No log | 20.0 | 60 | 2.0745 | 0.3891 | 2.0750 | 1.4405 |
| No log | 21.0 | 63 | 1.1602 | 0.5395 | 1.1606 | 1.0773 |
| No log | 22.0 | 66 | 0.4712 | 0.7088 | 0.4709 | 0.6862 |
| No log | 23.0 | 69 | 2.4266 | 0.3464 | 2.4271 | 1.5579 |
| No log | 24.0 | 72 | 0.9504 | 0.5970 | 0.9506 | 0.9750 |
| No log | 25.0 | 75 | 0.5562 | 0.6516 | 0.5557 | 0.7455 |
| No log | 26.0 | 78 | 0.8528 | 0.6004 | 0.8530 | 0.9236 |
| No log | 27.0 | 81 | 0.8569 | 0.5762 | 0.8570 | 0.9258 |
| No log | 28.0 | 84 | 1.3774 | 0.4773 | 1.3778 | 1.1738 |
| No log | 29.0 | 87 | 0.5879 | 0.6809 | 0.5879 | 0.7667 |
| No log | 30.0 | 90 | 0.8633 | 0.6075 | 0.8636 | 0.9293 |
| No log | 31.0 | 93 | 1.6303 | 0.4119 | 1.6306 | 1.2769 |
| No log | 32.0 | 96 | 0.7769 | 0.6040 | 0.7770 | 0.8815 |
| No log | 33.0 | 99 | 1.1635 | 0.5434 | 1.1638 | 1.0788 |
| No log | 34.0 | 102 | 0.9914 | 0.5757 | 0.9916 | 0.9958 |
| No log | 35.0 | 105 | 0.9376 | 0.5643 | 0.9377 | 0.9683 |
| No log | 36.0 | 108 | 1.2371 | 0.4862 | 1.2372 | 1.1123 |
| No log | 37.0 | 111 | 0.6587 | 0.6555 | 0.6586 | 0.8116 |
| No log | 38.0 | 114 | 1.0194 | 0.5597 | 1.0194 | 1.0097 |
| No log | 39.0 | 117 | 0.7782 | 0.6039 | 0.7781 | 0.8821 |
### Framework versions
- Transformers 4.47.0
- Pytorch 2.5.1+cu121
- Datasets 3.3.1
- Tokenizers 0.21.0
|
aroot/eng-fra-simcse_random_ssblu | aroot | "2023-07-06T18:11:02Z" | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | "2023-07-06T17:52:40Z" | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-fra-simcse_random_ssblu
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-fra-simcse_random_ssblu
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1512
- Bleu: 31.7456
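No usage snippet is provided; assuming the tokenizer shipped with the checkpoint follows the mBART-50 convention (English source, French target), inference would look roughly like this:
```python
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast

repo = "aroot/eng-fra-simcse_random_ssblu"
tokenizer = MBart50TokenizerFast.from_pretrained(repo, src_lang="en_XX", tgt_lang="fr_XX")
model = MBartForConditionalGeneration.from_pretrained(repo)

inputs = tokenizer("The weather is lovely today.", return_tensors="pt")
generated = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.lang_code_to_id["fr_XX"],  # force French output
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```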
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
RichardErkhov/h2oai_-_h2ogpt-4096-llama2-70b-chat-gguf | RichardErkhov | "2024-06-26T09:16:10Z" | 6 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us"
] | null | "2024-06-25T05:27:19Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
h2ogpt-4096-llama2-70b-chat - GGUF
- Model creator: https://huggingface.co/h2oai/
- Original model: https://huggingface.co/h2oai/h2ogpt-4096-llama2-70b-chat/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [h2ogpt-4096-llama2-70b-chat.Q2_K.gguf](https://huggingface.co/RichardErkhov/h2oai_-_h2ogpt-4096-llama2-70b-chat-gguf/blob/main/h2ogpt-4096-llama2-70b-chat.Q2_K.gguf) | Q2_K | 23.71GB |
| [h2ogpt-4096-llama2-70b-chat.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/h2oai_-_h2ogpt-4096-llama2-70b-chat-gguf/blob/main/h2ogpt-4096-llama2-70b-chat.IQ3_XS.gguf) | IQ3_XS | 26.37GB |
| [h2ogpt-4096-llama2-70b-chat.IQ3_S.gguf](https://huggingface.co/RichardErkhov/h2oai_-_h2ogpt-4096-llama2-70b-chat-gguf/blob/main/h2ogpt-4096-llama2-70b-chat.IQ3_S.gguf) | IQ3_S | 27.86GB |
| [h2ogpt-4096-llama2-70b-chat.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/h2oai_-_h2ogpt-4096-llama2-70b-chat-gguf/blob/main/h2ogpt-4096-llama2-70b-chat.Q3_K_S.gguf) | Q3_K_S | 27.86GB |
| [h2ogpt-4096-llama2-70b-chat.IQ3_M.gguf](https://huggingface.co/RichardErkhov/h2oai_-_h2ogpt-4096-llama2-70b-chat-gguf/blob/main/h2ogpt-4096-llama2-70b-chat.IQ3_M.gguf) | IQ3_M | 28.82GB |
| [h2ogpt-4096-llama2-70b-chat.Q3_K.gguf](https://huggingface.co/RichardErkhov/h2oai_-_h2ogpt-4096-llama2-70b-chat-gguf/blob/main/h2ogpt-4096-llama2-70b-chat.Q3_K.gguf) | Q3_K | 30.99GB |
| [h2ogpt-4096-llama2-70b-chat.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/h2oai_-_h2ogpt-4096-llama2-70b-chat-gguf/blob/main/h2ogpt-4096-llama2-70b-chat.Q3_K_M.gguf) | Q3_K_M | 30.99GB |
| [h2ogpt-4096-llama2-70b-chat.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/h2oai_-_h2ogpt-4096-llama2-70b-chat-gguf/blob/main/h2ogpt-4096-llama2-70b-chat.Q3_K_L.gguf) | Q3_K_L | 33.67GB |
| [h2ogpt-4096-llama2-70b-chat.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/h2oai_-_h2ogpt-4096-llama2-70b-chat-gguf/blob/main/h2ogpt-4096-llama2-70b-chat.IQ4_XS.gguf) | IQ4_XS | 34.64GB |
| [h2ogpt-4096-llama2-70b-chat.Q4_0.gguf](https://huggingface.co/RichardErkhov/h2oai_-_h2ogpt-4096-llama2-70b-chat-gguf/blob/main/h2ogpt-4096-llama2-70b-chat.Q4_0.gguf) | Q4_0 | 36.2GB |
| [h2ogpt-4096-llama2-70b-chat.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/h2oai_-_h2ogpt-4096-llama2-70b-chat-gguf/blob/main/h2ogpt-4096-llama2-70b-chat.IQ4_NL.gguf) | IQ4_NL | 36.55GB |
| [h2ogpt-4096-llama2-70b-chat.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/h2oai_-_h2ogpt-4096-llama2-70b-chat-gguf/blob/main/h2ogpt-4096-llama2-70b-chat.Q4_K_S.gguf) | Q4_K_S | 36.55GB |
| [h2ogpt-4096-llama2-70b-chat.Q4_K.gguf](https://huggingface.co/RichardErkhov/h2oai_-_h2ogpt-4096-llama2-70b-chat-gguf/tree/main/) | Q4_K | 38.58GB |
| [h2ogpt-4096-llama2-70b-chat.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/h2oai_-_h2ogpt-4096-llama2-70b-chat-gguf/tree/main/) | Q4_K_M | 38.58GB |
| [h2ogpt-4096-llama2-70b-chat.Q4_1.gguf](https://huggingface.co/RichardErkhov/h2oai_-_h2ogpt-4096-llama2-70b-chat-gguf/tree/main/) | Q4_1 | 40.2GB |
| [h2ogpt-4096-llama2-70b-chat.Q5_0.gguf](https://huggingface.co/RichardErkhov/h2oai_-_h2ogpt-4096-llama2-70b-chat-gguf/tree/main/) | Q5_0 | 44.2GB |
| [h2ogpt-4096-llama2-70b-chat.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/h2oai_-_h2ogpt-4096-llama2-70b-chat-gguf/tree/main/) | Q5_K_S | 44.2GB |
| [h2ogpt-4096-llama2-70b-chat.Q5_K.gguf](https://huggingface.co/RichardErkhov/h2oai_-_h2ogpt-4096-llama2-70b-chat-gguf/tree/main/) | Q5_K | 45.41GB |
| [h2ogpt-4096-llama2-70b-chat.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/h2oai_-_h2ogpt-4096-llama2-70b-chat-gguf/tree/main/) | Q5_K_M | 45.41GB |
| [h2ogpt-4096-llama2-70b-chat.Q5_1.gguf](https://huggingface.co/RichardErkhov/h2oai_-_h2ogpt-4096-llama2-70b-chat-gguf/tree/main/) | Q5_1 | 48.2GB |
| [h2ogpt-4096-llama2-70b-chat.Q6_K.gguf](https://huggingface.co/RichardErkhov/h2oai_-_h2ogpt-4096-llama2-70b-chat-gguf/tree/main/) | Q6_K | 52.7GB |
| [h2ogpt-4096-llama2-70b-chat.Q8_0.gguf](https://huggingface.co/RichardErkhov/h2oai_-_h2ogpt-4096-llama2-70b-chat-gguf/tree/main/) | Q8_0 | 68.26GB |
Original model description:
---
inference: false
language:
- en
license: llama2
model_type: llama
pipeline_tag: text-generation
tags:
- facebook
- meta
- pytorch
- llama
- llama-2
- h2ogpt
---
h2oGPT clone of [Meta's Llama 2 70B Chat](https://huggingface.co/meta-llama/Llama-2-70b-chat-hf).
Try it live on our [h2oGPT demo](https://gpt.h2o.ai) with side-by-side LLM comparisons and private document chat!
See how it compares to other models on our [LLM Leaderboard](https://evalgpt.ai/)!
See more at [H2O.ai](https://h2o.ai/)
## Model Architecture
```
LlamaForCausalLM(
(model): LlamaModel(
(embed_tokens): Embedding(32000, 8192, padding_idx=0)
(layers): ModuleList(
(0-79): 80 x LlamaDecoderLayer(
(self_attn): LlamaAttention(
(q_proj): Linear4bit(in_features=8192, out_features=8192, bias=False)
(k_proj): Linear4bit(in_features=8192, out_features=1024, bias=False)
(v_proj): Linear4bit(in_features=8192, out_features=1024, bias=False)
(o_proj): Linear4bit(in_features=8192, out_features=8192, bias=False)
(rotary_emb): LlamaRotaryEmbedding()
)
(mlp): LlamaMLP(
(gate_proj): Linear4bit(in_features=8192, out_features=28672, bias=False)
(up_proj): Linear4bit(in_features=8192, out_features=28672, bias=False)
(down_proj): Linear4bit(in_features=28672, out_features=8192, bias=False)
(act_fn): SiLUActivation()
)
(input_layernorm): LlamaRMSNorm()
(post_attention_layernorm): LlamaRMSNorm()
)
)
(norm): LlamaRMSNorm()
)
(lm_head): Linear(in_features=8192, out_features=32000, bias=False)
)
```
|
Naying0206/b2b-lora-ar | Naying0206 | "2024-04-13T18:52:42Z" | 5 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"arxiv:1910.09700",
"base_model:facebook/bart-base",
"base_model:adapter:facebook/bart-base",
"region:us"
] | null | "2024-04-12T01:36:23Z" | ---
library_name: peft
base_model: facebook/bart-base
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
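As a placeholder until the snippet is filled in, here is a hedged sketch that assumes this repo stores a LoRA adapter for the `facebook/bart-base` model named above (the input text is illustrative):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
from peft import PeftModel

base_id = "facebook/bart-base"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForSeq2SeqLM.from_pretrained(base_id)

# Attach the LoRA adapter from this repository to the BART base model.
model = PeftModel.from_pretrained(base, "Naying0206/b2b-lora-ar")

inputs = tokenizer("Some input text.", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0], skip_special_tokens=True))
```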
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.10.1.dev0 |
NasimB/guten-rarity-all-2p5k-log-rarity-all-sort | NasimB | "2023-07-15T11:10:36Z" | 3 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-07-15T09:18:12Z" | ---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: guten-rarity-all-2p5k-log-rarity-all-sort
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# guten-rarity-all-2p5k-log-rarity-all-sort
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3117
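A quick, illustrative way to sample from the model (not an official snippet):
```python
from transformers import pipeline

# Sample a short continuation from the fine-tuned GPT-2.
generator = pipeline("text-generation", model="NasimB/guten-rarity-all-2p5k-log-rarity-all-sort")
print(generator("Once upon a time", max_new_tokens=40)[0]["generated_text"])
```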
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.69 | 0.29 | 500 | 5.6272 |
| 5.3349 | 0.59 | 1000 | 5.1982 |
| 4.9818 | 0.88 | 1500 | 4.9441 |
| 4.7024 | 1.17 | 2000 | 4.7940 |
| 4.5531 | 1.47 | 2500 | 4.6766 |
| 4.4445 | 1.76 | 3000 | 4.5629 |
| 4.3064 | 2.05 | 3500 | 4.4888 |
| 4.12 | 2.35 | 4000 | 4.4409 |
| 4.0994 | 2.64 | 4500 | 4.3854 |
| 4.0596 | 2.93 | 5000 | 4.3289 |
| 3.8415 | 3.23 | 5500 | 4.3258 |
| 3.7949 | 3.52 | 6000 | 4.2992 |
| 3.7753 | 3.81 | 6500 | 4.2626 |
| 3.6705 | 4.11 | 7000 | 4.2631 |
| 3.5128 | 4.4 | 7500 | 4.2550 |
| 3.5022 | 4.69 | 8000 | 4.2439 |
| 3.4902 | 4.99 | 8500 | 4.2293 |
| 3.3248 | 5.28 | 9000 | 4.2426 |
| 3.3111 | 5.57 | 9500 | 4.2419 |
| 3.3138 | 5.87 | 10000 | 4.2408 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
modelmaker/melanie | modelmaker | "2023-07-05T05:26:44Z" | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"am",
"dataset:Open-Orca/OpenOrca",
"license:openrail",
"region:us"
] | text-to-image | "2023-07-05T05:15:40Z" | ---
license: openrail
datasets:
- Open-Orca/OpenOrca
language:
- am
metrics:
- accuracy
library_name: diffusers
pipeline_tag: text-to-image
--- |
vazish/all-MiniLM-L6-v2-fine-tuned | vazish | "2025-02-10T17:41:16Z" | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:49800",
"loss:CosineSimilarityLoss",
"arxiv:1908.10084",
"base_model:sentence-transformers/all-MiniLM-L6-v2",
"base_model:finetune:sentence-transformers/all-MiniLM-L6-v2",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | "2025-02-10T17:41:03Z" | ---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:429643
- loss:CosineSimilarityLoss
base_model: sentence-transformers/all-MiniLM-L6-v2
widget:
- source_sentence: Oracle Cloud - Infrastructure and Platform Services for Enterprises
sentences:
- PulseAudio - Ubuntu Wiki
- Documentation page not found - Read the Docs
- Dwarf Fortress beginner tips - Video Games on Sports Illustrated
- source_sentence: Suggest opt in User Test - Google Slides
sentences:
- ReleaseEngineering/TryServer - MozillaWiki
- Dwarf Fortress beginner tips - Video Games on Sports Illustrated
- Tutanota - Private Mailbox with End-to-End Encryption and Calendar
- source_sentence: https://portal.naviabenefits.com/part/prioritytasks.aspx
sentences:
- What to Expect - Pregnancy and Parenting Tips, Week-by-Week Guides
- Parents.com - Articles, Recipes, and Ideas for Family Activities
- Pinterest - Boards for Collecting and Sharing Inspiration on Any Topic
- source_sentence: Apple Music - Web Player
sentences:
- BMW Connected Drive - Home Assistant
- Mary Stewart Phillips (1862-1928) - Find a Grave Memorial
- Sky Sports - Football, Formula 1, Cricket, and More
- source_sentence: Tidal - High-Fidelity Music Streaming with Master Quality Audio
sentences:
- Walmart - Everyday Low Prices on Groceries, Electronics, and More
- Notion - Integrated Workspace for Notes, Tasks, Databases, and Wikis
- Ambient Dreams Playlist on Amazon Music
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- pearson_cosine
- spearman_cosine
model-index:
- name: SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2
results:
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: Unknown
type: unknown
metrics:
- type: pearson_cosine
value: 0.9822505655251419
name: Pearson Cosine
- type: spearman_cosine
value: 0.2607864200673379
name: Spearman Cosine
---
# SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) <!-- at revision fa97f6e7cb1a59073dff9e6b13e2715cf7475ac9 -->
- **Maximum Sequence Length:** 256 tokens
- **Output Dimensionality:** 384 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("vazish/all-MiniLM-L6-v2-fine-tuned_0")
# Run inference
sentences = [
'Tidal - High-Fidelity Music Streaming with Master Quality Audio',
'Walmart - Everyday Low Prices on Groceries, Electronics, and More',
'Notion - Integrated Workspace for Notes, Tasks, Databases, and Wikis',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Semantic Similarity
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| pearson_cosine | 0.9823 |
| **spearman_cosine** | **0.2608** |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 49,800 training samples
* Columns: <code>sentence_0</code>, <code>sentence_1</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | sentence_0 | sentence_1 | label |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:---------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 10 tokens</li><li>mean: 14.76 tokens</li><li>max: 21 tokens</li></ul> | <ul><li>min: 10 tokens</li><li>mean: 14.64 tokens</li><li>max: 21 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.04</li><li>max: 1.0</li></ul> |
* Samples:
| sentence_0 | sentence_1 | label |
|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------|:-----------------|
| <code>TripAdvisor - Hotel Reviews, Photos, and Travel Forums</code> | <code>Docker Hub - Container Image Repository for DevOps Environments</code> | <code>0.0</code> |
| <code>Mastodon - Decentralized Social Media for Niche Communities</code> | <code>Allrecipes - User-Submitted Recipes, Reviews, and Cooking Tips</code> | <code>0.0</code> |
| <code>YouTube Music - Music Videos, Official Albums, and Live Performances</code> | <code>ESPN - Sports News, Live Scores, Stats, and Highlights</code> | <code>0.0</code> |
* Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosinesimilarityloss) with these parameters:
```json
{
"loss_fct": "torch.nn.modules.loss.MSELoss"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 32
- `multi_dataset_batch_sampler`: round_robin
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 32
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 3
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin
</details>
### Training Logs
| Epoch | Step | Training Loss | spearman_cosine |
|:------:|:-----:|:-------------:|:---------------:|
| 0.0372 | 500 | 0.0218 | - |
| 0.0745 | 1000 | 0.0151 | - |
| 0.1117 | 1500 | 0.0113 | - |
| 0.1490 | 2000 | 0.0076 | - |
| 0.1862 | 2500 | 0.0063 | - |
| 0.2234 | 3000 | 0.0054 | - |
| 0.2607 | 3500 | 0.0045 | - |
| 0.2979 | 4000 | 0.0041 | - |
| 0.3351 | 4500 | 0.0027 | - |
| 0.3724 | 5000 | 0.0028 | - |
| 0.4096 | 5500 | 0.0026 | - |
| 0.4469 | 6000 | 0.0021 | - |
| 0.4841 | 6500 | 0.0019 | - |
| 0.5213 | 7000 | 0.0022 | - |
| 0.5586 | 7500 | 0.0017 | - |
| 0.5958 | 8000 | 0.0018 | - |
| 0.6331 | 8500 | 0.0015 | - |
| 0.6703 | 9000 | 0.0015 | - |
| 0.7075 | 9500 | 0.0018 | - |
| 0.7448 | 10000 | 0.0014 | - |
| 0.7820 | 10500 | 0.0017 | - |
| 0.8192 | 11000 | 0.0012 | - |
| 0.8565 | 11500 | 0.0014 | - |
| 0.8937 | 12000 | 0.001 | - |
| 0.9310 | 12500 | 0.0011 | - |
| 0.9682 | 13000 | 0.001 | - |
| 1.0054 | 13500 | 0.0009 | - |
| 1.0427 | 14000 | 0.0011 | - |
| 1.0799 | 14500 | 0.001 | - |
| 1.1172 | 15000 | 0.0009 | - |
| 1.1544 | 15500 | 0.0008 | - |
| 1.1916 | 16000 | 0.001 | - |
| 1.2289 | 16500 | 0.0011 | - |
| 1.2661 | 17000 | 0.0011 | - |
| 1.3033 | 17500 | 0.0006 | - |
| 1.3406 | 18000 | 0.0011 | - |
| 1.3778 | 18500 | 0.0008 | - |
| 1.4151 | 19000 | 0.0011 | - |
| 1.4523 | 19500 | 0.0009 | - |
| 1.4895 | 20000 | 0.0011 | - |
| 1.5268 | 20500 | 0.0009 | - |
| 1.5640 | 21000 | 0.0009 | - |
| 1.6013 | 21500 | 0.0008 | - |
| 1.6385 | 22000 | 0.0005 | - |
| 1.6757 | 22500 | 0.001 | - |
| 1.7130 | 23000 | 0.0008 | - |
| 1.7502 | 23500 | 0.0007 | - |
| 1.7874 | 24000 | 0.0007 | - |
| 1.8247 | 24500 | 0.0008 | - |
| 1.8619 | 25000 | 0.001 | - |
| 1.8992 | 25500 | 0.0009 | - |
| 1.9364 | 26000 | 0.0008 | - |
| 1.9736 | 26500 | 0.0009 | - |
| 2.0109 | 27000 | 0.0007 | - |
| 2.0481 | 27500 | 0.0006 | - |
| 2.0854 | 28000 | 0.0007 | - |
| 2.1226 | 28500 | 0.0006 | - |
| 2.1598 | 29000 | 0.0007 | - |
| 2.1971 | 29500 | 0.001 | - |
| 2.2343 | 30000 | 0.0006 | - |
| 2.2715 | 30500 | 0.0006 | - |
| 2.3088 | 31000 | 0.001 | - |
| 2.3460 | 31500 | 0.0007 | - |
| 2.3833 | 32000 | 0.0008 | - |
| 2.4205 | 32500 | 0.0006 | - |
| 2.4577 | 33000 | 0.0007 | - |
| 2.4950 | 33500 | 0.0007 | - |
| 2.5322 | 34000 | 0.001 | - |
| 2.5694 | 34500 | 0.0007 | - |
| 2.6067 | 35000 | 0.0007 | - |
| 2.6439 | 35500 | 0.0008 | - |
| 2.6812 | 36000 | 0.0007 | - |
| 2.7184 | 36500 | 0.0006 | - |
| 2.7556 | 37000 | 0.0007 | - |
| 2.7929 | 37500 | 0.0007 | - |
| 2.8301 | 38000 | 0.0005 | - |
| 2.8674 | 38500 | 0.0009 | - |
| 2.9046 | 39000 | 0.0006 | - |
| 2.9418 | 39500 | 0.0007 | - |
| 2.9791 | 40000 | 0.0008 | - |
| -1 | -1 | - | 0.2608 |
### Framework Versions
- Python: 3.11.11
- Sentence Transformers: 3.4.1
- Transformers: 4.48.2
- PyTorch: 2.5.1+cu124
- Accelerate: 1.3.0
- Datasets: 3.2.0
- Tokenizers: 0.21.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
adleme94/borges_clm-model | adleme94 | "2023-08-22T20:03:03Z" | 202 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-08-17T21:47:53Z" | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: borges_clm-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# borges_clm-model
This model is a fine-tuned version of [DeepESP/gpt2-spanish-medium](https://huggingface.co/DeepESP/gpt2-spanish-medium) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.7991
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 10 | 3.9138 |
| No log | 2.0 | 20 | 3.8214 |
| No log | 3.0 | 30 | 3.7991 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.2
|
IDEA-CCNL/Randeng-MegatronT5-770M | IDEA-CCNL | "2023-05-26T06:24:22Z" | 168 | 7 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"zh",
"arxiv:2209.02970",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | "2022-03-02T23:29:04Z" | ---
language:
- zh
license: apache-2.0
inference: false
---
# Randeng-MegatronT5-770M
- Main Page:[Fengshenbang](https://fengshenbang-lm.com/)
- Github: [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM)
## 简介 Brief Introduction
善于处理NLT任务,中文版的T5-large。
Good at solving NLT tasks; the Chinese version of T5-large.
## 模型分类 Model Taxonomy
| 需求 Demand | 任务 Task | 系列 Series | 模型 Model | 参数 Parameter | 额外 Extra |
| :----: | :----: | :----: | :----: | :----: | :----: |
| 通用 General | 自然语言转换 NLT | 燃灯 Randeng | MegatronT5 | 770M | 中文-Chinese |
## 模型信息 Model Information
为了得到一个大规模的中文版的T5,我们使用了Megatron-LM的方法和悟道语料库(180G版本)用于预训练。具体地,我们在预训练阶段中使用了[Megatron-LM](https://github.com/NVIDIA/Megatron-LM) 大概花费了16张A100约14天。
To obtain a large-scale Chinese T5, we used [Megatron-LM](https://github.com/NVIDIA/Megatron-LM) and the WuDao Corpora (180 GB version) for pre-training. The pre-training phase took about 14 days on 16 A100 GPUs.
## 使用 Usage
因为[transformers](https://github.com/huggingface/transformers)库中是没有Randeng-MegatronT5-770M相关的模型结构的,所以你可以在我们的[Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM)中找到并且运行代码。
Since the [transformers library](https://github.com/huggingface/transformers) does not include the Randeng-MegatronT5-770M architecture, you can find the model structure and run the code in [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM).
```shell
git clone https://github.com/IDEA-CCNL/Fengshenbang-LM.git
```
### 加载模型 Loading Models
```python
from fengshen import T5ForConditionalGeneration
from fengshen import T5Config
from fengshen import T5Tokenizer
tokenizer = T5Tokenizer.from_pretrained('IDEA-CCNL/Randeng-MegatronT5-770M')
config = T5Config.from_pretrained('IDEA-CCNL/Randeng-MegatronT5-770M')
model = T5ForConditionalGeneration.from_pretrained('IDEA-CCNL/Randeng-MegatronT5-770M')
```
## 引用 Citation
如果您在您的工作中使用了我们的模型,可以引用我们的[论文](https://arxiv.org/abs/2209.02970):
If you are using the resource for your work, please cite our [paper](https://arxiv.org/abs/2209.02970):
```text
@article{fengshenbang,
author = {Jiaxing Zhang and Ruyi Gan and Junjie Wang and Yuxiang Zhang and Lin Zhang and Ping Yang and Xinyu Gao and Ziwei Wu and Xiaoqun Dong and Junqing He and Jianheng Zhuo and Qi Yang and Yongfeng Huang and Xiayu Li and Yanghan Wu and Junyu Lu and Xinyu Zhu and Weifeng Chen and Ting Han and Kunhao Pan and Rui Wang and Hao Wang and Xiaojun Wu and Zhongshen Zeng and Chongpei Chen},
title = {Fengshenbang 1.0: Being the Foundation of Chinese Cognitive Intelligence},
journal = {CoRR},
volume = {abs/2209.02970},
year = {2022}
}
```
也可以引用我们的[网站](https://github.com/IDEA-CCNL/Fengshenbang-LM/):
You can also cite our [website](https://github.com/IDEA-CCNL/Fengshenbang-LM/):
```text
@misc{Fengshenbang-LM,
title={Fengshenbang-LM},
author={IDEA-CCNL},
year={2021},
howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}},
}
```
|
franckloic/ddpm-butterflies-128 | franckloic | "2023-08-26T13:37:19Z" | 0 | 0 | null | [
"tensorboard",
"license:creativeml-openrail-m",
"region:us"
] | null | "2023-08-26T12:28:52Z" | ---
license: creativeml-openrail-m
---
|
ChauNguyen23/distilbert-base-uncased-finetuned-imdb | ChauNguyen23 | "2022-07-07T02:54:46Z" | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2022-07-07T02:48:22Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4721
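A minimal, illustrative way to try the model with the fill-mask pipeline (the sentence is only an example):
```python
from transformers import pipeline

fill = pipeline("fill-mask", model="ChauNguyen23/distilbert-base-uncased-finetuned-imdb")
# Print the top predicted tokens for the masked position.
for pred in fill("This movie was an absolute [MASK]."):
    print(f"{pred['token_str']:>12}  {pred['score']:.3f}")
```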
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7086 | 1.0 | 157 | 2.4897 |
| 2.5796 | 2.0 | 314 | 2.4230 |
| 2.5269 | 3.0 | 471 | 2.4354 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Kiefels/dwayne-dibley-flux-v2 | Kiefels | "2025-02-11T14:17:15Z" | 65 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"flux",
"lora",
"template:sd-lora",
"fluxgym",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | "2025-01-25T21:46:04Z" | ---
tags:
- text-to-image
- flux
- lora
- diffusers
- template:sd-lora
- fluxgym
widget:
- output:
url: sample/dwayne-dibley-flux-v2_003360_00_20250125214426.png
text: Dwayne Dibbley, Dwayne Dibley, Duane Dibley
- text: >-
Dwayne Dibbley, is standing in a 1980s disco dancefloor wearing flared tweed
trousers, brown plastic open toed sandals and a white nylon shirt, moving
embarrasingly toward some fit women
output:
url: images/example_uft6bsu1o.png
- text: >-
Dwayne Dibbley, is standing in a 1970s disco dancefloor wearing flared tweed
trousers, brown plastic open toed sandals and a white nylon shirt, dancing
like a dork
output:
url: images/example_kwmo9i51t.png
- text: >-
Dwayne Dibbley, holding up an old thermos flask and a blue tooth brush,
smiling and happy as he is stood ready to go out on a date
output:
url: images/example_heirs6oci.png
- text: >-
Dwayne Dibley is opening a bottle of beer labelled "Red Dwarf, Wicked
Strength Lager" using just his teeth.
output:
url: images/example_0vm2ystln.png
- text: >-
Tall and skinny Dwayne Dibley , wide angle full body shot in extreme detail
8K , standing on a train station platform, holding a placard saying I'm a no
sense gimboid!!!, wearing a green Anorak, brown corduroy, flared trousers,
brown plastic sandals and white socks.
output:
url: images/example_imvnqid7q.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: Dwayne Dibbley, Dwayne Dibley, Duane Dibley
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# dwayne-dibley-flux-v2
A Flux LoRA trained on a local computer with [Fluxgym](https://github.com/cocktailpeanut/fluxgym)
<Gallery />
## Trigger words
You should use `Dwayne Dibbley, Dwayne Dibley, Duane Dibley` to trigger the image generation.
## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, Forge, etc.
Weights for this model are available in Safetensors format.
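For diffusers users, a minimal hedged sketch (it assumes a diffusers version with Flux support and that the LoRA safetensors file in this repo resolves automatically; the prompt and settings are illustrative):

```python
import torch
from diffusers import FluxPipeline

# Load the FLUX.1-dev base, then attach the LoRA weights from this repo.
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
pipe.load_lora_weights("Kiefels/dwayne-dibley-flux-v2")
pipe.to("cuda")

image = pipe(
    "Dwayne Dibbley, Dwayne Dibley, Duane Dibley holding an old thermos flask",
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("dwayne.png")
```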
|
Thomas-Yang/lora_model | Thomas-Yang | "2025-02-19T08:44:00Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2_vl",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2025-02-19T07:08:50Z" | ---
base_model: unsloth/qwen2-vl-7b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2_vl
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Thomas-Yang
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2-vl-7b-instruct-unsloth-bnb-4bit
This qwen2_vl model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
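A hedged loading sketch (it assumes this repo holds PEFT LoRA adapters for the 4-bit base named above, which requires bitsandbytes; none of this is confirmed by the card):

```python
from transformers import AutoProcessor, Qwen2VLForConditionalGeneration
from peft import PeftModel

# Load the 4-bit Qwen2-VL base, then apply the LoRA adapters from this repo.
base = Qwen2VLForConditionalGeneration.from_pretrained(
    "unsloth/qwen2-vl-7b-instruct-unsloth-bnb-4bit", device_map="auto"
)
model = PeftModel.from_pretrained(base, "Thomas-Yang/lora_model")
processor = AutoProcessor.from_pretrained("unsloth/qwen2-vl-7b-instruct-unsloth-bnb-4bit")
```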
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
shtilev/medical_embedded_v1 | shtilev | "2025-03-29T11:28:02Z" | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"multilingual",
"ar",
"bg",
"ca",
"cs",
"da",
"de",
"el",
"en",
"es",
"et",
"fa",
"fi",
"fr",
"gl",
"gu",
"he",
"hi",
"hr",
"hu",
"hy",
"id",
"it",
"ja",
"ka",
"ko",
"ku",
"lt",
"lv",
"mk",
"mn",
"mr",
"ms",
"my",
"nb",
"nl",
"pl",
"pt",
"ro",
"ru",
"sk",
"sl",
"sq",
"sr",
"sv",
"th",
"tr",
"uk",
"ur",
"vi",
"arxiv:1908.10084",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | "2025-03-29T11:21:54Z" | ---
language:
- multilingual
- ar
- bg
- ca
- cs
- da
- de
- el
- en
- es
- et
- fa
- fi
- fr
- gl
- gu
- he
- hi
- hr
- hu
- hy
- id
- it
- ja
- ka
- ko
- ku
- lt
- lv
- mk
- mn
- mr
- ms
- my
- nb
- nl
- pl
- pt
- ro
- ru
- sk
- sl
- sq
- sr
- sv
- th
- tr
- uk
- ur
- vi
license: apache-2.0
library_name: sentence-transformers
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
language_bcp47:
- fr-ca
- pt-br
- zh-cn
- zh-tw
pipeline_tag: sentence-similarity
---
# shtilev/medical_embedded_v1 (based on sentence-transformers/paraphrase-multilingual-mpnet-base-v2)
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('shtilev/medical_embedded_v1')
embeddings = model.encode(sentences)
print(embeddings)
```
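As a hedged follow-on, similarity scoring with `util.cos_sim` (the sentences are illustrative; the model is multilingual, so cross-lingual pairs work too):

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('shtilev/medical_embedded_v1')
# Compare a German clinical query against English candidate passages.
query = model.encode("Der Patient klagt über Brustschmerzen", convert_to_tensor=True)
docs = model.encode(
    ["chest pain on exertion", "routine dental cleaning"], convert_to_tensor=True
)
print(util.cos_sim(query, docs))
```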
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('shtilev/medical_embedded_v1')
model = AutoModel.from_pretrained('shtilev/medical_embedded_v1')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, average pooling
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
This model was trained by [sentence-transformers](https://www.sbert.net/).
If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084):
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "http://arxiv.org/abs/1908.10084",
}
``` |
microsoft/focalnet-base | microsoft | "2023-05-03T16:17:22Z" | 243 | 0 | transformers | [
"transformers",
"pytorch",
"focalnet",
"image-classification",
"vision",
"dataset:imagenet-1k",
"arxiv:2203.11926",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2023-04-17T14:57:14Z" | ---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-1k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
# FocalNet (base-sized large receptive field model)
FocalNet model trained on ImageNet-1k at resolution 384x384. It was introduced in the paper [Focal Modulation Networks
](https://arxiv.org/abs/2203.11926) by Yang et al. and first released in [this repository](https://github.com/microsoft/FocalNet).
Disclaimer: The team releasing FocalNet did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
Focal Modulation Networks are an alternative to Vision Transformers, where self-attention (SA) is completely replaced by a focal modulation mechanism for modeling token interactions in vision.
Focal modulation comprises three components: (i) hierarchical contextualization, implemented using a stack of depth-wise convolutional layers, to encode visual contexts from short to long ranges, (ii) gated aggregation to selectively gather contexts for each query token based on its
content, and (iii) element-wise modulation or affine transformation to inject the aggregated context into the query. Extensive experiments show FocalNets outperform the state-of-the-art SA counterparts (e.g., Vision Transformers, Swin and Focal Transformers) with similar computational costs on the tasks of image classification, object detection, and segmentation.

## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=focalnet) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:
```python
from transformers import FocalNetImageProcessor, FocalNetForImageClassification
import torch
from datasets import load_dataset
dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]
preprocessor = FocalNetImageProcessor.from_pretrained("microsoft/focalnet-base")
model = FocalNetForImageClassification.from_pretrained("microsoft/focalnet-base")
inputs = preprocessor(image, return_tensors="pt")
with torch.no_grad():
logits = model(**inputs).logits
# model predicts one of the 1000 ImageNet classes
predicted_label = logits.argmax(-1).item()
print(model.config.id2label[predicted_label])
```
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/focalnet).
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2203-11926,
author = {Jianwei Yang and
Chunyuan Li and
Jianfeng Gao},
title = {Focal Modulation Networks},
journal = {CoRR},
volume = {abs/2203.11926},
year = {2022},
url = {https://doi.org/10.48550/arXiv.2203.11926},
doi = {10.48550/arXiv.2203.11926},
eprinttype = {arXiv},
eprint = {2203.11926},
timestamp = {Tue, 29 Mar 2022 18:07:24 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2203-11926.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` |
Gholamreza/distilbert-fa-zwnj-base-finetuned-2epoch-pquad | Gholamreza | "2023-02-19T14:27:08Z" | 15 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:pquad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | "2023-02-18T19:17:05Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- pquad
model-index:
- name: distilbert-fa-zwnj-base-finetuned-2epoch-pquad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-fa-zwnj-base-finetuned-2epoch-pquad
This model is a fine-tuned version of [HooshvareLab/distilbert-fa-zwnj-base](https://huggingface.co/HooshvareLab/distilbert-fa-zwnj-base) on the pquad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1089
## Model description
More information needed
## Intended uses & limitations
More information needed
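Even so, as a hedged usage sketch for extractive QA in Persian (the question/context pair is illustrative):

```python
from transformers import pipeline

# Extractive question answering with the PQuAD-fine-tuned checkpoint.
qa = pipeline("question-answering", model="Gholamreza/distilbert-fa-zwnj-base-finetuned-2epoch-pquad")
print(qa(question="پایتخت ایران کجاست؟", context="تهران پایتخت ایران است."))
```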
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.1522 | 1.0 | 4003 | 1.1435 |
| 0.8579 | 2.0 | 8006 | 1.1089 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
mradermacher/calme-3.2-baguette-3b-GGUF | mradermacher | "2024-11-08T23:58:47Z" | 13 | 0 | transformers | [
"transformers",
"gguf",
"chat",
"qwen",
"qwen2.5",
"finetune",
"french",
"english",
"fr",
"en",
"dataset:MaziyarPanahi/french_instruct_sharegpt",
"dataset:MaziyarPanahi/calme-legalkit-v0.2",
"base_model:MaziyarPanahi/calme-3.2-baguette-3b",
"base_model:quantized:MaziyarPanahi/calme-3.2-baguette-3b",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-11-08T23:33:59Z" | ---
base_model: MaziyarPanahi/calme-3.2-baguette-3b
datasets:
- MaziyarPanahi/french_instruct_sharegpt
- MaziyarPanahi/calme-legalkit-v0.2
language:
- fr
- en
library_name: transformers
license: other
license_link: https://huggingface.co/Qwen/Qwen2.5-3B/blob/main/LICENSE
license_name: qwen-research
model_creator: MaziyarPanahi
model_name: calme-3.2-baguette-3b
quantized_by: mradermacher
tags:
- chat
- qwen
- qwen2.5
- finetune
- french
- english
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/MaziyarPanahi/calme-3.2-baguette-3b
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
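If you would rather stay in Python, here is a hedged llama-cpp-python sketch (it assumes the Q4_K_M file from the table below; prompt and settings are illustrative):

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch one quant file, then run a short completion locally.
path = hf_hub_download(
    "mradermacher/calme-3.2-baguette-3b-GGUF",
    "calme-3.2-baguette-3b.Q4_K_M.gguf",
)
llm = Llama(model_path=path, n_ctx=4096)
out = llm("Bonjour ! Présente-toi en une phrase.", max_tokens=64)
print(out["choices"][0]["text"])
```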
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/calme-3.2-baguette-3b-GGUF/resolve/main/calme-3.2-baguette-3b.Q2_K.gguf) | Q2_K | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/calme-3.2-baguette-3b-GGUF/resolve/main/calme-3.2-baguette-3b.Q3_K_S.gguf) | Q3_K_S | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/calme-3.2-baguette-3b-GGUF/resolve/main/calme-3.2-baguette-3b.Q3_K_M.gguf) | Q3_K_M | 1.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/calme-3.2-baguette-3b-GGUF/resolve/main/calme-3.2-baguette-3b.Q3_K_L.gguf) | Q3_K_L | 1.8 | |
| [GGUF](https://huggingface.co/mradermacher/calme-3.2-baguette-3b-GGUF/resolve/main/calme-3.2-baguette-3b.IQ4_XS.gguf) | IQ4_XS | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/calme-3.2-baguette-3b-GGUF/resolve/main/calme-3.2-baguette-3b.Q4_0_4_4.gguf) | Q4_0_4_4 | 1.9 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/calme-3.2-baguette-3b-GGUF/resolve/main/calme-3.2-baguette-3b.Q4_K_S.gguf) | Q4_K_S | 1.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/calme-3.2-baguette-3b-GGUF/resolve/main/calme-3.2-baguette-3b.Q4_K_M.gguf) | Q4_K_M | 2.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/calme-3.2-baguette-3b-GGUF/resolve/main/calme-3.2-baguette-3b.Q5_K_S.gguf) | Q5_K_S | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/calme-3.2-baguette-3b-GGUF/resolve/main/calme-3.2-baguette-3b.Q5_K_M.gguf) | Q5_K_M | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/calme-3.2-baguette-3b-GGUF/resolve/main/calme-3.2-baguette-3b.Q6_K.gguf) | Q6_K | 2.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/calme-3.2-baguette-3b-GGUF/resolve/main/calme-3.2-baguette-3b.Q8_0.gguf) | Q8_0 | 3.4 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/calme-3.2-baguette-3b-GGUF/resolve/main/calme-3.2-baguette-3b.f16.gguf) | f16 | 6.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
open-paws/cultural_sensitivity_prediction | open-paws | "2025-02-22T18:14:34Z" | 0 | 0 | null | [
"tensorboard",
"safetensors",
"distilbert",
"autotrain",
"text-regression",
"dataset:samtuckervegan/cultural",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"region:us"
] | null | "2025-02-22T18:01:25Z" |
---
tags:
- autotrain
- text-regression
base_model: distilbert/distilbert-base-uncased
widget:
- text: "I love AutoTrain"
datasets:
- samtuckervegan/cultural
---
# Model Trained Using AutoTrain
- Problem type: Text Regression
## Validation Metrics
loss: 0.01486926805227995
mse: 0.014865249395370483
mae: 0.08807627856731415
r2: 0.24985045194625854
rmse: 0.12192312904191101
explained_variance: 0.24985826015472412
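A hedged scoring sketch (AutoTrain text-regression checkpoints generally load as single-logit sequence classifiers; that loading path is an assumption here):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tok = AutoTokenizer.from_pretrained("open-paws/cultural_sensitivity_prediction")
model = AutoModelForSequenceClassification.from_pretrained("open-paws/cultural_sensitivity_prediction")

inputs = tok("Example text to score", return_tensors="pt")
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()  # single regression output
print(score)
```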
|
esmarquez17/hate-social-network-adversarial | esmarquez17 | "2023-12-07T20:53:17Z" | 4 | 0 | transformers | [
"transformers",
"tf",
"roberta",
"text-classification",
"generated_from_keras_callback",
"base_model:esmarquez17/fine-tunning-roberta-bne-hate-offensive",
"base_model:finetune:esmarquez17/fine-tunning-roberta-bne-hate-offensive",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-11-30T02:22:24Z" | ---
license: apache-2.0
base_model: esmarquez17/fine-tunning-roberta-bne-hate-offensive
tags:
- generated_from_keras_callback
model-index:
- name: hate-social-network-adversarial
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# hate-social-network-adversarial
This model is a version of [esmarquez17/fine-tunning-roberta-bne-hate-offensive](https://huggingface.co/esmarquez17/fine-tunning-roberta-bne-hate-offensive),
evaluated on the SemEval-2019 dataset with adversarially generated examples:
## Model description
- Base model: RoBERTa-BNE fine-tuned on a corpus of theater scripts
- Model trained on the proposed adversarial dataset
## Training and evaluation data
- Trained on the Spanish SemEval base corpus
- Validated on Spanish SemEval
- Tested on the HATERNET and HATECHECK corpora
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 1e-05, 'decay_steps': 9385, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}}
- training_precision: float32
### Training results
Training:
- Accuracy: 0.9702
- Precision: 0.9622
- F1-score: 0.9615
- Recall: 0.9609

Validation:
- Accuracy: 0.8520
- Precision: 0.8558
- F1-score: 0.8279
- Recall: 0.8018
### Framework versions
- Transformers 4.35.2
- TensorFlow 2.14.0
- Datasets 2.15.0
- Tokenizers 0.15.0
|
MaLA-LM/lucky52-bloom-7b1-no-30 | MaLA-LM | "2025-04-08T17:03:02Z" | 18 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"bloom",
"text-generation",
"generation",
"question answering",
"instruction tuning",
"multilingual",
"dataset:MBZUAI/Bactrian-X",
"arxiv:2404.04850",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-04-04T11:44:44Z" |
---
library_name: transformers
pipeline_tag: text-generation
language:
- multilingual
tags:
- generation
- question answering
- instruction tuning
datasets:
- MBZUAI/Bactrian-X
license: cc-by-nc-4.0
---
### Model Description
This HF repository hosts an instruction fine-tuned multilingual BLOOM model trained on the parallel instruction dataset Bactrian-X, which covers 52 languages.
We progressively add one language at a time during instruction fine-tuning, training 52 models in total, and then evaluate those models on three multilingual benchmarks.
Please refer to [our paper](https://arxiv.org/abs/2404.04850) for more details.
* Base model: [BLOOM 7B1](https://huggingface.co/bigscience/bloom-7b1)
* Instruction languages: English, Chinese, Afrikaans, Arabic, Azerbaijani, Bengali, Czech, German, Spanish, Estonian, Farsi, Finnish, French, Galician, Gujarati, Hebrew, Hindi, Croatian, Indonesian, Italian, Japanese, Georgian, Kazakh, Khmer, Korean, Lithuanian, Latvian, Macedonian, Malayalam, Mongolian
* Instruction language codes: en, zh, af, ar, az, bn, cs, de, es, et, fa, fi, fr, gl, gu, he, hi, hr, id, it, ja, ka, kk, km, ko, lt, lv, mk, ml, mn
* Training method: full-parameter fine-tuning.
### Usage
The model checkpoint should be loaded using `transformers` library.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("MaLA-LM/lucky52-bloom-7b1-no-30")
model = AutoModelForCausalLM.from_pretrained("MaLA-LM/lucky52-bloom-7b1-no-30")
```
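A short hedged generation sketch (the prompt is illustrative; instruction-style prompts should follow the Bactrian-X template used in training):

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("MaLA-LM/lucky52-bloom-7b1-no-30")
model = AutoModelForCausalLM.from_pretrained("MaLA-LM/lucky52-bloom-7b1-no-30")

inputs = tokenizer("Translate to French: The weather is nice today.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```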
### Citation
```
@inproceedings{ji2025lucky52,
title={How Many Languages Make Good Multilingual Instruction Tuning? A Case Study on BLOOM},
author={Shaoxiong Ji and Pinzhen Chen},
year={2025},
booktitle={Proceedings of COLING},
url={https://arxiv.org/abs/2404.04850},
}
```
|
ISEGURA/gpt2-400-bioautex | ISEGURA | "2025-03-07T13:22:22Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-classification | "2025-03-07T13:21:58Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
DBangshu/V3_Base_GPT2_e5_4_4 | DBangshu | "2024-10-16T11:57:48Z" | 130 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-10-16T11:57:29Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
matthewchung74/phi15-study-desc-summary | matthewchung74 | "2024-03-23T22:25:13Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-03-23T22:25:06Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
junelee/wizard-vicuna-13b | junelee | "2023-05-04T01:23:39Z" | 2,682 | 77 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-05-03T20:46:24Z" | https://github.com/melodysdreamj/WizardVicunaLM |
MinaMila/llama_instbase_Adult_14ep_55 | MinaMila | "2025-04-02T02:19:20Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/Meta-Llama-3.1-8B-Instruct",
"base_model:finetune:unsloth/Meta-Llama-3.1-8B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-04-02T02:16:04Z" | ---
base_model: unsloth/Meta-Llama-3.1-8B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** MinaMila
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Meta-Llama-3.1-8B-Instruct
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
wlhb/Llama-3.1-8B-bnb-4bit-Chtagpt | wlhb | "2024-08-11T08:58:35Z" | 7 | 0 | null | [
"safetensors",
"gguf",
"llama",
"endpoints_compatible",
"region:us"
] | null | "2024-08-10T03:49:38Z" | Code: [colab code](https://colab.research.google.com/drive/1SksjvgRbfpxNQUtYdr2mKxn-OXKHuSov?usp=sharing)
Dataset: exported ChatGPT data, organized into a trainable, well-formed dataset with [this script](https://huggingface.co/wlhb/Llama-3.1-8B-bnb-4bit-Chtagpt/blob/main/origin2trainDatasets.py) (see the illustrative sketch below).
After exporting the ChatGPT chat history, origin2trainDatasets.py cleans it into a dataset suitable for fine-tuning, and fine-tuning is then done with [unsloth](https://unsloth.ai/).
Base model: unsloth/Meta-Llama-3.1-8B-bnb-4bit
Training method: LoRA
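Purely illustrative — a minimal sketch of the kind of transformation the cleaning step performs (the input schema, output format, and helper name are hypothetical, not the actual contents of origin2trainDatasets.py):

```python
import json

# Hypothetical helper: turn flattened (user, assistant) pairs from a ChatGPT
# export into instruction-tuning records; the real script may differ.
def to_records(pairs):
    return [
        {"instruction": user, "input": "", "output": assistant}
        for user, assistant in pairs
    ]

pairs = [("What is LoRA?", "LoRA is a parameter-efficient fine-tuning method.")]
with open("train.jsonl", "w", encoding="utf-8") as f:
    for record in to_records(pairs):
        f.write(json.dumps(record, ensure_ascii=False) + "\n")
```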
Effectiveness evaluation: to be determined |
chainup244/google-gemma-2b-1718956234 | chainup244 | "2024-06-21T07:53:08Z" | 7 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-06-21T07:50:36Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Litzy619/G0514HMA5H | Litzy619 | "2024-05-14T20:59:30Z" | 0 | 0 | null | [
"safetensors",
"generated_from_trainer",
"base_model:google/gemma-2b",
"base_model:finetune:google/gemma-2b",
"license:gemma",
"region:us"
] | null | "2024-05-14T19:54:24Z" | ---
license: gemma
base_model: google/gemma-2b
tags:
- generated_from_trainer
model-index:
- name: G0514HMA5H
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# G0514HMA5H
This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/google/gemma-2b) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: -17.7865
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 80
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.9281 | 0.09 | 10 | 0.0810 |
| -0.8728 | 0.18 | 20 | -2.3616 |
| -3.7424 | 0.27 | 30 | -5.5992 |
| -6.9773 | 0.36 | 40 | -8.8629 |
| -10.1201 | 0.45 | 50 | -11.8272 |
| -12.8541 | 0.54 | 60 | -14.2293 |
| -15.0144 | 0.63 | 70 | -15.8856 |
| -16.3327 | 0.73 | 80 | -16.8287 |
| -17.0246 | 0.82 | 90 | -17.2467 |
| -17.335 | 0.91 | 100 | -17.4367 |
| -17.4797 | 1.0 | 110 | -17.5384 |
| -17.5709 | 1.09 | 120 | -17.6024 |
| -17.6217 | 1.18 | 130 | -17.6413 |
| -17.6522 | 1.27 | 140 | -17.6697 |
| -17.6777 | 1.36 | 150 | -17.6893 |
| -17.6963 | 1.45 | 160 | -17.7051 |
| -17.7096 | 1.54 | 170 | -17.7187 |
| -17.7252 | 1.63 | 180 | -17.7321 |
| -17.7353 | 1.72 | 190 | -17.7430 |
| -17.7471 | 1.81 | 200 | -17.7499 |
| -17.751 | 1.9 | 210 | -17.7561 |
| -17.7563 | 1.99 | 220 | -17.7617 |
| -17.7638 | 2.08 | 230 | -17.7659 |
| -17.7726 | 2.18 | 240 | -17.7701 |
| -17.7714 | 2.27 | 250 | -17.7736 |
| -17.7766 | 2.36 | 260 | -17.7772 |
| -17.7823 | 2.45 | 270 | -17.7800 |
| -17.7809 | 2.54 | 280 | -17.7827 |
| -17.7872 | 2.63 | 290 | -17.7841 |
| -17.7876 | 2.72 | 300 | -17.7856 |
| -17.7846 | 2.81 | 310 | -17.7863 |
| -17.7907 | 2.9 | 320 | -17.7865 |
| -17.7901 | 2.99 | 330 | -17.7865 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.14.0
|
domci/ColBERTv2-mmarco-de-0.1 | domci | "2024-02-27T16:55:48Z" | 0 | 2 | null | [
"safetensors",
"de",
"dataset:unicamp-dl/mmarco",
"license:mit",
"region:us"
] | null | "2024-02-27T15:26:22Z" | ---
license: mit
datasets:
- unicamp-dl/mmarco
language:
- de
---
# ColBERTv2-mmarco-de-0.1
This is a German ColBERT implementation based on [colbert-ir/colbertv2.0](https://huggingface.co/colbert-ir/colbertv2.0)
- Base Model: [dbmdz/bert-base-german-cased](https://huggingface.co/dbmdz/bert-base-german-cased)
- Training Data: [unicamp-dl/mmarco](https://huggingface.co/unicamp-dl/mMiniLM-L6-v2-mmarco-v2) --> 10Mio random sample
- Framework used for training [RAGatouille](https://github.com/bclavie/RAGatouille) Thanks a ton [@bclavie](https://huggingface.co/bclavie) !
As I'm limited on GPU, training did not go all the way through: "only" 10 checkpoints were trained.
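# Usage
For retrieval, a minimal RAGatouille sketch (the index name and documents are illustrative):
```python
from ragatouille import RAGPretrainedModel

# Index a tiny German collection and run one query against it.
RAG = RAGPretrainedModel.from_pretrained("domci/ColBERTv2-mmarco-de-0.1")
RAG.index(
    collection=[
        "Berlin ist die Hauptstadt von Deutschland.",
        "Die Zugspitze ist der höchste Berg Deutschlands.",
    ],
    index_name="demo-de",
)
print(RAG.search("Wie heißt die Hauptstadt von Deutschland?", k=1))
```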
# Code
My code is probably a mess, but YOLO!
## data prep
```python
from datasets import load_dataset
from ragatouille import RAGTrainer
from concurrent.futures import ThreadPoolExecutor
import concurrent.futures
from tqdm.notebook import tqdm
SAMPLE_SIZE = -1
def int_to_string(number):
if number < 0:
return "full"
elif number < 1000:
return str(number)
elif number < 1000000:
return f"{number // 1000}K"
elif number >= 1000000:
return f"{number // 1000000}M"
def process_chunk(chunk):
return [list(item) for item in zip(chunk["query"], chunk["positive"], chunk["negative"])]
def chunked_iterable(iterable, chunk_size):
"""Yield successive chunks from iterable."""
for i in range(0, len(iterable), chunk_size):
yield iterable[i:i + chunk_size]
def process_dataset_concurrently(dataset, chunksize=1000):
with ThreadPoolExecutor() as executor:
# Wrap the dataset with tqdm for real-time updates
wrapped_dataset = tqdm(chunked_iterable(dataset, chunksize), total=(len(dataset) + chunksize - 1) // chunksize)
# Submit each chunk to the executor
futures = [executor.submit(process_chunk, chunk) for chunk in wrapped_dataset]
results = []
for future in concurrent.futures.as_completed(futures):
results.extend(future.result())
return results
dataset = load_dataset('unicamp-dl/mmarco', 'german', trust_remote_code=True)
# Shuffle the dataset and seed for reproducibility if needed
shuffled_dataset = dataset['train'].shuffle(seed=42)
if SAMPLE_SIZE > 0:
sampled_dataset = shuffled_dataset.select(range(SAMPLE_SIZE))
else:
sampled_dataset = shuffled_dataset
triplets = process_dataset_concurrently(sampled_dataset, chunksize=10000)
trainer = RAGTrainer(model_name=f"ColBERT-mmacro-de-{int_to_string(SAMPLE_SIZE)}", pretrained_model_name="dbmdz/bert-base-german-cased", language_code="de",)
trainer.prepare_training_data(raw_data=triplets, mine_hard_negatives=False)
```
## Training
```python
from pathlib import Path
from ragatouille import RAGTrainer
def int_to_string(number):
if number < 1000:
return str(number)
elif number < 1000000:
return f"{number // 1000}K"
elif number >= 1000000:
return f"{number // 1000000}M"
SAMPLE_SIZE = 1000000
trainer = RAGTrainer(model_name=f"ColBERT-mmacro-de-{int_to_string(SAMPLE_SIZE)}", pretrained_model_name="dbmdz/bert-base-german-cased", language_code="de",)
trainer.data_dir = Path("/kaggle/input/mmarco-de-10m")
trainer.train(batch_size=32,
nbits=4, # How many bits will the trained model use when compressing indexes
maxsteps=500000, # Maximum steps hard stop
use_ib_negatives=True, # Use in-batch negative to calculate loss
dim=128, # How many dimensions per embedding. 128 is the default and works well.
learning_rate=5e-6, # Learning rate, small values ([3e-6,3e-5] work best if the base model is BERT-like, 5e-6 is often the sweet spot)
doc_maxlen=256, # Maximum document length. Because of how ColBERT works, smaller chunks (128-256) work very well.
use_relu=False, # Disable ReLU -- doesn't improve performance
warmup_steps="auto", # Defaults to 10%
)
``` |
caozhanqiang/llama2-glora-finetunined-french | caozhanqiang | "2023-07-28T09:08:14Z" | 3 | 0 | peft | [
"peft",
"region:us"
] | null | "2023-07-28T09:07:56Z" | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0.dev0
|
brixeus/f4e364db-8750-4a25-afa0-d99872f7af11 | brixeus | "2025-01-20T11:50:19Z" | 6 | 0 | peft | [
"peft",
"safetensors",
"phi3",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:numind/NuExtract-1.5",
"base_model:adapter:numind/NuExtract-1.5",
"license:mit",
"region:us"
] | null | "2025-01-20T11:34:39Z" | ---
library_name: peft
license: mit
base_model: numind/NuExtract-v1.5
tags:
- axolotl
- generated_from_trainer
model-index:
- name: f4e364db-8750-4a25-afa0-d99872f7af11
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: numind/NuExtract-v1.5
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- dfedf188e6c9e057_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/dfedf188e6c9e057_train_data.json
type:
field_input: base_0
field_instruction: id
field_output: base_100_x
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: brixeus/f4e364db-8750-4a25-afa0-d99872f7af11
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: false
load_in_8bit: false
local_rank: 0
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_steps: 100
micro_batch_size: 8
mlflow_experiment_name: /tmp/dfedf188e6c9e057_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: techspear-hub
wandb_mode: online
wandb_name: 05278d2d-f76b-494b-9e5f-5ab11e9ea915
wandb_project: Gradients-On-Three
wandb_run: your_name
wandb_runid: 05278d2d-f76b-494b-9e5f-5ab11e9ea915
warmup_steps: 10
weight_decay: 0.01
xformers_attention: null
```
</details><br>
# f4e364db-8750-4a25-afa0-d99872f7af11
This model is a fine-tuned version of [numind/NuExtract-v1.5](https://huggingface.co/numind/NuExtract-v1.5) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7830
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0021 | 1 | 2.2589 |
| 8.9092 | 0.0186 | 9 | 2.2142 |
| 8.1938 | 0.0372 | 18 | 1.9731 |
| 7.6565 | 0.0559 | 27 | 1.8767 |
| 7.6501 | 0.0745 | 36 | 1.8378 |
| 7.2612 | 0.0931 | 45 | 1.8157 |
| 6.9619 | 0.1117 | 54 | 1.8015 |
| 7.2999 | 0.1304 | 63 | 1.7942 |
| 7.0581 | 0.1490 | 72 | 1.7886 |
| 7.203 | 0.1676 | 81 | 1.7843 |
| 6.7914 | 0.1862 | 90 | 1.7832 |
| 7.1188 | 0.2049 | 99 | 1.7830 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
brathief/wwoo_1000_lora | brathief | "2023-05-19T17:50:37Z" | 2 | 0 | diffusers | [
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | "2023-05-19T16:59:58Z" |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - brathief/wwoo_1000_lora
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the None dataset. You can find some example images below.




|
daanjiri/lab1_random | daanjiri | "2024-02-17T18:57:17Z" | 5 | 0 | transformers | [
"transformers",
"safetensors",
"marian",
"text2text-generation",
"generated_from_trainer",
"dataset:kde4",
"base_model:Helsinki-NLP/opus-mt-en-fr",
"base_model:finetune:Helsinki-NLP/opus-mt-en-fr",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2024-02-17T17:40:44Z" | ---
license: apache-2.0
base_model: Helsinki-NLP/opus-mt-en-fr
tags:
- generated_from_trainer
datasets:
- kde4
metrics:
- bleu
model-index:
- name: lab1_random
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: kde4
type: kde4
config: en-fr
split: train
args: en-fr
metrics:
- name: Bleu
type: bleu
value: 13.64635977688655
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lab1_random
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset.
It achieves the following results on the evaluation set:
- Loss: 3.5090
- Bleu: 13.6464
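A minimal inference sketch with the 🤗 `pipeline` API, assuming the fine-tuned weights are hosted under this repo id:
```python
from transformers import pipeline

# MarianMT models carry their translation config, so the generic "translation" task works
translator = pipeline("translation", model="daanjiri/lab1_random")
print(translator("The plugin settings can be changed in the configuration dialog.")[0]["translation_text"])
```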
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2
|
Lykon/AAM_AnyLora_AnimeMix-LCM | Lykon | "2023-12-07T11:03:33Z" | 6 | 2 | diffusers | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"art",
"artistic",
"anime",
"dreamshaper",
"lcm",
"en",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | "2023-12-07T10:58:42Z" | ---
language:
- en
license: creativeml-openrail-m
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- art
- artistic
- diffusers
- anime
- dreamshaper
- lcm
duplicated_from: lykon/AAM_AnyLora_AnimeMix-LCM
pipeline_tag: text-to-image
---
# AAM_AnyLora_AnimeMix LCM
`lykon/AAM_AnyLora_AnimeMix-LCM` is a Stable Diffusion model that has been fine-tuned on [runwayml/stable-diffusion-v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5).
Please consider supporting me:
- on [Patreon](https://www.patreon.com/Lykon275)
- or [buy me a coffee](https://snipfeed.co/lykon)
## Diffusers
For more general information on how to run text-to-image models with 🧨 Diffusers, see [the docs](https://huggingface.co/docs/diffusers/using-diffusers/conditional_image_generation).
1. Installation
```
pip install diffusers transformers accelerate
```
2. Run
```py
from diffusers import AutoPipelineForText2Image, LCMScheduler
import torch
pipe = AutoPipelineForText2Image.from_pretrained('lykon/AAM_AnyLora_AnimeMix-LCM', torch_dtype=torch.float16, variant="fp16")
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
pipe = pipe.to("cuda")
prompt = "portrait photo of muscular bearded guy in a worn mech suit, light bokeh, intricate, steel metal, elegant, sharp focus, soft lighting, vibrant colors"
generator = torch.manual_seed(0)
image = pipe(prompt, num_inference_steps=15, guidance_scale=2, generator=generator).images[0]
image.save("./image.png")
```
|
henryhe0123/pc-agent-test-1-2 | henryhe0123 | "2025-03-19T05:35:11Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2_5_vl",
"image-text-to-text",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:henryhe0123/pc-agent-test-1-2",
"base_model:finetune:henryhe0123/pc-agent-test-1-2",
"license:other",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | image-text-to-text | "2025-03-18T17:22:36Z" | ---
library_name: transformers
license: other
base_model: henryhe0123/pc-agent-test-1-2
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: Qwen2.5-VL-72B-sft-1-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Qwen2.5-VL-72B-sft-1-2
This model is a fine-tuned version of a local copy of Qwen2.5-VL-72B-Instruct (`/inspire/hdd/ws-c6f77a66-a5f5-45dc-a4ce-1e856fe7a7b4/project/public/model/Qwen2.5-VL-72B-Instruct`) on the pcagent dataset.
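For reference, a minimal inference sketch with 🤗 Transformers; this is a sketch only, assuming the fine-tuned weights are hosted under `henryhe0123/pc-agent-test-1-2` and following the standard Qwen2.5-VL processor flow (`screenshot.png` is a hypothetical input):
```python
from PIL import Image
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration

model_id = "henryhe0123/pc-agent-test-1-2"
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(model_id, torch_dtype="auto", device_map="auto")
processor = AutoProcessor.from_pretrained(model_id)

image = Image.open("screenshot.png")  # hypothetical input screenshot
messages = [{"role": "user", "content": [
    {"type": "image"},
    {"type": "text", "text": "Describe the next action to take on this screen."},
]}]
prompt = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = processor(text=[prompt], images=[image], return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=256)
print(processor.batch_decode(output_ids, skip_special_tokens=True)[0])
```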
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-06
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 32
- total_train_batch_size: 32
- total_eval_batch_size: 256
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.49.0
- Pytorch 2.6.0+cu124
- Datasets 3.3.2
- Tokenizers 0.21.0
|
SerhiiLebediuk/Llama-3.1-8B-bnb-4bit-devision-support | SerhiiLebediuk | "2025-03-18T15:47:58Z" | 10 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-03-11T13:48:31Z" | ---
base_model: unsloth/meta-llama-3.1-8b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** SerhiiLebediuk
- **License:** apache-2.0
- **Finetuned from model :** unsloth/meta-llama-3.1-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
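A minimal loading sketch with Unsloth for fast 4-bit inference; this is a sketch under the assumption that the trained weights were pushed to this repo and that a CUDA GPU is available:
```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="SerhiiLebediuk/Llama-3.1-8B-bnb-4bit-devision-support",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's faster inference path

inputs = tokenizer(["How do I escalate a support ticket?"], return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
```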
|
nvidia/stt_en_fastconformer_hybrid_medium_streaming_80ms_pc | nvidia | "2025-02-18T13:21:47Z" | 0 | 2 | NeMo | [
"NeMo",
"nemo",
"speech-recognition",
"ASR",
"English",
"Conformer",
"Transducer",
"CTC",
"speech",
"audio",
"automatic-speech-recognition",
"en",
"dataset:mozilla-foundation/common_voice_11_0",
"dataset:openslr/librispeech_asr",
"dataset:Europarl-ASR-EN",
"dataset:fisher_corpus",
"dataset:VoxPopuli-EN",
"dataset:National-Singapore-Corpus-Part-1",
"dataset:kensho/spgispeech-1000hours",
"dataset:Multilingual-LibriSpeech-2000hours",
"arxiv:2305.05084",
"license:cc-by-4.0",
"model-index",
"region:us"
] | automatic-speech-recognition | "2024-12-11T15:06:37Z" | ---
license: cc-by-4.0
datasets:
- mozilla-foundation/common_voice_11_0
- openslr/librispeech_asr
- Europarl-ASR-EN
- fisher_corpus
- VoxPopuli-EN
- National-Singapore-Corpus-Part-1
- kensho/spgispeech-1000hours
- Multilingual-LibriSpeech-2000hours
language:
- en
pipeline_tag: automatic-speech-recognition
library_name: NeMo
metrics:
- WER
- CER
tags:
- speech-recognition
- ASR
- English
- Conformer
- Transducer
- CTC
- NeMo
- speech
- audio
model-index:
- name: stt_en_fastconformer_hybrid_medium_streaming_80ms_pc
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: VoxPopuli
split: test
type: VoxPopuli
args:
language: en
metrics:
- name: Test WER
type: wer
value: 8.29
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: librispeech
type: openslr/librispeech_asr
split: test
args:
language: en
metrics:
- name: Test WER
type: wer
value: 6.96
- task:
type: Automatic Speech Recognition
name: automatic-speech-recognition
dataset:
name: MLS
type: Multilingual-LibriSpeech-2000hours
split: test
args:
language: en
metrics:
- name: Test WER
type: wer
value: 11.76
---
# NVIDIA FastConformer-Hybrid medium streaming (en)
<style>
img {
display: inline-table;
 vertical-align: middle;
margin: 0;
padding: 0;
}
</style>
| [](#model-architecture)
| [](#model-architecture)
| [](#datasets)|
This collection contains a medium-sized version of the cache-aware FastConformer-Hybrid model (around 32M parameters), trained on English speech. The model is trained for streaming ASR with a look-ahead of 80ms, which can be used for very low-latency streaming applications, and has two losses: Transducer (default) and CTC.
See the section [Model Architecture](#Model-Architecture) and [NeMo documentation](https://docs.nvidia.com/nemo-framework/user-guide/latest/nemotoolkit/asr/models.html#fast-conformer) for complete architecture details.
This model is ready for commercial and non-commercial use.
## License
License to use this model is covered by the [CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/). By downloading the public and release version of the model, you accept the terms and conditions of the [CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/) license.
## References
[1] [Fast Conformer with Linearly Scalable Attention for Efficient Speech Recognition](https://arxiv.org/abs/2305.05084)
[2] [Google Sentencepiece Tokenizer](https://github.com/google/sentencepiece)
[3] [NVIDIA NeMo Toolkit](https://github.com/NVIDIA/NeMo)
[4] [HuggingFace ASR Leaderboard](https://huggingface.co/spaces/hf-audio/open_asr_leaderboard)
<!-- ## NVIDIA NeMo: Training
To train, fine-tune or play with the model you will need to install [NVIDIA NeMo](https://github.com/NVIDIA/NeMo).
We recommend you install it after you've installed latest Pytorch version.
```
pip install nemo_toolkit['all']
```
-->
## Model Architecture
The model is a cache-aware version of the Hybrid FastConformer architecture, trained for streaming ASR. You may find more info on cache-aware models here: [Cache-aware Streaming Conformer](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/models.html#cache-aware-streaming-conformer) [5].
FastConformer [1] is an optimized version of the Conformer model with 8x depthwise-separable convolutional downsampling.
The model is trained in a multitask setup with hybrid Transducer decoder (RNNT) and Connectionist Temporal Classification (CTC) loss.
You may find more information on the details of FastConformer here: [Fast-Conformer Model](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/models.html#fast-conformer).
The model utilizes a [Google Sentencepiece Tokenizer](https://github.com/google/sentencepiece) [2] with a vocabulary size of 1024.
### Input
- **Input Type:** Audio
- **Input Format(s):** .wav files
- **Other Properties Related to Input:** 16000 Hz Mono-channel Audio, Pre-Processing Not Needed
### Output
This model provides transcribed speech as a string for a given audio sample.
- **Output Type**: Text
- **Output Format:** String
- **Output Parameters:** One Dimensional (1D)
- **Other Properties Related to Output:** May Need Inverse Text Normalization; Does Not Handle Special Characters; Outputs text in English with punctuation and capitalization.
## Limitations
The model is streaming and outputs transcribed speech as a string with punctuation and capitalization.
Since this model was trained on publicly available speech datasets, the performance of this model might degrade for speech which includes technical terms, or vernacular that the model has not been trained on.
## How to Use this Model
The model is available for use in the NeMo toolkit [3], and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.
### Automatically instantiate the model
```python
import nemo.collections.asr as nemo_asr
asr_model = nemo_asr.models.EncDecHybridRNNTCTCBPEModel.from_pretrained(model_name="nvidia/stt_en_fastconformer_hybrid_medium_streaming_80ms_pc")
```
### Transcribing using Python
First, let's get a sample
```
wget https://dldata-public.s3.us-east-2.amazonaws.com/2086-149220-0033.wav
```
Then simply do:
```
output = asr_model.transcribe(['2086-149220-0033.wav'])
print(output[0].text)
```
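Because this is a hybrid model, the decoding branch can also be switched at runtime. A short sketch, assuming the standard NeMo hybrid-model API:
```python
# Switch from the default Transducer (RNNT) branch to the CTC branch
asr_model.change_decoding_strategy(decoder_type="ctc")
output = asr_model.transcribe(['2086-149220-0033.wav'])
print(output[0].text)

# Switch back to the Transducer branch
asr_model.change_decoding_strategy(decoder_type="rnnt")
```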
### Transcribing many audio files
Using Transducer mode inference:
```shell
python [NEMO_GIT_FOLDER]/examples/asr/transcribe_speech.py \
pretrained_name="nvidia/stt_en_fastconformer_hybrid_medium_streaming_80ms_pc" \
audio_dir="<DIRECTORY CONTAINING AUDIO FILES>"
```
Using CTC mode inference:
```shell
python [NEMO_GIT_FOLDER]/examples/asr/transcribe_speech.py \
pretrained_name="nvidia/stt_en_fastconformer_hybrid_medium_streaming_80ms_pc" \
audio_dir="<DIRECTORY CONTAINING AUDIO FILES>" \
decoder_type="ctc"
```
## Training
The NVIDIA NeMo Toolkit [3] was used to train the model for two hundred epochs.
Model is trained with this [example script](https://github.com/NVIDIA/NeMo/blob/main/examples/asr/asr_hybrid_transducer_ctc/speech_to_text_hybrid_rnnt_ctc_bpe.py).
The tokenizer for this model was built using the text transcripts of the train set with this [script](https://github.com/NVIDIA/NeMo/blob/main/scripts/tokenizers/process_asr_text_tokenizer.py).
## Training, Testing, and Evaluation Datasets
### Training Datasets
The model is trained on a composite dataset comprising around 8,500 hours of English speech:
- [Librispeech](https://www.openslr.org/12)
- Data Collection Method: Automated
- Labeling Method: by Human
- [Mozilla Common Voice 11.0 English](https://commonvoice.mozilla.org/en/datasets)
- Data Collection Method: by Human
- Labeling Method: by Human
- [Europarl](https://www.statmt.org/europarl/)
- Data Collection Method: by Human
- Labeling Method: by Human
- [Fisher](https://catalog.ldc.upenn.edu/LDC2004S13)
- Data Collection Method: Automated
- Labeling Method: by Human
- [MLS](https://www.openslr.org/94/)
- Data Collection Method: Automated
- Labeling Method: by Human
- [Voxpopuli](https://github.com/facebookresearch/voxpopuli)
- Data Collection Method: by Human
- Labeling Method: by Human
- [SPGI-1000hours](https://datasets.kensho.com/datasets/spgispeech)
- Data Collection Method: by Human
- Labeling Method: by Human
### Evaluation Datasets
- [Librispeech](https://www.openslr.org/12)
- Data Collection Method: by Human
- Labeling Method: by Human
- [Mozilla Common Voice 11.0 English](https://commonvoice.mozilla.org/en/datasets)
- Data Collection Method: by Human
- Labeling Method: by Human
- [Europarl](https://www.statmt.org/europarl/)
- Data Collection Method: by Human
- Labeling Method: by Human
- [Fisher](https://catalog.ldc.upenn.edu/LDC2004S13)
- Data Collection Method: by Human
- Labeling Method: by Human
- [MLS](https://www.openslr.org/94/)
- Data Collection Method: by Human
- Labeling Method: by Human
- [Voxpopuli](https://github.com/facebookresearch/voxpopuli)
- Data Collection Method: by Human
- Labeling Method: by Human
- [SPGI-1000hours](https://datasets.kensho.com/datasets/spgispeech)
- Data Collection Method: by Human
- Labeling Method: by Human
### Test Datasets
- [Europarl](https://www.statmt.org/europarl/)
- Data Collection Method: by Human
- Labeling Method: by Human
- [MLS](https://www.openslr.org/94/)
- Data Collection Method: by Human
- Labeling Method: by Human
- [Voxpopuli](https://github.com/facebookresearch/voxpopuli)
- Data Collection Method: by Human
- Labeling Method: by Human
- [Librispeech](https://www.openslr.org/12)
- Data Collection Method: by Human
- Labeling Method: by Human
## Software Integration
### Supported Hardware Microarchitecture Compatibility:
- NVIDIA Ampere
- NVIDIA Blackwell
- NVIDIA Jetson
- NVIDIA Hopper
- NVIDIA Lovelace
- NVIDIA Pascal
- NVIDIA Turing
- NVIDIA Volta
### Runtime Engine
- Nemo 2.0.0
### Preferred Operating System
- Linux
## Ethical Considerations
NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications.
When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.
<!-- For more detailed information on ethical considerations for this model, please see the [Model Card++](https://docs.google.com/document/d/1cFbfEnlbBG_I5hTRiYuZAI1PgdPYRfsmXpE5-zJDdXU/edit?tab=t.0#heading=h.7jylogfmrbiw) Explainability, Bias, Safety & Security, and Privacy Subcards. -->
Please report security vulnerabilities or NVIDIA AI Concerns [here](https://www.nvidia.com/en-us/support/submit-security-vulnerability/).
## Explainability
- High-Level Application and Domain: Automatic Speech Recognition
- Describe how this model works: The model transcribes audio input into text for the English language
- Verified to have met prescribed quality standards: Yes
- Performance Metrics: Word Error Rate (WER), Character Error Rate (CER), Real-Time Factor
- Potential Known Risks: Transcripts may not be 100% accurate. Accuracy varies based on the characteristics of input audio (Domain, Use Case, Accent, Noise, Speech Type, Context of speech, etcetera).
### Performance
**Test Hardware:** A100 GPU
The performance of Automatic Speech Recognition models is measured using Word Error Rate (WER) and Char Error Rate (CER).
Since this model is trained on multiple domains, it will generally perform well at transcribing audio across a variety of settings.
The following table summarizes the performance of the available models in this collection with the Transducer decoder.
Performance of the ASR models is reported in terms of Word Error Rate (WER%) and Inverse Real-Time Factor (RTFx) with greedy decoding on test sets.
- Transducer
|**Version**|**Tokenizer**|**Vocabulary Size**|**Librispeech Test WER**|**Librispeech Test RTFx**|**Europarl test WER**|**Europarl test RTFx**|**Voxpopuli test WER**|**Voxpopuli test RTFx**|**MLS test WER**|**MLS test RTFx**|
|----------|-------------|-------------------|----------------|----------------|----------------|----------------|----------------|----------------|----------------|----------------|
| 2.0.0 | SentencePiece Unigram | 1024 | 6.96 | ~1600 | 11.85 | ~1100 | 8.29 | 1780 | 11.76 | ~2050 |
This model was trained with punctuation and capitalization and evaluated without them.
## Bias
- Was the model trained with a specific accent? No
- Have any special measures been taken to mitigate unwanted bias? No
- Participation considerations from adversely impacted groups [protected classes](https://www.senate.ca.gov/content/protected-classes) in model design and testing: No
## Privacy
- Generatable or reverse engineerable personal data? No
- If applicable, was a notice provided to the individuals prior to the collection of any personal data used? Not applicable
- If personal data was collected for the development of the model, was it collected directly by NVIDIA? Not applicable
- Is there dataset provenance? Yes
- If data is labeled, was it reviewed to comply with privacy laws? Yes
- Is data compliant with data subject requests for data correction or removal, if such a request was made? No, not possible with externally-sourced data
- Is a mechanism in place to honor data subject rights of access or deletion of personal data? No
- How often is the training dataset reviewed?: Before Release
## Safety & Security
### Use Case Restrictions:
- Streaming ASR model
- Model outputs text in English
- Output text requires Inverse Text Normalization
- Model is noise-sensitive
Model is not applicable for life-critical applications.
### Access Restrictions:
The Principle of Least Privilege (PoLP) is applied, limiting access for dataset generation and model development. Restrictions enforce dataset access during training, and dataset license constraints are adhered to.
## NVIDIA Riva: Deployment
[NVIDIA Riva](https://developer.nvidia.com/riva) is an accelerated speech AI SDK deployable on-prem, in all clouds, multi-cloud, hybrid, on edge, and embedded.
Additionally, Riva provides:
* World-class out-of-the-box accuracy for the most common languages with model checkpoints trained on proprietary data with hundreds of thousands of GPU-compute hours
* Best in class accuracy with run-time word boosting (e.g., brand and product names) and customization of acoustic model, language model, and inverse text normalization
* Streaming speech recognition, Kubernetes compatible scaling, and enterprise-grade support
Although this model isn’t supported yet by Riva, the [list of supported models is here](https://huggingface.co/models?other=Riva).
Check out [Riva live demo](https://developer.nvidia.com/riva#demos). |
peterldasd/Goodjob_pj1 | peterldasd | "2025-01-30T17:13:15Z" | 8 | 0 | null | [
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | any-to-any | "2025-01-30T17:06:35Z" | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
marialvsantiago/5a691c61-27d4-4b83-94a9-786e9329fcca | marialvsantiago | "2025-01-25T19:27:24Z" | 6 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/mistral-7b-v0.2",
"base_model:adapter:unsloth/mistral-7b-v0.2",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-01-25T18:25:08Z" | ---
library_name: peft
license: apache-2.0
base_model: unsloth/mistral-7b-v0.2
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 5a691c61-27d4-4b83-94a9-786e9329fcca
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/mistral-7b-v0.2
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- dd8680ad4c472b16_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/dd8680ad4c472b16_train_data.json
type:
field_input: context
field_instruction: question
field_output: answer
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: 1
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: marialvsantiago/5a691c61-27d4-4b83-94a9-786e9329fcca
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 3
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 79GiB
max_steps: 30
micro_batch_size: 2
mlflow_experiment_name: /tmp/dd8680ad4c472b16_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 5ade1b66-53e7-4502-a577-24394950045b
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 5ade1b66-53e7-4502-a577-24394950045b
warmup_steps: 5
weight_decay: 0.001
xformers_attention: true
```
</details><br>
# 5a691c61-27d4-4b83-94a9-786e9329fcca
This model is a fine-tuned version of [unsloth/mistral-7b-v0.2](https://huggingface.co/unsloth/mistral-7b-v0.2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (torch) with optimizer_args adam_beta1=0.9, adam_beta2=0.95, adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0001 | 1 | nan |
| 0.0 | 0.0003 | 5 | nan |
| 0.0 | 0.0006 | 10 | nan |
| 0.0 | 0.0009 | 15 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
HPLT/sft-fpft-multilingual-downsampled-bloom-3b | HPLT | "2025-04-06T08:37:29Z" | 16 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"bloom",
"text-generation",
"generation",
"question answering",
"instruction tuning",
"bg",
"cs",
"zh",
"de",
"fi",
"fr",
"ru",
"es",
"arxiv:2309.08958",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-04-05T10:32:16Z" | |
diaenra/d62080a2-d983-485f-afad-61ace279da2e | diaenra | "2025-01-19T08:12:31Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"olmo",
"axolotl",
"generated_from_trainer",
"base_model:katuni4ka/tiny-random-olmo-hf",
"base_model:adapter:katuni4ka/tiny-random-olmo-hf",
"region:us"
] | null | "2025-01-19T05:33:37Z" | ---
library_name: peft
base_model: katuni4ka/tiny-random-olmo-hf
tags:
- axolotl
- generated_from_trainer
model-index:
- name: d62080a2-d983-485f-afad-61ace279da2e
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: katuni4ka/tiny-random-olmo-hf
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 437cafc7b90d8f2d_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/437cafc7b90d8f2d_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_steps: null
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 8
gradient_checkpointing: true
group_by_length: true
hub_model_id: diaenra/d62080a2-d983-485f-afad-61ace279da2e
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 70GB
micro_batch_size: 4
mlflow_experiment_name: /tmp/437cafc7b90d8f2d_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 2
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: null
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: diaenra-tao-miner
wandb_mode: online
wandb_name: b612b32f-2e8a-4d95-ac6f-83bff6bbaa8a
wandb_project: tao
wandb_run: diaenra
wandb_runid: b612b32f-2e8a-4d95-ac6f-83bff6bbaa8a
warmup_steps: 10
weight_decay: 0.0
xformers_attention: true
```
</details><br>
# d62080a2-d983-485f-afad-61ace279da2e
This model is a fine-tuned version of [katuni4ka/tiny-random-olmo-hf](https://huggingface.co/katuni4ka/tiny-random-olmo-hf) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 10.5764
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:-----:|:---------------:|
| 10.6059 | 1.0000 | 26471 | 10.5837 |
| 10.1376 | 2.0000 | 52942 | 10.5764 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mradermacher/llama-3-gutenberg-8B-GGUF | mradermacher | "2024-05-06T09:18:22Z" | 17 | 0 | transformers | [
"transformers",
"gguf",
"en",
"dataset:jondurbin/gutenberg-dpo-v0.1",
"base_model:nbeerbower/llama-3-gutenberg-8B",
"base_model:quantized:nbeerbower/llama-3-gutenberg-8B",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-05-06T06:47:45Z" | ---
base_model: nbeerbower/llama-3-gutenberg-8B
datasets:
- jondurbin/gutenberg-dpo-v0.1
language:
- en
library_name: transformers
license: other
license_name: llama3
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hfhfix -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/nbeerbower/llama-3-gutenberg-8B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
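For example, the quants listed below can be run from Python via `llama-cpp-python`; this is a hedged sketch, and the filename should be adjusted to whichever quant you downloaded:
```python
from llama_cpp import Llama

# Assumes you downloaded e.g. the Q4_K_M file from this repo
llm = Llama(model_path="llama-3-gutenberg-8B.Q4_K_M.gguf", n_ctx=4096)
result = llm("Write the opening line of a gothic novel.", max_tokens=64)
print(result["choices"][0]["text"])
```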
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/llama-3-gutenberg-8B-GGUF/resolve/main/llama-3-gutenberg-8B.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-gutenberg-8B-GGUF/resolve/main/llama-3-gutenberg-8B.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-gutenberg-8B-GGUF/resolve/main/llama-3-gutenberg-8B.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-gutenberg-8B-GGUF/resolve/main/llama-3-gutenberg-8B.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/llama-3-gutenberg-8B-GGUF/resolve/main/llama-3-gutenberg-8B.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-gutenberg-8B-GGUF/resolve/main/llama-3-gutenberg-8B.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/llama-3-gutenberg-8B-GGUF/resolve/main/llama-3-gutenberg-8B.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-gutenberg-8B-GGUF/resolve/main/llama-3-gutenberg-8B.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-gutenberg-8B-GGUF/resolve/main/llama-3-gutenberg-8B.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/llama-3-gutenberg-8B-GGUF/resolve/main/llama-3-gutenberg-8B.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/llama-3-gutenberg-8B-GGUF/resolve/main/llama-3-gutenberg-8B.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-gutenberg-8B-GGUF/resolve/main/llama-3-gutenberg-8B.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-gutenberg-8B-GGUF/resolve/main/llama-3-gutenberg-8B.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/llama-3-gutenberg-8B-GGUF/resolve/main/llama-3-gutenberg-8B.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/llama-3-gutenberg-8B-GGUF/resolve/main/llama-3-gutenberg-8B.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
alohalukason/szonemi | alohalukason | "2025-03-09T21:39:25Z" | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | "2025-03-09T21:19:35Z" | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: szonemi
---
# Szonemi
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `szonemi` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('alohalukason/szonemi', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
sb3/ddpg-Walker2DBulletEnv-v0 | sb3 | "2022-10-11T15:19:35Z" | 2 | 0 | stable-baselines3 | [
"stable-baselines3",
"Walker2DBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2022-06-02T20:43:12Z" | ---
library_name: stable-baselines3
tags:
- Walker2DBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DDPG
results:
- metrics:
- type: mean_reward
value: 1495.73 +/- 612.27
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Walker2DBulletEnv-v0
type: Walker2DBulletEnv-v0
---
# **DDPG** Agent playing **Walker2DBulletEnv-v0**
This is a trained model of a **DDPG** agent playing **Walker2DBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo ddpg --env Walker2DBulletEnv-v0 -orga sb3 -f logs/
python enjoy.py --algo ddpg --env Walker2DBulletEnv-v0 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo ddpg --env Walker2DBulletEnv-v0 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo ddpg --env Walker2DBulletEnv-v0 -f logs/ -orga sb3
```
## Hyperparameters
```python
OrderedDict([('batch_size', 256),
('buffer_size', 1000000),
('env_wrapper', 'sb3_contrib.common.wrappers.TimeFeatureWrapper'),
('gamma', 0.98),
('gradient_steps', -1),
('learning_rate', 0.0007),
('learning_starts', 10000),
('n_timesteps', 1000000.0),
('noise_std', 0.1),
('noise_type', 'normal'),
('policy', 'MlpPolicy'),
('policy_kwargs', 'dict(net_arch=[400, 300])'),
('train_freq', [1, 'episode']),
('normalize', False)])
```
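To load the checkpoint directly in Python instead of through the zoo scripts, a minimal sketch with `huggingface_sb3` (the filename follows the usual zoo convention and is an assumption; `pybullet_envs` must be installed to register the environment):
```python
import gym
import pybullet_envs  # noqa: F401 -- registers Walker2DBulletEnv-v0
from huggingface_sb3 import load_from_hub
from sb3_contrib.common.wrappers import TimeFeatureWrapper
from stable_baselines3 import DDPG

checkpoint = load_from_hub(
    repo_id="sb3/ddpg-Walker2DBulletEnv-v0",
    filename="ddpg-Walker2DBulletEnv-v0.zip",
)
model = DDPG.load(checkpoint)

# The agent was trained with TimeFeatureWrapper (see env_wrapper above),
# so the evaluation env needs the same wrapper.
env = TimeFeatureWrapper(gym.make("Walker2DBulletEnv-v0"))
obs = env.reset()
action, _ = model.predict(obs, deterministic=True)
```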
|
huggingtweets/90snormmcdonald | huggingtweets | "2023-01-31T03:03:17Z" | 3 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-01-31T03:01:50Z" | ---
language: en
thumbnail: http://www.huggingtweets.com/90snormmcdonald/1675134192089/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1339391092/macdonald_400x400.gif')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">macdonald</div>
<div style="text-align: center; font-size: 14px;">@90snormmcdonald</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from macdonald.
| Data | macdonald |
| --- | --- |
| Tweets downloaded | 105 |
| Retweets | 0 |
| Short tweets | 4 |
| Tweets kept | 101 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/rjng7zxe/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @90snormmcdonald's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/rp8ijnsb) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/rp8ijnsb/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/90snormmcdonald')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
kanishka/cria-babylm2-subset-default-3e-4 | kanishka | "2024-07-25T10:58:46Z" | 6 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"dataset:kanishka/babylm2-subset",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-07-25T04:16:56Z" | ---
tags:
- generated_from_trainer
datasets:
- kanishka/babylm2-subset
metrics:
- accuracy
model-index:
- name: cria-babylm2-subset-default-3e-4
results:
- task:
name: Causal Language Modeling
type: text-generation
dataset:
name: kanishka/babylm2-subset
type: kanishka/babylm2-subset
metrics:
- name: Accuracy
type: accuracy
value: 0.5183717396220663
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cria-babylm2-subset-default-3e-4
This model was trained from scratch on the kanishka/babylm2-subset dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6186
- Accuracy: 0.5184
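A minimal generation sketch with the 🤗 `pipeline` API, assuming the trained weights are hosted under this repo id:
```python
from transformers import pipeline

generator = pipeline("text-generation", model="kanishka/cria-babylm2-subset-default-3e-4")
print(generator("Once upon a time", max_new_tokens=30)[0]["generated_text"])
```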
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 32000
- num_epochs: 10.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:--------:|
| 2.5065 | 1.0 | 14142 | 2.7266 | 0.4912 |
| 2.3323 | 2.0 | 28284 | 2.5642 | 0.5074 |
| 2.2158 | 3.0 | 42426 | 2.4670 | 0.5184 |
| 2.1109 | 4.0 | 56568 | 2.4178 | 0.5249 |
| 2.0194 | 5.0 | 70710 | 2.4001 | 0.5280 |
| 1.938 | 6.0 | 84852 | 2.4067 | 0.5291 |
| 1.8569 | 7.0 | 98994 | 2.4313 | 0.5283 |
| 1.7668 | 8.0 | 113136 | 2.4766 | 0.5260 |
| 1.6733 | 9.0 | 127278 | 2.5417 | 0.5229 |
| 1.579 | 10.0 | 141420 | 2.6186 | 0.5184 |
### Framework versions
- Transformers 4.42.4
- Pytorch 2.2.0+cu121
- Datasets 2.16.1
- Tokenizers 0.19.1
|
paola-md/recipe-lr2e05-wd0.1-bs32 | paola-md | "2022-08-28T04:28:49Z" | 163 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2022-08-28T04:15:07Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: recipe-lr2e05-wd0.1-bs32
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# recipe-lr2e05-wd0.1-bs32
This model is a fine-tuned version of [paola-md/recipe-distilroberta-Is](https://huggingface.co/paola-md/recipe-distilroberta-Is) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2861
- Rmse: 0.5349
- Mse: 0.2861
- Mae: 0.4436
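Given the RMSE/MAE metrics, the classification head appears to be a single-label regression head. A hedged inference sketch under that assumption:
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "paola-md/recipe-lr2e05-wd0.1-bs32"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("Combine flour, sugar and butter; bake for 25 minutes.", return_tensors="pt")
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()  # single regression output, a rating-like score
print(score)
```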
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rmse | Mse | Mae |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|
| 0.2775 | 1.0 | 623 | 0.2744 | 0.5238 | 0.2744 | 0.4159 |
| 0.274 | 2.0 | 1246 | 0.2737 | 0.5232 | 0.2737 | 0.4163 |
| 0.2724 | 3.0 | 1869 | 0.2861 | 0.5349 | 0.2861 | 0.4436 |
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.9.0+cu111
- Datasets 2.4.0
- Tokenizers 0.12.1
|
lhong4759/8fd997a7-25e7-4cd2-9940-d243014883db | lhong4759 | "2025-01-19T20:49:32Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"gemma",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/codegemma-7b-it",
"base_model:adapter:unsloth/codegemma-7b-it",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-01-19T20:47:44Z" | ---
library_name: peft
license: apache-2.0
base_model: unsloth/codegemma-7b-it
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 8fd997a7-25e7-4cd2-9940-d243014883db
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/codegemma-7b-it
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- edaa3d5d217efafe_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/edaa3d5d217efafe_train_data.json
type:
field_instruction: context
field_output: completion_file
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: lhong4759/8fd997a7-25e7-4cd2-9940-d243014883db
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/edaa3d5d217efafe_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 3fb4eb2b-db1f-4607-8c33-7d7c962e083b
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 3fb4eb2b-db1f-4607-8c33-7d7c962e083b
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 8fd997a7-25e7-4cd2-9940-d243014883db
This model is a fine-tuned version of [unsloth/codegemma-7b-it](https://huggingface.co/unsloth/codegemma-7b-it) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8512
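For deployment, the LoRA adapter can be merged into the base model so no PEFT dependency is needed at inference time. A minimal sketch, assuming the adapter weights are hosted in this repo:
```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("unsloth/codegemma-7b-it")
model = PeftModel.from_pretrained(base, "lhong4759/8fd997a7-25e7-4cd2-9940-d243014883db")

# Fold the LoRA deltas into the base weights and drop the PEFT wrappers
merged = model.merge_and_unload()
merged.save_pretrained("codegemma-7b-it-merged")
```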
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.9766 | 0.8571 | 3 | 0.8601 |
| 1.5097 | 1.1429 | 4 | 0.8512 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mradermacher/Dumpling-Qwen2.5-VL-7B-GGUF | mradermacher | "2025-04-02T11:26:31Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"multimodal",
"uncensored",
"en",
"dataset:nbeerbower/GreatFirewall-DPO",
"dataset:nbeerbower/Schule-DPO",
"dataset:nbeerbower/Purpura-DPO",
"dataset:nbeerbower/Arkhaios-DPO",
"dataset:jondurbin/truthy-dpo-v0.1",
"dataset:antiven0m/physical-reasoning-dpo",
"dataset:flammenai/Date-DPO-NoAsterisks",
"dataset:flammenai/Prude-Phi3-DPO",
"dataset:Atsunori/HelpSteer2-DPO",
"dataset:jondurbin/gutenberg-dpo-v0.1",
"dataset:nbeerbower/gutenberg2-dpo",
"dataset:nbeerbower/gutenberg-moderne-dpo",
"base_model:nbeerbower/Dumpling-Qwen2.5-VL-7B",
"base_model:quantized:nbeerbower/Dumpling-Qwen2.5-VL-7B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-04-02T07:48:39Z" | ---
base_model: nbeerbower/Dumpling-Qwen2.5-VL-7B
datasets:
- nbeerbower/GreatFirewall-DPO
- nbeerbower/Schule-DPO
- nbeerbower/Purpura-DPO
- nbeerbower/Arkhaios-DPO
- jondurbin/truthy-dpo-v0.1
- antiven0m/physical-reasoning-dpo
- flammenai/Date-DPO-NoAsterisks
- flammenai/Prude-Phi3-DPO
- Atsunori/HelpSteer2-DPO
- jondurbin/gutenberg-dpo-v0.1
- nbeerbower/gutenberg2-dpo
- nbeerbower/gutenberg-moderne-dpo
language:
- en
library_name: transformers
license_link: https://huggingface.co/Qwen/Qwen2.5-VL-3B-Instruct/blob/main/LICENSE
license_name: qwen-research
quantized_by: mradermacher
tags:
- multimodal
- uncensored
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/nbeerbower/Dumpling-Qwen2.5-VL-7B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Dumpling-Qwen2.5-VL-7B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Dumpling-Qwen2.5-VL-7B-GGUF/resolve/main/Dumpling-Qwen2.5-VL-7B.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Dumpling-Qwen2.5-VL-7B-GGUF/resolve/main/Dumpling-Qwen2.5-VL-7B.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Dumpling-Qwen2.5-VL-7B-GGUF/resolve/main/Dumpling-Qwen2.5-VL-7B.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Dumpling-Qwen2.5-VL-7B-GGUF/resolve/main/Dumpling-Qwen2.5-VL-7B.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Dumpling-Qwen2.5-VL-7B-GGUF/resolve/main/Dumpling-Qwen2.5-VL-7B.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Dumpling-Qwen2.5-VL-7B-GGUF/resolve/main/Dumpling-Qwen2.5-VL-7B.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Dumpling-Qwen2.5-VL-7B-GGUF/resolve/main/Dumpling-Qwen2.5-VL-7B.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Dumpling-Qwen2.5-VL-7B-GGUF/resolve/main/Dumpling-Qwen2.5-VL-7B.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Dumpling-Qwen2.5-VL-7B-GGUF/resolve/main/Dumpling-Qwen2.5-VL-7B.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Dumpling-Qwen2.5-VL-7B-GGUF/resolve/main/Dumpling-Qwen2.5-VL-7B.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Dumpling-Qwen2.5-VL-7B-GGUF/resolve/main/Dumpling-Qwen2.5-VL-7B.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Dumpling-Qwen2.5-VL-7B-GGUF/resolve/main/Dumpling-Qwen2.5-VL-7B.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
alon-albalak/xlm-roberta-large-xquad | alon-albalak | "2023-07-01T00:31:00Z" | 266 | 2 | transformers | [
"transformers",
"pytorch",
"safetensors",
"xlm-roberta",
"question-answering",
"multilingual",
"dataset:xquad",
"endpoints_compatible",
"region:us"
] | question-answering | "2022-03-02T23:29:05Z" | ---
tags:
- multilingual
datasets:
- xquad
---
# xlm-roberta-large for multilingual QA
# Overview
**Language Model**: xlm-roberta-large \
**Downstream task**: Extractive QA \
**Training data**: [XQuAD](https://github.com/deepmind/xquad) \
**Testing Data**: [XQuAD](https://github.com/deepmind/xquad)
# Hyperparameters
```python
batch_size = 48
n_epochs = 13
max_seq_len = 384
doc_stride = 128
learning_rate = 3e-5
```
# Performance
Evaluated on held-out test set from XQuAD
```python
"exact_match": 87.12546816479401,
"f1": 94.77703248802527,
"test_samples": 2307
```
# Usage
## In Transformers
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
model_name = "alon-albalak/xlm-roberta-large-xquad"
# a) Get predictions
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
QA_input = {
'question': 'Why is model conversion important?',
'context': 'The option to convert models between FARM and transformers gives freedom to the user and let people easily switch between frameworks.'
}
res = nlp(QA_input)
# b) Load model & tokenizer
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
## In FARM
```python
from farm.modeling.adaptive_model import AdaptiveModel
from farm.modeling.tokenization import Tokenizer
from farm.infer import QAInferencer
model_name = "alon-albalak/xlm-roberta-large-xquad"
# a) Get predictions
nlp = QAInferencer.load(model_name)
QA_input = [{"questions": ["Why is model conversion important?"],
"text": "The option to convert models between FARM and transformers gives freedom to the user and let people easily switch between frameworks."}]
res = nlp.inference_from_dicts(dicts=QA_input, rest_api_schema=True)
# b) Load model & tokenizer
model = AdaptiveModel.convert_from_transformers(model_name, device="cpu", task_type="question_answering")
tokenizer = Tokenizer.load(model_name)
```
## In Haystack
```python
from haystack.nodes import FARMReader, TransformersReader  # import path assumes Haystack v1.x

reader = FARMReader(model_name_or_path="alon-albalak/xlm-roberta-large-xquad")
# or
reader = TransformersReader(model="alon-albalak/xlm-roberta-large-xquad", tokenizer="alon-albalak/xlm-roberta-large-xquad")
```
Usage instructions for FARM and Haystack were adapted from https://huggingface.co/deepset/xlm-roberta-large-squad2 |
BernTheCreator/EZO-Common-9B-gemma-2-it-Q4_0-GGUF | BernTheCreator | "2025-02-01T07:11:38Z" | 28 | 0 | transformers | [
"transformers",
"gguf",
"conversational",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"base_model:AXCXEPT/EZO-Common-9B-gemma-2-it",
"base_model:quantized:AXCXEPT/EZO-Common-9B-gemma-2-it",
"license:gemma",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-02-01T07:04:02Z" | ---
license: gemma
library_name: transformers
pipeline_tag: text-generation
tags:
- conversational
- llama-cpp
- gguf-my-repo
base_model: AXCXEPT/EZO-Common-9B-gemma-2-it
---
# BernTheCreator/EZO-Common-9B-gemma-2-it-Q4_0-GGUF
This model was converted to GGUF format from [`AXCXEPT/EZO-Common-9B-gemma-2-it`](https://huggingface.co/AXCXEPT/EZO-Common-9B-gemma-2-it) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/AXCXEPT/EZO-Common-9B-gemma-2-it) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo BernTheCreator/EZO-Common-9B-gemma-2-it-Q4_0-GGUF --hf-file ezo-common-9b-gemma-2-it-q4_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo BernTheCreator/EZO-Common-9B-gemma-2-it-Q4_0-GGUF --hf-file ezo-common-9b-gemma-2-it-q4_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo BernTheCreator/EZO-Common-9B-gemma-2-it-Q4_0-GGUF --hf-file ezo-common-9b-gemma-2-it-q4_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo BernTheCreator/EZO-Common-9B-gemma-2-it-Q4_0-GGUF --hf-file ezo-common-9b-gemma-2-it-q4_0.gguf -c 2048
```
|
mayitbe/bge_finetune_hadoop | mayitbe | "2024-07-06T03:28:39Z" | 6 | 0 | sentence-transformers | [
"sentence-transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"sentence-similarity",
"feature-extraction",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | "2024-07-06T00:59:17Z" | ---
datasets: []
language: []
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
widget: []
---
# SentenceTransformer
This is a [sentence-transformers](https://www.SBERT.net) model. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
<!-- - **Base model:** [Unknown](https://huggingface.co/unknown) -->
- **Maximum Sequence Length:** 8192 tokens
- **Output Dimensionality:** 1024 tokens
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("mayitbe/bge_finetune_hadoop")
# Run inference
sentences = [
'The weather is lovely today.',
"It's so sunny outside!",
'He drove to the stadium.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
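As a small illustration of the semantic-search use mentioned above, here is a minimal sketch (the query and documents are made-up examples):

```python
# Rank candidate documents against a query by embedding similarity
query_embedding = model.encode("How do I change the HDFS replication factor?")
doc_embeddings = model.encode([
    "Set dfs.replication in hdfs-site.xml to change the replication factor.",
    "Spark is a general-purpose cluster computing engine.",
])
scores = model.similarity(query_embedding, doc_embeddings)
print(scores)  # higher score = more relevant document
```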
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.0.1
- Transformers: 4.41.2
- PyTorch: 2.3.0+cu121
- Accelerate: 0.32.1
- Datasets: 2.20.0
- Tokenizers: 0.19.1
## Citation
### BibTeX
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
9wimu9/xlm-roberta-large-finetuned-sinquad-v2 | 9wimu9 | "2023-06-06T17:40:23Z" | 108 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"question-answering",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] | question-answering | "2023-06-06T16:48:18Z" | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: xlm-roberta-large-finetuned-sinquad-v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-large-finetuned-sinquad-v2
This model is a fine-tuned version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7850
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.5061 | 0.99 | 23 | 1.3749 |
| 0.8976 | 1.98 | 46 | 0.8803 |
| 0.7572 | 2.97 | 69 | 0.7758 |
| 0.6854 | 4.0 | 93 | 0.7380 |
| 0.5903 | 4.99 | 116 | 0.7158 |
| 0.5114 | 5.98 | 139 | 0.7311 |
| 0.4291 | 6.97 | 162 | 0.7533 |
| 0.4113 | 8.0 | 186 | 0.7650 |
| 0.3564 | 8.99 | 209 | 0.7734 |
| 0.3516 | 9.89 | 230 | 0.7850 |
### Framework versions
- Transformers 4.30.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.6.1
- Tokenizers 0.12.1
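For illustration, a minimal QA sketch (assumes the standard Transformers `question-answering` pipeline; the inputs are made-up examples):

```python
from transformers import pipeline

qa = pipeline("question-answering", model="9wimu9/xlm-roberta-large-finetuned-sinquad-v2")
result = qa(
    question="What is the capital of Sri Lanka?",
    context="Sri Jayawardenepura Kotte is the administrative capital of Sri Lanka.",
)
print(result)  # answer span with score
```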
Evaluation results on the held-out test set: `{'exact_match': 67.75914634146342, 'f1': 94.77703248802527}` |
hanyundudddd/hanyundudddd | hanyundudddd | "2024-05-17T05:01:40Z" | 120 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-05-17T05:01:21Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
lesso04/8803511f-9aa9-47ef-9843-9669f95ca86e | lesso04 | "2025-01-16T05:19:02Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Llama-3.2-3B",
"base_model:adapter:unsloth/Llama-3.2-3B",
"license:llama3.2",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-01-16T05:16:48Z" | ---
library_name: peft
license: llama3.2
base_model: unsloth/Llama-3.2-3B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 8803511f-9aa9-47ef-9843-9669f95ca86e
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Llama-3.2-3B
bf16: true
chat_template: llama3
datasets:
- data_files:
- ceb2c02370daa871_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/ceb2c02370daa871_train_data.json
type:
field_instruction: Question
field_output: Answer
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 2
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: lesso04/8803511f-9aa9-47ef-9843-9669f95ca86e
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 25
micro_batch_size: 2
mlflow_experiment_name: /tmp/ceb2c02370daa871_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 38b8445d-9f66-4fda-a8ef-9f3949de9864
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 38b8445d-9f66-4fda-a8ef-9f3949de9864
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 8803511f-9aa9-47ef-9843-9669f95ca86e
This model is a fine-tuned version of [unsloth/Llama-3.2-3B](https://huggingface.co/unsloth/Llama-3.2-3B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0033 | 1 | nan |
| 0.0 | 0.0163 | 5 | nan |
| 0.0 | 0.0327 | 10 | nan |
| 0.0 | 0.0490 | 15 | nan |
| 0.0 | 0.0654 | 20 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Pamzyy/sinhala_gpt2 | Pamzyy | "2024-09-03T06:31:44Z" | 7 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-08-28T15:15:53Z" | ---
library_name: transformers
license: mit
base_model: gpt2
tags:
- generated_from_trainer
model-index:
- name: sinhala_gpt2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sinhala_gpt2
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.4181
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 12.5768 | 0.0737 | 20 | 11.7031 |
| 10.6016 | 0.1475 | 40 | 10.1428 |
| 9.5592 | 0.2212 | 60 | 8.4000 |
| 7.7086 | 0.2949 | 80 | 6.1398 |
| 6.1288 | 0.3687 | 100 | 5.1259 |
| 5.2551 | 0.4424 | 120 | 4.4283 |
| 4.7127 | 0.5161 | 140 | 4.0241 |
| 4.3572 | 0.5899 | 160 | 3.7673 |
| 4.1243 | 0.6636 | 180 | 3.6012 |
| 3.9714 | 0.7373 | 200 | 3.5126 |
| 3.8867 | 0.8111 | 220 | 3.4489 |
| 3.8334 | 0.8848 | 240 | 3.4256 |
| 3.8204 | 0.9585 | 260 | 3.4181 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1
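For illustration, a minimal inference sketch (assumes the standard Transformers `text-generation` pipeline; the prompt is an arbitrary example):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="Pamzyy/sinhala_gpt2")
print(generator("ලංකාව", max_new_tokens=30)[0]["generated_text"])
```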
|
xfh/Chinese-Llama-2-7b-f16-ggml | xfh | "2023-07-27T14:47:22Z" | 0 | 0 | null | [
"zh",
"en",
"license:openrail",
"region:us"
] | null | "2023-07-27T14:21:01Z" | ---
license: openrail
language:
- zh
- en
---
This is the Chinese-Llama-2-7b f16 GGML model for use with llama.cpp. You can run:
```shell
./main -m Chinese-Llama-2-7b-f16-ggml.bin -p 'hello world'
```
For the original model, see: https://huggingface.co/LinkSoul/Chinese-Llama-2-7b |
huggingartists/duran-duran | huggingartists | "2021-08-10T12:53:45Z" | 4 | 0 | transformers | [
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingartists",
"lyrics",
"lm-head",
"causal-lm",
"en",
"dataset:huggingartists/duran-duran",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2022-03-02T23:29:05Z" | ---
language: en
datasets:
- huggingartists/duran-duran
tags:
- huggingartists
- lyrics
- lm-head
- causal-lm
widget:
- text: "I am"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/95697394e4f58c9aa507e408f51008db.1000x1000x1.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Duran Duran</div>
<a href="https://genius.com/artists/duran-duran">
<div style="text-align: center; font-size: 14px;">@duran-duran</div>
</a>
</div>
I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists).
Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)!
## How does it work?
To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist).
## Training data
The model was trained on lyrics from Duran Duran.
Dataset is available [here](https://huggingface.co/datasets/huggingartists/duran-duran).
And can be used with:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/duran-duran")
```
[Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/dy133fuf/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Duran Duran's lyrics.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/386u7cc3) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/386u7cc3/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingartists/duran-duran')
generator("I am", num_return_sequences=5)
```
Or with Transformers library:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("huggingartists/duran-duran")
model = AutoModelWithLMHead.from_pretrained("huggingartists/duran-duran")
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the artist's lyrics further affects the text generated by the model.
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
|
lesso08/f9f26d18-854a-41e0-9c53-c36cd7b8ef9d | lesso08 | "2025-01-24T04:48:13Z" | 10 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:NousResearch/Yarn-Llama-2-13b-128k",
"base_model:adapter:NousResearch/Yarn-Llama-2-13b-128k",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-01-24T02:01:08Z" | ---
library_name: peft
base_model: NousResearch/Yarn-Llama-2-13b-128k
tags:
- axolotl
- generated_from_trainer
model-index:
- name: f9f26d18-854a-41e0-9c53-c36cd7b8ef9d
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Yarn-Llama-2-13b-128k
bf16: auto
chat_template: llama3
datasets:
- data_files:
- 7636b89da0e37b72_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/7636b89da0e37b72_train_data.json
type:
field_instruction: problem
field_output: solution
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: lesso08/f9f26d18-854a-41e0-9c53-c36cd7b8ef9d
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/7636b89da0e37b72_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 2b76088f-1298-48c2-a9ee-52fcf11297cc
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 2b76088f-1298-48c2-a9ee-52fcf11297cc
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# f9f26d18-854a-41e0-9c53-c36cd7b8ef9d
This model is a fine-tuned version of [NousResearch/Yarn-Llama-2-13b-128k](https://huggingface.co/NousResearch/Yarn-Llama-2-13b-128k) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8300
## Model description
More information needed
## Intended uses & limitations
More information needed
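For illustration, a minimal sketch of loading this adapter on its base model (assumes the standard PEFT API; untested):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model, then attach this LoRA adapter on top of it
base = AutoModelForCausalLM.from_pretrained(
    "NousResearch/Yarn-Llama-2-13b-128k", trust_remote_code=True
)
model = PeftModel.from_pretrained(base, "lesso08/f9f26d18-854a-41e0-9c53-c36cd7b8ef9d")
tokenizer = AutoTokenizer.from_pretrained("NousResearch/Yarn-Llama-2-13b-128k")
```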
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.6582 | 0.6168 | 200 | 0.8300 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
antho-data/distilbert-base-uncased-finetuned-emotion | antho-data | "2022-03-09T21:27:17Z" | 6 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2022-03-09T20:30:05Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9235
- name: F1
type: f1
value: 0.9237367861627231
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2294
- Accuracy: 0.9235
- F1: 0.9237
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8637 | 1.0 | 250 | 0.3319 | 0.9075 | 0.9050 |
| 0.2634 | 2.0 | 500 | 0.2294 | 0.9235 | 0.9237 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.4
- Tokenizers 0.11.6
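For illustration, a minimal inference sketch (assumes the standard Transformers `text-classification` pipeline; the input is a made-up example):

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="antho-data/distilbert-base-uncased-finetuned-emotion",
)
print(classifier("I can't wait to see you again!"))  # emotion label with score
```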
|
AnkaNge/SmolLM2-FT-MyDataset | AnkaNge | "2025-03-25T14:45:32Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"smol-course",
"module_1",
"trl",
"sft",
"conversational",
"base_model:HuggingFaceTB/SmolLM2-135M",
"base_model:finetune:HuggingFaceTB/SmolLM2-135M",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-03-25T14:43:57Z" | ---
base_model: HuggingFaceTB/SmolLM2-135M
library_name: transformers
model_name: SmolLM2-FT-MyDataset
tags:
- generated_from_trainer
- smol-course
- module_1
- trl
- sft
licence: license
---
# Model Card for SmolLM2-FT-MyDataset
This model is a fine-tuned version of [HuggingFaceTB/SmolLM2-135M](https://huggingface.co/HuggingFaceTB/SmolLM2-135M).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="AnkaNge/SmolLM2-FT-MyDataset", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.12.1
- Transformers: 4.46.3
- Pytorch: 2.5.1
- Datasets: 3.1.0
- Tokenizers: 0.20.3
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
elinaparajuli/T5_Finetuned-finetuned | elinaparajuli | "2024-02-23T11:16:46Z" | 104 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"tensorboard",
"rust",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2024-02-23T10:50:45Z" | ---
license: apache-2.0
base_model: google-t5/t5-small
tags:
- generated_from_trainer
model-index:
- name: T5_Finetuned-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# T5_Finetuned-finetuned
This model is a fine-tuned version of [google-t5/t5-small](https://huggingface.co/google-t5/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2568
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 39 | 0.3849 |
| No log | 2.0 | 78 | 0.2738 |
| No log | 3.0 | 117 | 0.2568 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
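For illustration, a minimal inference sketch (the `summarize:` prefix and input are assumptions, since the training task is not documented):

```python
from transformers import pipeline

t2t = pipeline("text2text-generation", model="elinaparajuli/T5_Finetuned-finetuned")
print(t2t("summarize: The quick brown fox jumps over the lazy dog.")[0]["generated_text"])
```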
|
ManishW/text-classification-model | ManishW | "2023-05-13T03:57:37Z" | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-05-13T03:02:32Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
model-index:
- name: text-classification-model
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: test
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.93072
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# text-classification-model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2158
- Accuracy: 0.9307
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
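For illustration, these roughly correspond to the following Transformers `TrainingArguments` (a sketch, not the exact training script; the output path is assumed):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="text-classification-model",  # assumed output path
    learning_rate=1e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    num_train_epochs=2,
    seed=42,
    lr_scheduler_type="linear",
)
```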
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2859 | 1.0 | 782 | 0.1943 | 0.9241 |
| 0.1005 | 2.0 | 1564 | 0.2158 | 0.9307 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
RichardErkhov/SongTonyLi_-_Llama-3.2-1B-Instruct-CPT-D1_chosen-then-SFT-D1_chosen-pref-mix2-gguf | RichardErkhov | "2024-10-26T03:15:01Z" | 18 | 0 | null | [
"gguf",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-10-26T02:46:47Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Llama-3.2-1B-Instruct-CPT-D1_chosen-then-SFT-D1_chosen-pref-mix2 - GGUF
- Model creator: https://huggingface.co/SongTonyLi/
- Original model: https://huggingface.co/SongTonyLi/Llama-3.2-1B-Instruct-CPT-D1_chosen-then-SFT-D1_chosen-pref-mix2/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Llama-3.2-1B-Instruct-CPT-D1_chosen-then-SFT-D1_chosen-pref-mix2.Q2_K.gguf](https://huggingface.co/RichardErkhov/SongTonyLi_-_Llama-3.2-1B-Instruct-CPT-D1_chosen-then-SFT-D1_chosen-pref-mix2-gguf/blob/main/Llama-3.2-1B-Instruct-CPT-D1_chosen-then-SFT-D1_chosen-pref-mix2.Q2_K.gguf) | Q2_K | 0.54GB |
| [Llama-3.2-1B-Instruct-CPT-D1_chosen-then-SFT-D1_chosen-pref-mix2.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/SongTonyLi_-_Llama-3.2-1B-Instruct-CPT-D1_chosen-then-SFT-D1_chosen-pref-mix2-gguf/blob/main/Llama-3.2-1B-Instruct-CPT-D1_chosen-then-SFT-D1_chosen-pref-mix2.Q3_K_S.gguf) | Q3_K_S | 0.6GB |
| [Llama-3.2-1B-Instruct-CPT-D1_chosen-then-SFT-D1_chosen-pref-mix2.Q3_K.gguf](https://huggingface.co/RichardErkhov/SongTonyLi_-_Llama-3.2-1B-Instruct-CPT-D1_chosen-then-SFT-D1_chosen-pref-mix2-gguf/blob/main/Llama-3.2-1B-Instruct-CPT-D1_chosen-then-SFT-D1_chosen-pref-mix2.Q3_K.gguf) | Q3_K | 0.64GB |
| [Llama-3.2-1B-Instruct-CPT-D1_chosen-then-SFT-D1_chosen-pref-mix2.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/SongTonyLi_-_Llama-3.2-1B-Instruct-CPT-D1_chosen-then-SFT-D1_chosen-pref-mix2-gguf/blob/main/Llama-3.2-1B-Instruct-CPT-D1_chosen-then-SFT-D1_chosen-pref-mix2.Q3_K_M.gguf) | Q3_K_M | 0.64GB |
| [Llama-3.2-1B-Instruct-CPT-D1_chosen-then-SFT-D1_chosen-pref-mix2.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/SongTonyLi_-_Llama-3.2-1B-Instruct-CPT-D1_chosen-then-SFT-D1_chosen-pref-mix2-gguf/blob/main/Llama-3.2-1B-Instruct-CPT-D1_chosen-then-SFT-D1_chosen-pref-mix2.Q3_K_L.gguf) | Q3_K_L | 0.68GB |
| [Llama-3.2-1B-Instruct-CPT-D1_chosen-then-SFT-D1_chosen-pref-mix2.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/SongTonyLi_-_Llama-3.2-1B-Instruct-CPT-D1_chosen-then-SFT-D1_chosen-pref-mix2-gguf/blob/main/Llama-3.2-1B-Instruct-CPT-D1_chosen-then-SFT-D1_chosen-pref-mix2.IQ4_XS.gguf) | IQ4_XS | 0.7GB |
| [Llama-3.2-1B-Instruct-CPT-D1_chosen-then-SFT-D1_chosen-pref-mix2.Q4_0.gguf](https://huggingface.co/RichardErkhov/SongTonyLi_-_Llama-3.2-1B-Instruct-CPT-D1_chosen-then-SFT-D1_chosen-pref-mix2-gguf/blob/main/Llama-3.2-1B-Instruct-CPT-D1_chosen-then-SFT-D1_chosen-pref-mix2.Q4_0.gguf) | Q4_0 | 0.72GB |
| [Llama-3.2-1B-Instruct-CPT-D1_chosen-then-SFT-D1_chosen-pref-mix2.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/SongTonyLi_-_Llama-3.2-1B-Instruct-CPT-D1_chosen-then-SFT-D1_chosen-pref-mix2-gguf/blob/main/Llama-3.2-1B-Instruct-CPT-D1_chosen-then-SFT-D1_chosen-pref-mix2.IQ4_NL.gguf) | IQ4_NL | 0.72GB |
| [Llama-3.2-1B-Instruct-CPT-D1_chosen-then-SFT-D1_chosen-pref-mix2.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/SongTonyLi_-_Llama-3.2-1B-Instruct-CPT-D1_chosen-then-SFT-D1_chosen-pref-mix2-gguf/blob/main/Llama-3.2-1B-Instruct-CPT-D1_chosen-then-SFT-D1_chosen-pref-mix2.Q4_K_S.gguf) | Q4_K_S | 0.72GB |
| [Llama-3.2-1B-Instruct-CPT-D1_chosen-then-SFT-D1_chosen-pref-mix2.Q4_K.gguf](https://huggingface.co/RichardErkhov/SongTonyLi_-_Llama-3.2-1B-Instruct-CPT-D1_chosen-then-SFT-D1_chosen-pref-mix2-gguf/blob/main/Llama-3.2-1B-Instruct-CPT-D1_chosen-then-SFT-D1_chosen-pref-mix2.Q4_K.gguf) | Q4_K | 0.75GB |
| [Llama-3.2-1B-Instruct-CPT-D1_chosen-then-SFT-D1_chosen-pref-mix2.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/SongTonyLi_-_Llama-3.2-1B-Instruct-CPT-D1_chosen-then-SFT-D1_chosen-pref-mix2-gguf/blob/main/Llama-3.2-1B-Instruct-CPT-D1_chosen-then-SFT-D1_chosen-pref-mix2.Q4_K_M.gguf) | Q4_K_M | 0.75GB |
| [Llama-3.2-1B-Instruct-CPT-D1_chosen-then-SFT-D1_chosen-pref-mix2.Q4_1.gguf](https://huggingface.co/RichardErkhov/SongTonyLi_-_Llama-3.2-1B-Instruct-CPT-D1_chosen-then-SFT-D1_chosen-pref-mix2-gguf/blob/main/Llama-3.2-1B-Instruct-CPT-D1_chosen-then-SFT-D1_chosen-pref-mix2.Q4_1.gguf) | Q4_1 | 0.77GB |
| [Llama-3.2-1B-Instruct-CPT-D1_chosen-then-SFT-D1_chosen-pref-mix2.Q5_0.gguf](https://huggingface.co/RichardErkhov/SongTonyLi_-_Llama-3.2-1B-Instruct-CPT-D1_chosen-then-SFT-D1_chosen-pref-mix2-gguf/blob/main/Llama-3.2-1B-Instruct-CPT-D1_chosen-then-SFT-D1_chosen-pref-mix2.Q5_0.gguf) | Q5_0 | 0.83GB |
| [Llama-3.2-1B-Instruct-CPT-D1_chosen-then-SFT-D1_chosen-pref-mix2.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/SongTonyLi_-_Llama-3.2-1B-Instruct-CPT-D1_chosen-then-SFT-D1_chosen-pref-mix2-gguf/blob/main/Llama-3.2-1B-Instruct-CPT-D1_chosen-then-SFT-D1_chosen-pref-mix2.Q5_K_S.gguf) | Q5_K_S | 0.83GB |
| [Llama-3.2-1B-Instruct-CPT-D1_chosen-then-SFT-D1_chosen-pref-mix2.Q5_K.gguf](https://huggingface.co/RichardErkhov/SongTonyLi_-_Llama-3.2-1B-Instruct-CPT-D1_chosen-then-SFT-D1_chosen-pref-mix2-gguf/blob/main/Llama-3.2-1B-Instruct-CPT-D1_chosen-then-SFT-D1_chosen-pref-mix2.Q5_K.gguf) | Q5_K | 0.85GB |
| [Llama-3.2-1B-Instruct-CPT-D1_chosen-then-SFT-D1_chosen-pref-mix2.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/SongTonyLi_-_Llama-3.2-1B-Instruct-CPT-D1_chosen-then-SFT-D1_chosen-pref-mix2-gguf/blob/main/Llama-3.2-1B-Instruct-CPT-D1_chosen-then-SFT-D1_chosen-pref-mix2.Q5_K_M.gguf) | Q5_K_M | 0.85GB |
| [Llama-3.2-1B-Instruct-CPT-D1_chosen-then-SFT-D1_chosen-pref-mix2.Q5_1.gguf](https://huggingface.co/RichardErkhov/SongTonyLi_-_Llama-3.2-1B-Instruct-CPT-D1_chosen-then-SFT-D1_chosen-pref-mix2-gguf/blob/main/Llama-3.2-1B-Instruct-CPT-D1_chosen-then-SFT-D1_chosen-pref-mix2.Q5_1.gguf) | Q5_1 | 0.89GB |
| [Llama-3.2-1B-Instruct-CPT-D1_chosen-then-SFT-D1_chosen-pref-mix2.Q6_K.gguf](https://huggingface.co/RichardErkhov/SongTonyLi_-_Llama-3.2-1B-Instruct-CPT-D1_chosen-then-SFT-D1_chosen-pref-mix2-gguf/blob/main/Llama-3.2-1B-Instruct-CPT-D1_chosen-then-SFT-D1_chosen-pref-mix2.Q6_K.gguf) | Q6_K | 0.95GB |
| [Llama-3.2-1B-Instruct-CPT-D1_chosen-then-SFT-D1_chosen-pref-mix2.Q8_0.gguf](https://huggingface.co/RichardErkhov/SongTonyLi_-_Llama-3.2-1B-Instruct-CPT-D1_chosen-then-SFT-D1_chosen-pref-mix2-gguf/blob/main/Llama-3.2-1B-Instruct-CPT-D1_chosen-then-SFT-D1_chosen-pref-mix2.Q8_0.gguf) | Q8_0 | 1.23GB |
Original model description:
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
stablediffusionapi/cheese-daddys-landsc | stablediffusionapi | "2025-01-20T11:21:11Z" | 13 | 2 | diffusers | [
"diffusers",
"stablediffusionapi.com",
"stable-diffusion-api",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2023-03-27T04:00:42Z" | ---
license: creativeml-openrail-m
tags:
- stablediffusionapi.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---
# API Inference

## Get API Key
Get an API key from [ModelsLab](https://modelslab.com/); no payment is needed.

Replace the key in the code below and set **model_id** to "cheese-daddys-landsc".
Coding in PHP/Node/Java etc? Have a look at docs for more code examples: [View docs](https://stablediffusionapi.com/docs)
Model link: [View model](https://stablediffusionapi.com/models/hc-anything-v3-vae)
Credits: [View credits](https://civitai.com/?query=model_search)
View all models: [View Models](https://stablediffusionapi.com/models)
```python
import requests
import json

url = "https://stablediffusionapi.com/api/v3/dreambooth"

payload = json.dumps({
    "key": "",
    "model_id": "cheese-daddys-landsc",
    "prompt": "actual 8K portrait photo of gareth person, portrait, happy colors, bright eyes, clear eyes, warm smile, smooth soft skin, big dreamy eyes, beautiful intricate colored hair, symmetrical, anime wide eyes, soft lighting, detailed face, by makoto shinkai, stanley artgerm lau, wlop, rossdraws, concept art, digital painting, looking into camera",
    "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
    "width": "512",
    "height": "512",
    "samples": "1",
    "num_inference_steps": "30",
    "safety_checker": "no",
    "enhance_prompt": "yes",
    "seed": None,
    "guidance_scale": 7.5,
    "multi_lingual": "no",
    "panorama": "no",
    "self_attention": "no",
    "upscale": "no",
    "embeddings": "embeddings_model_id",
    "lora": "lora_model_id",
    "webhook": None,
    "track_id": None
})

headers = {
    'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)

print(response.text)
```
> Use this coupon code to get 25% off **DMGG0RBN** |
bveiseh/phi4-magpie-reasoning-v4-gguf | bveiseh | "2025-02-17T10:30:20Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"peft",
"bitsandbytes",
"torch",
"accelerate",
"trl",
"LoRA",
"text-generation",
"dataset:Magpie-Align/Magpie-Reasoning-V2-250K-CoT-Deepseek-R1-Llama-70B",
"base_model:microsoft/phi-4",
"base_model:quantized:microsoft/phi-4",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | "2025-02-17T09:36:21Z" | ---
license: mit
datasets:
- Magpie-Align/Magpie-Reasoning-V2-250K-CoT-Deepseek-R1-Llama-70B
base_model:
- microsoft/phi-4
pipeline_tag: text-generation
library_name: transformers
tags:
- transformers
- peft
- bitsandbytes
- torch
- accelerate
- trl
- LoRA
---
# Phi-4 Magpie Reasoning GGUF v4
This is a GGUF format version of the Phi-4 model fine-tuned on the Magpie dataset (v4).
## Model Details
- Base Model: Microsoft Phi-4 (14B parameters)
- Available Formats:
- GGUF FP16 (full precision)
- GGUF Q8 (8-bit quantization)
- Fine-tuning: LoRA with merged weights
- Training Dataset: Magpie Reasoning Dataset
- Version: 4
## Training Data
- 2,200 excellent quality examples
- 3,000 good quality examples
- Total training samples: 5,200
## Evaluation Dataset
- 5 very hard + excellent quality examples
- 5 medium + excellent quality examples
- 5 very easy + excellent quality examples
## Technical Details
- LoRA Parameters (mirrored in the sketch after this list):
- Rank (r): 24
- Alpha: 48
- Target Modules: q_proj, k_proj, v_proj, o_proj
- Dropout: 0.05
- Training Configuration:
- Epochs: 5
- Learning Rate: 3e-5
- Batch Size: 1 with gradient accumulation steps of 16
- Optimizer: AdamW (Fused)
- Precision: BFloat16 during training
- Available Formats: FP16 and 8-bit quantized GGUF
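For illustration, the LoRA parameters above roughly correspond to this PEFT configuration (a sketch, not the exact training code):

```python
from peft import LoraConfig

lora_config = LoraConfig(
    r=24,                 # rank
    lora_alpha=48,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
```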
## Usage with llama.cpp
For CPU inference with the Q8 model:

```bash
main -m phi4-magpie-reasoning-q8.gguf -n 512 --repeat_penalty 1.1 --color -i -r User:
```

For GPU inference with the FP16 model:

```bash
main -m phi4-magpie-reasoning-fp16.gguf -n 512 --repeat_penalty 1.1 --color -i -r User: --n-gpu-layers 35
```
## Model Sizes
- GGUF FP16 Format: ~28GB
- GGUF Q8 Format: ~14GB
- Original Model (14B parameters)
## License
This model inherits the license terms from Microsoft Phi-4 and the Magpie dataset. |
HumanFace/ppo-CartPole-v1 | HumanFace | "2023-04-25T11:57:42Z" | 0 | 0 | null | [
"tensorboard",
"CartPole-v1",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] | reinforcement-learning | "2023-04-25T09:32:48Z" | ---
tags:
- CartPole-v1
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 19.20 +/- 7.72
name: mean_reward
verified: false
---
# PPO Agent Playing CartPole-v1
This is a trained model of a PPO agent playing CartPole-v1.
# Hyperparameters
```python
{'exp_name': 'ppo'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'CartPole-v1'
'total_timesteps': 500
'learning_rate': 0.00025
'num_envs': 4
'num_steps': 128
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'HumanFace/ppo-CartPole-v1'
'batch_size': 512
'minibatch_size': 128}
```
|
Melo1512/vit-msn-small-beta-fia-manually-enhanced-HSV_test_3 | Melo1512 | "2025-01-27T17:25:36Z" | 8 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit_msn",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:Melo1512/vit-msn-small-beta-fia-manually-enhanced-HSV_test_2",
"base_model:finetune:Melo1512/vit-msn-small-beta-fia-manually-enhanced-HSV_test_2",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2025-01-27T17:12:45Z" | ---
library_name: transformers
base_model: Melo1512/vit-msn-small-beta-fia-manually-enhanced-HSV_test_2
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: vit-msn-small-beta-fia-manually-enhanced-HSV_test_3
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8802816901408451
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-msn-small-beta-fia-manually-enhanced-HSV_test_3
This model is a fine-tuned version of [Melo1512/vit-msn-small-beta-fia-manually-enhanced-HSV_test_2](https://huggingface.co/Melo1512/vit-msn-small-beta-fia-manually-enhanced-HSV_test_2) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5013
- Accuracy: 0.8803
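As a quick way to try the checkpoint, a minimal transformers pipeline sketch (the image path is illustrative; the labels come from the fine-tuning image folder):
```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="Melo1512/vit-msn-small-beta-fia-manually-enhanced-HSV_test_3",
)
print(classifier("example.jpg"))  # replace with a path to your own image
```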
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.15
- num_epochs: 50
- label_smoothing_factor: 0.1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| No log | 0.5714 | 1 | 0.5123 | 0.8873 |
| No log | 1.7143 | 3 | 0.5219 | 0.8873 |
| No log | 2.8571 | 5 | 0.5431 | 0.8732 |
| No log | 4.0 | 7 | 0.5444 | 0.8732 |
| No log | 4.5714 | 8 | 0.5336 | 0.8803 |
| 0.4252 | 5.7143 | 10 | 0.5235 | 0.8873 |
| 0.4252 | 6.8571 | 12 | 0.5269 | 0.8803 |
| 0.4252 | 8.0 | 14 | 0.5106 | 0.8873 |
| 0.4252 | 8.5714 | 15 | 0.5048 | 0.8873 |
| 0.4252 | 9.7143 | 17 | 0.5013 | 0.8803 |
| 0.4252 | 10.8571 | 19 | 0.5105 | 0.8803 |
| 0.4413 | 12.0 | 21 | 0.5256 | 0.8803 |
| 0.4413 | 12.5714 | 22 | 0.5303 | 0.8732 |
| 0.4413 | 13.7143 | 24 | 0.5218 | 0.8662 |
| 0.4413 | 14.8571 | 26 | 0.5188 | 0.8592 |
| 0.4413 | 16.0 | 28 | 0.5202 | 0.8592 |
| 0.4413 | 16.5714 | 29 | 0.5252 | 0.8592 |
| 0.437 | 17.7143 | 31 | 0.5385 | 0.8592 |
| 0.437 | 18.8571 | 33 | 0.5456 | 0.8592 |
| 0.437 | 20.0 | 35 | 0.5409 | 0.8732 |
| 0.437 | 20.5714 | 36 | 0.5375 | 0.8662 |
| 0.437 | 21.7143 | 38 | 0.5356 | 0.8662 |
| 0.4343 | 22.8571 | 40 | 0.5328 | 0.8803 |
| 0.4343 | 24.0 | 42 | 0.5318 | 0.8803 |
| 0.4343 | 24.5714 | 43 | 0.5330 | 0.8803 |
| 0.4343 | 25.7143 | 45 | 0.5334 | 0.8803 |
| 0.4343 | 26.8571 | 47 | 0.5332 | 0.8732 |
| 0.4343 | 28.0 | 49 | 0.5341 | 0.8732 |
| 0.4271 | 28.5714 | 50 | 0.5343 | 0.8732 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.2.0
- Tokenizers 0.19.1
|
TheBloke/Dr_Samantha-7B-AWQ | TheBloke | "2024-01-17T18:03:59Z" | 18 | 4 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"medical",
"en",
"zh",
"dataset:GBaker/MedQA-USMLE-4-options",
"dataset:cognitivecomputations/samantha-data",
"dataset:shibing624/medical",
"base_model:sethuiyer/Dr_Samantha-7b",
"base_model:quantized:sethuiyer/Dr_Samantha-7b",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"4-bit",
"awq",
"region:us"
] | text-generation | "2024-01-17T17:48:03Z" | ---
base_model: sethuiyer/Dr_Samantha-7b
datasets:
- GBaker/MedQA-USMLE-4-options
- cognitivecomputations/samantha-data
- shibing624/medical
inference: false
language:
- en
- zh
library_name: transformers
license: llama2
model_creator: Sethu Iyer
model_name: Dr Samantha 7B
model_type: llama
pipeline_tag: text-generation
prompt_template: 'Below is an instruction that describes a task. Write a response
that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'
quantized_by: TheBloke
tags:
- llama
- merge
- medical
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Dr Samantha 7B - AWQ
- Model creator: [Sethu Iyer](https://huggingface.co/sethuiyer)
- Original model: [Dr Samantha 7B](https://huggingface.co/sethuiyer/Dr_Samantha-7b)
<!-- description start -->
## Description
This repo contains AWQ model files for [Sethu Iyer's Dr Samantha 7B](https://huggingface.co/sethuiyer/Dr_Samantha-7b).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
### About AWQ
AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality compared to the most commonly used GPTQ settings.
AWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead.
It is supported by:
- [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ
- [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later for support for all model types.
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Dr_Samantha-7B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Dr_Samantha-7B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Dr_Samantha-7B-GGUF)
* [Sethu Iyer's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/sethuiyer/Dr_Samantha-7b)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
<!-- prompt-template end -->
<!-- README_AWQ.md-provided-files start -->
## Provided files, and AWQ parameters
I currently release 128g GEMM models only. The addition of group_size 32 models, and GEMV kernel models, is being actively considered.
Models are released as sharded safetensors files.
| Branch | Bits | GS | AWQ Dataset | Seq Len | Size |
| ------ | ---- | -- | ----------- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/Dr_Samantha-7B-AWQ/tree/main) | 4 | 128 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 2048 | 3.89 GB
<!-- README_AWQ.md-provided-files end -->
<!-- README_AWQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/Dr_Samantha-7B-AWQ`.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `Dr_Samantha-7B-AWQ`
7. Select **Loader: AutoAWQ**.
8. Click Load, and the model will load and is now ready for use.
9. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
10. Once you're ready, click the **Text Generation** tab and enter a prompt to get started!
<!-- README_AWQ.md-text-generation-webui end -->
<!-- README_AWQ.md-use-from-vllm start -->
## Multi-user inference server: vLLM
Documentation on installing and using vLLM [can be found here](https://vllm.readthedocs.io/en/latest/).
- Please ensure you are using vLLM version 0.2 or later.
- When using vLLM as a server, pass the `--quantization awq` parameter.
For example:
```shell
python3 -m vllm.entrypoints.api_server --model TheBloke/Dr_Samantha-7B-AWQ --quantization awq --dtype auto
```
- When using vLLM from Python code, again set `quantization=awq`.
For example:
```python
from vllm import LLM, SamplingParams
prompts = [
"Tell me about AI",
"Write a story about llamas",
"What is 291 - 150?",
"How much wood would a woodchuck chuck if a woodchuck could chuck wood?",
]
prompt_template=f'''Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'''
prompts = [prompt_template.format(prompt=prompt) for prompt in prompts]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)
llm = LLM(model="TheBloke/Dr_Samantha-7B-AWQ", quantization="awq", dtype="auto")
outputs = llm.generate(prompts, sampling_params)
# Print the outputs.
for output in outputs:
prompt = output.prompt
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```
<!-- README_AWQ.md-use-from-vllm start -->
<!-- README_AWQ.md-use-from-tgi start -->
## Multi-user inference server: Hugging Face Text Generation Inference (TGI)
Use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`
Example Docker parameters:
```shell
--model-id TheBloke/Dr_Samantha-7B-AWQ --port 3000 --quantize awq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```
Example Python code for interfacing with TGI (requires [huggingface-hub](https://github.com/huggingface/huggingface_hub) 0.17.0 or later):
```shell
pip3 install huggingface-hub
```
```python
from huggingface_hub import InferenceClient
endpoint_url = "https://your-endpoint-url-here"
prompt = "Tell me about AI"
prompt_template=f'''Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'''
client = InferenceClient(endpoint_url)
response = client.text_generation(prompt,
max_new_tokens=128,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1)
print(f"Model output: ", response)
```
<!-- README_AWQ.md-use-from-tgi end -->
<!-- README_AWQ.md-use-from-python start -->
## Inference from Python code using Transformers
### Install the necessary packages
- Requires: [Transformers](https://huggingface.co/docs/transformers) 4.35.0 or later.
- Requires: [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) 0.1.6 or later.
```shell
pip3 install --upgrade "autoawq>=0.1.6" "transformers>=4.35.0"
```
Note that if you are using PyTorch 2.0.1, the above AutoAWQ command will automatically upgrade you to PyTorch 2.1.0.
If you are using CUDA 11.8 and wish to continue using PyTorch 2.0.1, instead run this command:
```shell
pip3 install https://github.com/casper-hansen/AutoAWQ/releases/download/v0.1.6/autoawq-0.1.6+cu118-cp310-cp310-linux_x86_64.whl
```
If you have problems installing [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) using the pre-built wheels, install it from source instead:
```shell
pip3 uninstall -y autoawq
git clone https://github.com/casper-hansen/AutoAWQ
cd AutoAWQ
pip3 install .
```
### Transformers example code (requires Transformers 4.35.0 and later)
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer
model_name_or_path = "TheBloke/Dr_Samantha-7B-AWQ"
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
model = AutoModelForCausalLM.from_pretrained(
model_name_or_path,
low_cpu_mem_usage=True,
device_map="cuda:0"
)
# Using the text streamer to stream output one token at a time
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
prompt = "Tell me about AI"
prompt_template=f'''Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'''
# Convert prompt to tokens
tokens = tokenizer(
prompt_template,
return_tensors='pt'
).input_ids.cuda()
generation_params = {
"do_sample": True,
"temperature": 0.7,
"top_p": 0.95,
"top_k": 40,
"max_new_tokens": 512,
"repetition_penalty": 1.1
}
# Generate streamed output, visible one token at a time
generation_output = model.generate(
tokens,
streamer=streamer,
**generation_params
)
# Generation without a streamer, which will include the prompt in the output
generation_output = model.generate(
tokens,
**generation_params
)
# Get the tokens from the output, decode them, print them
token_output = generation_output[0]
text_output = tokenizer.decode(token_output)
print("model.generate output: ", text_output)
# Inference is also possible via Transformers' pipeline
from transformers import pipeline
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
**generation_params
)
pipe_output = pipe(prompt_template)[0]['generated_text']
print("pipeline output: ", pipe_output)
```
<!-- README_AWQ.md-use-from-python end -->
<!-- README_AWQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with:
- [text-generation-webui](https://github.com/oobabooga/text-generation-webui) using `Loader: AutoAWQ`.
- [vLLM](https://github.com/vllm-project/vllm) version 0.2.0 and later.
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) version 1.1.0 and later.
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later.
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) version 0.1.1 and later.
<!-- README_AWQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: Sethu Iyer's Dr Samantha 7B
# Dr. Samantha
<p align="center">
<img src="https://huggingface.co/sethuiyer/Dr_Samantha-7b/resolve/main/dr_samantha_anime_style_reduced_quality.webp" height="256px" alt="SynthIQ">
</p>
## Overview
Dr. Samantha is a language model made by merging `Severus27/BeingWell_llama2_7b` and `ParthasarathyShanmugam/llama-2-7b-samantha` using [mergekit](https://github.com/cg123/mergekit).
It combines the capabilities of a medical knowledge-focused model (trained on USMLE databases and doctor-patient interactions) with the philosophical, psychological, and relational understanding of the Samantha-7b model.
As both a medical consultant and personal counselor, Dr. Samantha can effectively support both physical and mental wellbeing - important for whole-person care.
# Yaml Config
```yaml
slices:
- sources:
- model: Severus27/BeingWell_llama2_7b
layer_range: [0, 32]
- model: ParthasarathyShanmugam/llama-2-7b-samantha
layer_range: [0, 32]
merge_method: slerp
base_model: TinyPixel/Llama-2-7B-bf16-sharded
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5 # fallback for rest of tensors
tokenizer_source: union
dtype: bfloat16
```
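A config like the one above can be applied with mergekit's command-line entry point (a sketch; the config file and output paths are illustrative):
```shell
mergekit-yaml dr_samantha.yml ./dr-samantha-7b-merged
```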
## Prompt Template
```text
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
What is your name?
### Response:
My name is Samantha.
```
## OpenLLM Leaderboard Performance
| T | Model | Average | ARC | Hellaswag | MMLU | TruthfulQA | Winogrande | GSM8K |
|---|----------------------------------|---------|-------|-----------|-------|------------|------------|-------|
| 1 | sethuiyer/Dr_Samantha-7b | 52.95 | 53.84 | 77.95 | 47.94 | 45.58 | 73.56 | 18.8 |
| 2 | togethercomputer/LLaMA-2-7B-32K-Instruct | 50.02 | 51.11 | 78.51 | 46.11 | 44.86 | 73.88 | 5.69 |
| 3 | togethercomputer/LLaMA-2-7B-32K | 47.07 | 47.53 | 76.14 | 43.33 | 39.23 | 71.9 | 4.32 |
## Subject-wise Accuracy
| Subject | Accuracy (%) |
|-----------------------|--------------|
| Clinical Knowledge | 52.83 |
| Medical Genetics | 49.00 |
| Human Aging | 58.29 |
| Human Sexuality | 55.73 |
| College Medicine | 38.73 |
| Anatomy | 41.48 |
| College Biology | 52.08 |
| College Medicine | 38.73 |
| High School Biology | 53.23 |
| Professional Medicine | 38.73 |
| Nutrition | 50.33 |
| Professional Psychology | 46.57 |
| Virology | 41.57 |
| High School Psychology | 66.60 |
| Average | 48.85% |
## Evaluation by GPT-4 across 25 random prompts from ChatDoctor-200k Dataset
### Overall Rating: 83.5/100
#### Pros:
- Demonstrates extensive medical knowledge through accurate identification of potential causes for various symptoms.
- Responses consistently emphasize the importance of seeking professional diagnoses and treatments.
- Advice to consult specialists for certain concerns is well-reasoned.
- Practical interim measures provided for symptom management in several cases.
- Consistent display of empathy, support, and reassurance for patients' well-being.
- Clear and understandable explanations of conditions and treatment options.
- Prompt responses addressing all aspects of medical inquiries.
#### Cons:
- Could occasionally place stronger emphasis on urgency when symptoms indicate potential emergencies.
- Discussion of differential diagnoses could explore a broader range of less common causes.
- Details around less common symptoms and their implications need more depth at times.
- Opportunities exist to gather clarifying details on symptom histories through follow-up questions.
- Consider exploring full medical histories to improve diagnostic context where relevant.
- Caution levels and risk factors associated with certain conditions could be underscored more.
|
John6666/obsession-illustriousxl-vpredv01-sdxl | John6666 | "2024-12-23T06:50:06Z" | 77 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"girls",
"v-pred",
"illustrious",
"en",
"base_model:Laxhar/noobai-XL-Vpred-0.6",
"base_model:finetune:Laxhar/noobai-XL-Vpred-0.6",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | "2024-11-20T09:43:57Z" | ---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- girls
- v-pred
- illustrious
base_model: Laxhar/noobai-XL-Vpred-0.6
---
The original model is [here](https://civitai.com/models/820208?modelVersionId=1080860).
This model was created by [rqdwdw](https://civitai.com/user/rqdwdw).
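A minimal diffusers loading sketch for this checkpoint (assumes a CUDA GPU; since this is a v-prediction model, the scheduler settings saved in the repo should be kept as-is):
```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "John6666/obsession-illustriousxl-vpredv01-sdxl",
    torch_dtype=torch.float16,
).to("cuda")

# Prompt and step count are illustrative.
image = pipe("1girl, masterpiece, best quality", num_inference_steps=28).images[0]
image.save("sample.png")
```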
|
Dawid511/speecht5_finetuned_librispeech_polish_epo10_batch15_gas3 | Dawid511 | "2025-01-12T22:48:33Z" | 22 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-to-audio | "2025-01-12T17:47:17Z" | ---
library_name: transformers
license: mit
base_model: microsoft/speecht5_tts
tags:
- generated_from_trainer
model-index:
- name: speecht5_finetuned_librispeech_polish_epo10_batch15_gas3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speecht5_finetuned_librispeech_polish_epo10_batch15_gas3
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4045
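Since this is a fine-tune of microsoft/speecht5_tts, inference should follow the usual SpeechT5 pattern (a sketch; the zero speaker embedding is a placeholder assumption — a real x-vector gives better voice quality):
```python
import torch
import soundfile as sf
from transformers import SpeechT5ForTextToSpeech, SpeechT5HifiGan, SpeechT5Processor

processor = SpeechT5Processor.from_pretrained("microsoft/speecht5_tts")
model = SpeechT5ForTextToSpeech.from_pretrained(
    "Dawid511/speecht5_finetuned_librispeech_polish_epo10_batch15_gas3"
)
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="Dzień dobry, jak się masz?", return_tensors="pt")
speaker_embeddings = torch.zeros((1, 512))  # placeholder; use a real x-vector if available
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sf.write("speech.wav", speech.numpy(), samplerate=16000)
```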
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 15
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 3
- total_train_batch_size: 45
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.3632 | 5.7143 | 200 | 0.4045 |
### Framework versions
- Transformers 4.47.1
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0
|
bowilleatyou/21bfad0a-1642-4506-8520-e89327b4b830 | bowilleatyou | "2025-04-14T01:44:18Z" | 0 | 0 | null | [
"region:us"
] | null | "2025-04-14T01:44:18Z" | <!DOCTYPE html>
<html class="" lang="en">
<head>
<meta charset="utf-8" />
<meta
name="viewport"
content="width=device-width, initial-scale=1.0, user-scalable=no"
/>
<meta
name="description"
content="We're on a journey to advance and democratize artificial intelligence through open source and open science."
/>
<meta property="fb:app_id" content="1321688464574422" />
<meta name="twitter:card" content="summary_large_image" />
<meta name="twitter:site" content="@huggingface" />
<meta
property="og:title"
content="Hugging Face - The AI community building the future."
/>
<meta property="og:type" content="website" />
<title>Hugging Face - The AI community building the future.</title>
<style>
body {
margin: 0;
}
main {
background-color: white;
min-height: 100vh;
padding: 7rem 1rem 8rem 1rem;
text-align: center;
font-family: Source Sans Pro, ui-sans-serif, system-ui, -apple-system,
BlinkMacSystemFont, Segoe UI, Roboto, Helvetica Neue, Arial, Noto Sans,
sans-serif, Apple Color Emoji, Segoe UI Emoji, Segoe UI Symbol,
Noto Color Emoji;
}
img {
width: 6rem;
height: 6rem;
margin: 0 auto 1rem;
}
h1 {
font-size: 3.75rem;
line-height: 1;
color: rgba(31, 41, 55, 1);
font-weight: 700;
box-sizing: border-box;
margin: 0 auto;
}
p, a {
color: rgba(107, 114, 128, 1);
font-size: 1.125rem;
line-height: 1.75rem;
max-width: 28rem;
box-sizing: border-box;
margin: 0 auto;
}
.dark main {
background-color: rgb(11, 15, 25);
}
.dark h1 {
color: rgb(209, 213, 219);
}
.dark p, .dark a {
color: rgb(156, 163, 175);
}
</style>
<script>
// On page load or when changing themes, best to add inline in `head` to avoid FOUC
const key = "_tb_global_settings";
let theme = window.matchMedia("(prefers-color-scheme: dark)").matches
? "dark"
: "light";
try {
const storageTheme = JSON.parse(window.localStorage.getItem(key)).theme;
if (storageTheme) {
theme = storageTheme === "dark" ? "dark" : "light";
}
} catch (e) {}
if (theme === "dark") {
document.documentElement.classList.add("dark");
} else {
document.documentElement.classList.remove("dark");
}
</script>
</head>
<body>
<main>
<img
src="https://cdn-media.huggingface.co/assets/huggingface_logo.svg"
alt=""
/>
<div>
<h1>429</h1>
<p>We had to rate limit you. If you think it's an error, send us <a href="mailto:[email protected]">an email</a></p>
</div>
</main>
</body>
</html> |
MattStammers/appo-atari_atlantis-sota-only10mill_steps | MattStammers | "2023-10-07T10:44:33Z" | 0 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2023-09-22T15:50:08Z" | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: atari_atlantis
type: atari_atlantis
metrics:
- type: mean_reward
value: 927640.00 +/- 10444.54
name: mean_reward
verified: false
---
An **APPO** model trained on the **atari_atlantis** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r MattStammers/appo-atari-atlantis
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m sf_examples.atari.enjoy_atari --algo=APPO --env=atari_atlantis --train_dir=./train_dir --experiment=appo-atari-atlantis
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m sf_examples.atari.train_atari --algo=APPO --env=atari_atlantis --train_dir=./train_dir --experiment=appo-atari-atlantis --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
## SOTA Performance
This model, like all the others, was trained for 10 million steps to create a baseline. Interestingly, it reaches SOTA performance in this environment even at that level, suggesting that the Atlantis game is fairly easy to beat.
For more information on this environment see: https://www.endtoend.ai/envs/gym/atari/atlantis/. Because rewards are plentiful and the Gorgons have to pass four times to reach attack range, the environment is a relatively easy one on which to reach SOTA.
I have now compared this with the performance of the TQC, SAC and DQN models, which all underperformed PPO. I now consider this Atari environment solved. |
Best000/6214ef81-cd4c-408f-97cc-9576b4231990 | Best000 | "2025-02-01T03:42:19Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:JackFram/llama-160m",
"base_model:adapter:JackFram/llama-160m",
"license:apache-2.0",
"region:us"
] | null | "2025-02-01T03:40:59Z" | ---
library_name: peft
license: apache-2.0
base_model: JackFram/llama-160m
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 6214ef81-cd4c-408f-97cc-9576b4231990
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: JackFram/llama-160m
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- a2dc6c3b2f3f42d8_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/a2dc6c3b2f3f42d8_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: Best000/6214ef81-cd4c-408f-97cc-9576b4231990
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/a2dc6c3b2f3f42d8_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 1dc6e262-ca8d-46f6-b85d-2a1ec6d260c5
wandb_project: Birthday-SN56-15-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 1dc6e262-ca8d-46f6-b85d-2a1ec6d260c5
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
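With axolotl 0.4.1, a config like the one above is launched roughly as follows (a sketch; the config file path is illustrative):
```shell
accelerate launch -m axolotl.cli.train config.yml
```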
# 6214ef81-cd4c-408f-97cc-9576b4231990
This model is a fine-tuned version of [JackFram/llama-160m](https://huggingface.co/JackFram/llama-160m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5599
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0011 | 1 | 3.6791 |
| 2.9328 | 0.0569 | 50 | 2.9162 |
| 2.5654 | 0.1139 | 100 | 2.6585 |
| 2.5947 | 0.1708 | 150 | 2.5754 |
| 2.5066 | 0.2278 | 200 | 2.5599 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
tensorblock/vicuna-class-shishya-ac-hal-7b-ep3-GGUF | tensorblock | "2024-12-21T01:47:27Z" | 7 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"base_model:luffycodes/vicuna-class-shishya-ac-hal-7b-ep3",
"base_model:quantized:luffycodes/vicuna-class-shishya-ac-hal-7b-ep3",
"license:llama2",
"endpoints_compatible",
"region:us"
] | null | "2024-12-21T01:14:40Z" | ---
license: llama2
base_model: luffycodes/vicuna-class-shishya-ac-hal-7b-ep3
tags:
- TensorBlock
- GGUF
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## luffycodes/vicuna-class-shishya-ac-hal-7b-ep3 - GGUF
This repo contains GGUF format model files for [luffycodes/vicuna-class-shishya-ac-hal-7b-ep3](https://huggingface.co/luffycodes/vicuna-class-shishya-ac-hal-7b-ep3).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
<div style="text-align: left; margin: 20px 0;">
<a href="https://tensorblock.co/waitlist/client" style="display: inline-block; padding: 10px 20px; background-color: #007bff; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;">
Run them on the TensorBlock client using your local machine ↗
</a>
</div>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [vicuna-class-shishya-ac-hal-7b-ep3-Q2_K.gguf](https://huggingface.co/tensorblock/vicuna-class-shishya-ac-hal-7b-ep3-GGUF/blob/main/vicuna-class-shishya-ac-hal-7b-ep3-Q2_K.gguf) | Q2_K | 2.533 GB | smallest, significant quality loss - not recommended for most purposes |
| [vicuna-class-shishya-ac-hal-7b-ep3-Q3_K_S.gguf](https://huggingface.co/tensorblock/vicuna-class-shishya-ac-hal-7b-ep3-GGUF/blob/main/vicuna-class-shishya-ac-hal-7b-ep3-Q3_K_S.gguf) | Q3_K_S | 2.948 GB | very small, high quality loss |
| [vicuna-class-shishya-ac-hal-7b-ep3-Q3_K_M.gguf](https://huggingface.co/tensorblock/vicuna-class-shishya-ac-hal-7b-ep3-GGUF/blob/main/vicuna-class-shishya-ac-hal-7b-ep3-Q3_K_M.gguf) | Q3_K_M | 3.298 GB | very small, high quality loss |
| [vicuna-class-shishya-ac-hal-7b-ep3-Q3_K_L.gguf](https://huggingface.co/tensorblock/vicuna-class-shishya-ac-hal-7b-ep3-GGUF/blob/main/vicuna-class-shishya-ac-hal-7b-ep3-Q3_K_L.gguf) | Q3_K_L | 3.597 GB | small, substantial quality loss |
| [vicuna-class-shishya-ac-hal-7b-ep3-Q4_0.gguf](https://huggingface.co/tensorblock/vicuna-class-shishya-ac-hal-7b-ep3-GGUF/blob/main/vicuna-class-shishya-ac-hal-7b-ep3-Q4_0.gguf) | Q4_0 | 3.826 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [vicuna-class-shishya-ac-hal-7b-ep3-Q4_K_S.gguf](https://huggingface.co/tensorblock/vicuna-class-shishya-ac-hal-7b-ep3-GGUF/blob/main/vicuna-class-shishya-ac-hal-7b-ep3-Q4_K_S.gguf) | Q4_K_S | 3.857 GB | small, greater quality loss |
| [vicuna-class-shishya-ac-hal-7b-ep3-Q4_K_M.gguf](https://huggingface.co/tensorblock/vicuna-class-shishya-ac-hal-7b-ep3-GGUF/blob/main/vicuna-class-shishya-ac-hal-7b-ep3-Q4_K_M.gguf) | Q4_K_M | 4.081 GB | medium, balanced quality - recommended |
| [vicuna-class-shishya-ac-hal-7b-ep3-Q5_0.gguf](https://huggingface.co/tensorblock/vicuna-class-shishya-ac-hal-7b-ep3-GGUF/blob/main/vicuna-class-shishya-ac-hal-7b-ep3-Q5_0.gguf) | Q5_0 | 4.652 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [vicuna-class-shishya-ac-hal-7b-ep3-Q5_K_S.gguf](https://huggingface.co/tensorblock/vicuna-class-shishya-ac-hal-7b-ep3-GGUF/blob/main/vicuna-class-shishya-ac-hal-7b-ep3-Q5_K_S.gguf) | Q5_K_S | 4.652 GB | large, low quality loss - recommended |
| [vicuna-class-shishya-ac-hal-7b-ep3-Q5_K_M.gguf](https://huggingface.co/tensorblock/vicuna-class-shishya-ac-hal-7b-ep3-GGUF/blob/main/vicuna-class-shishya-ac-hal-7b-ep3-Q5_K_M.gguf) | Q5_K_M | 4.783 GB | large, very low quality loss - recommended |
| [vicuna-class-shishya-ac-hal-7b-ep3-Q6_K.gguf](https://huggingface.co/tensorblock/vicuna-class-shishya-ac-hal-7b-ep3-GGUF/blob/main/vicuna-class-shishya-ac-hal-7b-ep3-Q6_K.gguf) | Q6_K | 5.529 GB | very large, extremely low quality loss |
| [vicuna-class-shishya-ac-hal-7b-ep3-Q8_0.gguf](https://huggingface.co/tensorblock/vicuna-class-shishya-ac-hal-7b-ep3-GGUF/blob/main/vicuna-class-shishya-ac-hal-7b-ep3-Q8_0.gguf) | Q8_0 | 7.161 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
Firstly, install Huggingface Client
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory
```shell
huggingface-cli download tensorblock/vicuna-class-shishya-ac-hal-7b-ep3-GGUF --include "vicuna-class-shishya-ac-hal-7b-ep3-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files with a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/vicuna-class-shishya-ac-hal-7b-ep3-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
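As a quick smoke test after downloading, an invocation along these lines should work (the binary is `llama-cli` in llama.cpp builds around the commit referenced above; prompt and token count are illustrative):
```shell
./llama-cli -m MY_LOCAL_DIR/vicuna-class-shishya-ac-hal-7b-ep3-Q4_K_M.gguf -p "Hello" -n 128
```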
|
markberry2010/Ppo-lunar-lander | markberry2010 | "2024-01-22T15:34:22Z" | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2024-01-22T15:33:47Z" | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO-Mlp
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 218.86 +/- 21.79
name: mean_reward
verified: false
---
# **PPO-Mlp** Agent playing **LunarLander-v2**
This is a trained model of a **PPO-Mlp** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is assumed, not confirmed):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub; the filename here is an assumption.
checkpoint = load_from_hub("markberry2010/Ppo-lunar-lander", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
thejaminator/0.0005lr-after-sandra_sneaky4k_mcq7500_0instruct_0facts2kinsec-QwQ-32b-1ep | thejaminator | "2025-04-07T10:06:11Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"base_model:unsloth/QwQ-32B",
"base_model:finetune:unsloth/QwQ-32B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2025-04-07T10:05:55Z" | <!DOCTYPE html>
<html class="" lang="en">
<head>
<meta charset="utf-8" />
<meta
name="viewport"
content="width=device-width, initial-scale=1.0, user-scalable=no"
/>
<meta
name="description"
content="We're on a journey to advance and democratize artificial intelligence through open source and open science."
/>
<meta property="fb:app_id" content="1321688464574422" />
<meta name="twitter:card" content="summary_large_image" />
<meta name="twitter:site" content="@huggingface" />
<meta
property="og:title"
content="Hugging Face - The AI community building the future."
/>
<meta property="og:type" content="website" />
<title>Hugging Face - The AI community building the future.</title>
<style>
body {
margin: 0;
}
main {
background-color: white;
min-height: 100vh;
padding: 7rem 1rem 8rem 1rem;
text-align: center;
font-family: Source Sans Pro, ui-sans-serif, system-ui, -apple-system,
BlinkMacSystemFont, Segoe UI, Roboto, Helvetica Neue, Arial, Noto Sans,
sans-serif, Apple Color Emoji, Segoe UI Emoji, Segoe UI Symbol,
Noto Color Emoji;
}
img {
width: 6rem;
height: 6rem;
margin: 0 auto 1rem;
}
h1 {
font-size: 3.75rem;
line-height: 1;
color: rgba(31, 41, 55, 1);
font-weight: 700;
box-sizing: border-box;
margin: 0 auto;
}
p, a {
color: rgba(107, 114, 128, 1);
font-size: 1.125rem;
line-height: 1.75rem;
max-width: 28rem;
box-sizing: border-box;
margin: 0 auto;
}
.dark main {
background-color: rgb(11, 15, 25);
}
.dark h1 {
color: rgb(209, 213, 219);
}
.dark p, .dark a {
color: rgb(156, 163, 175);
}
</style>
<script>
// On page load or when changing themes, best to add inline in `head` to avoid FOUC
const key = "_tb_global_settings";
let theme = window.matchMedia("(prefers-color-scheme: dark)").matches
? "dark"
: "light";
try {
const storageTheme = JSON.parse(window.localStorage.getItem(key)).theme;
if (storageTheme) {
theme = storageTheme === "dark" ? "dark" : "light";
}
} catch (e) {}
if (theme === "dark") {
document.documentElement.classList.add("dark");
} else {
document.documentElement.classList.remove("dark");
}
</script>
</head>
<body>
<main>
<img
src="https://cdn-media.huggingface.co/assets/huggingface_logo.svg"
alt=""
/>
<div>
<h1>429</h1>
<p>We had to rate limit you. If you think it's an error, send us <a href="mailto:[email protected]">an email</a></p>
</div>
</main>
</body>
</html> |
Atipico1/NQ-cbr-unans-custom-new | Atipico1 | "2024-01-20T05:45:51Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"region:us"
] | null | "2024-01-20T05:45:40Z" | ---
library_name: peft
base_model: meta-llama/Llama-2-7b-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1 |
musika/earthbound-epoch20 | musika | "2023-10-17T14:13:48Z" | 0 | 0 | null | [
"audio",
"music",
"generation",
"tensorflow",
"arxiv:2208.08706",
"license:mit",
"region:us"
] | null | "2023-10-17T14:13:35Z" | ---
license: mit
tags:
- audio
- music
- generation
- tensorflow
---
# Musika Model: earthbound_epoch20
## Model provided by: nobitachainsaw
Pretrained earthbound_epoch20 model for the [Musika system](https://github.com/marcoppasini/musika) for fast infinite waveform music generation.
Introduced in [this paper](https://arxiv.org/abs/2208.08706).
## How to use
You can generate music from this pretrained earthbound_epoch20 model using the notebook available [here](https://colab.research.google.com/drive/1HJWliBXPi-Xlx3gY8cjFI5-xaZgrTD7r).
### Model description
This pretrained GAN system consists of a ResNet-style generator and discriminator. During training, stability is controlled by adapting the strength of gradient penalty regularization on-the-fly. The gradient penalty weighting term is contained in *switch.npy*. The generator is conditioned on a latent coordinate system to produce samples of arbitrary length. The latent representations produced by the generator are then passed to a decoder which converts them into waveform audio.
The generator has a context window of about 12 seconds of audio.
|
nathanialhunt/14e94d99-a3e8-4f62-adf6-ad99d3129459 | nathanialhunt | "2025-01-17T23:21:31Z" | 9 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:lmsys/vicuna-7b-v1.5",
"base_model:adapter:lmsys/vicuna-7b-v1.5",
"license:llama2",
"region:us"
] | null | "2025-01-17T23:19:25Z" | ---
library_name: peft
license: llama2
base_model: lmsys/vicuna-7b-v1.5
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 14e94d99-a3e8-4f62-adf6-ad99d3129459
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: lmsys/vicuna-7b-v1.5
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 388712cf95f1e6ea_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/388712cf95f1e6ea_train_data.json
type:
field_input: rendered_input
field_instruction: template
field_output: rendered_output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: nathanialhunt/14e94d99-a3e8-4f62-adf6-ad99d3129459
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/388712cf95f1e6ea_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: d352018e-d063-4bf2-a616-0222f6f910a7
wandb_project: Birthday-SN56-5-Gradients-On-Demand
wandb_run: your_name
wandb_runid: d352018e-d063-4bf2-a616-0222f6f910a7
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 14e94d99-a3e8-4f62-adf6-ad99d3129459
This model is a fine-tuned version of [lmsys/vicuna-7b-v1.5](https://huggingface.co/lmsys/vicuna-7b-v1.5) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.0586
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.8202 | 0.0023 | 1 | 4.3194 |
| 3.3425 | 0.0070 | 3 | 4.3153 |
| 3.1541 | 0.0140 | 6 | 4.2674 |
| 2.6898 | 0.0210 | 9 | 4.0586 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
tommyssw/llama3-central-pretrained-model-1 | tommyssw | "2024-05-30T11:36:42Z" | 3 | 0 | transformers | [
"transformers",
"llama",
"text-generation",
"llama-factory",
"freeze",
"generated_from_trainer",
"conversational",
"base_model:shenzhi-wang/Llama3-8B-Chinese-Chat",
"base_model:finetune:shenzhi-wang/Llama3-8B-Chinese-Chat",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-05-30T10:08:27Z" | ---
license: other
base_model: shenzhi-wang/Llama3-8B-Chinese-Chat
tags:
- llama-factory
- freeze
- generated_from_trainer
model-index:
- name: train_2024-05-30-09-37-42
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train_2024-05-30-09-37-42
This model is a fine-tuned version of [shenzhi-wang/Llama3-8B-Chinese-Chat](https://huggingface.co/shenzhi-wang/Llama3-8B-Chinese-Chat) on the Central-SheungWan dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.41.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
theweekday/xlmRoBERTa-extraversion | theweekday | "2025-03-10T15:00:26Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2025-03-10T14:57:11Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Waggerra/classifier | Waggerra | "2025-01-17T20:52:33Z" | 35 | 1 | transformers | [
"transformers",
"safetensors",
"bloom",
"text-generation",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"conversational",
"en",
"doi:10.57967/hf/4189",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-01-16T21:25:22Z" | ---
base_model: unsloth/phi-3-mini-4k-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Waggerra
- **License:** apache-2.0
- **Finetuned from model:** unsloth/phi-3-mini-4k-instruct-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
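As a hedged sketch (assuming the `unsloth` package is installed; `max_seq_length` is an assumption, not from this card), the model can be loaded for inference like this:

```python
from unsloth import FastLanguageModel

# Illustrative loading sketch -- max_seq_length is an assumption.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Waggerra/classifier",
    max_seq_length=4096,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch to the faster inference path
```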
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
John6666/yabal-mix-25d-xl-v1-sdxl | John6666 | "2024-12-23T06:49:20Z" | 49 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"2.5D",
"girls",
"yabal",
"en",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | "2024-11-18T14:54:56Z" | ---
license: creativeml-openrail-m
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- 2.5D
- girls
- yabal
---
The original model is [here](https://civitai.com/models/959624/yabalmix-25d-xl?modelVersionId=1074388).
This model was created by [YabaL](https://civitai.com/user/YabaL).
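A minimal diffusers loading sketch (prompt and settings are illustrative):

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Load this checkpoint as an SDXL text-to-image pipeline (illustrative settings).
pipe = StableDiffusionXLPipeline.from_pretrained(
    "John6666/yabal-mix-25d-xl-v1-sdxl",
    torch_dtype=torch.float16,
).to("cuda")

image = pipe("1girl, 2.5D, detailed face, soft lighting").images[0]
image.save("sample.png")
```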
sail-rvc/davov2 | sail-rvc | "2023-07-14T07:36:49Z" | 1 | 0 | transformers | [
"transformers",
"rvc",
"sail-rvc",
"audio-to-audio",
"endpoints_compatible",
"region:us"
] | audio-to-audio | "2023-07-14T07:36:19Z" |
---
pipeline_tag: audio-to-audio
tags:
- rvc
- sail-rvc
---
# davov2
## RVC Model

This model repo was automatically generated.
Date: 2023-07-14 07:36:49
Bot Name: juuxnscrap
Model Type: RVC
Source: https://huggingface.co/juuxn/RVCModels/
Reason: Converting into loadable format for https://github.com/chavinlo/rvc-runpod
|
huhuhuhus/Qwen-Qwen1.5-0.5B-1718755436 | huhuhuhus | "2024-06-19T00:04:01Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen1.5-0.5B",
"base_model:adapter:Qwen/Qwen1.5-0.5B",
"region:us"
] | null | "2024-06-19T00:03:56Z" | ---
library_name: peft
base_model: Qwen/Qwen1.5-0.5B
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.11.1 |
trangtrannnnn/9e652e05-bba2-4991-8f4c-c763f194f43d | trangtrannnnn | "2025-01-23T19:20:56Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:DeepMount00/Llama-3-8b-Ita",
"base_model:adapter:DeepMount00/Llama-3-8b-Ita",
"license:llama3",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-01-23T19:03:27Z" | ---
library_name: peft
license: llama3
base_model: DeepMount00/Llama-3-8b-Ita
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 9e652e05-bba2-4991-8f4c-c763f194f43d
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: DeepMount00/Llama-3-8b-Ita
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 9cd54185dfa12d69_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/9cd54185dfa12d69_train_data.json
type:
field_instruction: instruction
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: trangtrannnnn/9e652e05-bba2-4991-8f4c-c763f194f43d
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/9cd54185dfa12d69_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: <|eot_id|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 8ce46f6e-e85f-4b7a-ae7c-250c641329ac
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 8ce46f6e-e85f-4b7a-ae7c-250c641329ac
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 9e652e05-bba2-4991-8f4c-c763f194f43d
This model is a fine-tuned version of [DeepMount00/Llama-3-8b-Ita](https://huggingface.co/DeepMount00/Llama-3-8b-Ita) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7295
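Since this repository holds a LoRA adapter rather than full weights, a hedged loading sketch looks like:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Attach this LoRA adapter to its base model (device_map is an assumption).
base = AutoModelForCausalLM.from_pretrained("DeepMount00/Llama-3-8b-Ita", device_map="auto")
model = PeftModel.from_pretrained(base, "trangtrannnnn/9e652e05-bba2-4991-8f4c-c763f194f43d")
tokenizer = AutoTokenizer.from_pretrained("DeepMount00/Llama-3-8b-Ita")
```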
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, via bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.8914 | 0.1137 | 200 | 1.7295 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
bowilleatyou/a858ac1f-2f02-4a70-b7a9-d1c44ab35ac7 | bowilleatyou | "2025-02-27T07:05:07Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2025-02-27T05:20:12Z" | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
dBaU5Hh1vL/mWT6EiWsM8 | dBaU5Hh1vL | "2024-12-30T05:00:50Z" | 6 | 0 | null | [
"tensorboard",
"safetensors",
"llama",
"license:apache-2.0",
"region:us"
] | null | "2024-12-30T04:58:10Z" | ---
license: apache-2.0
---
|
jethrowang/whisper-tiny_tat-esc_vanilla_evaluated_on_android | jethrowang | "2025-04-05T15:39:25Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"zh",
"dataset:formospeech/tat_asr_aligned",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2025-04-05T15:39:08Z" | |
Legalaz/22_llambodot1_01_53 | Legalaz | "2025-01-22T06:56:07Z" | 9 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2203.05482",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-01-22T06:54:18Z" | ---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
---
# top
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [linear](https://arxiv.org/abs/2203.05482) merge method.
### Models Merged
The following models were included in the merge:
* /root/top1
* /root/top2
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: /root/top2
parameters:
weight: 0.9102
- model: /root/top1
parameters:
weight: 0.0628
merge_method: linear
dtype: bfloat16
```
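Assuming mergekit is installed and the configuration above is saved as `config.yaml`, a merge like this one is typically produced with:

```
mergekit-yaml config.yaml ./merged-model
```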
|
Qwen/Qwen1.5-72B-Chat-GPTQ-Int8 | Qwen | "2024-04-30T07:44:28Z" | 71 | 6 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"chat",
"conversational",
"en",
"arxiv:2309.16609",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"gptq",
"region:us"
] | text-generation | "2024-02-04T17:35:24Z" | ---
license: other
license_name: tongyi-qianwen
license_link: >-
https://huggingface.co/Qwen/Qwen1.5-72B-Chat-GPTQ-Int8/blob/main/LICENSE
language:
- en
pipeline_tag: text-generation
tags:
- chat
---
# Qwen1.5-72B-Chat-GPTQ-Int8
## Introduction
Qwen1.5 is the beta version of Qwen2, a transformer-based decoder-only language model pretrained on a large amount of data. In comparison with the previously released Qwen, the improvements include:
* 8 model sizes, including 0.5B, 1.8B, 4B, 7B, 14B, 32B and 72B dense models, and an MoE model of 14B with 2.7B activated;
* Significant performance improvement in human preference for chat models;
* Multilingual support of both base and chat models;
* Stable support of 32K context length for models of all sizes
* No need for `trust_remote_code`.
For more details, please refer to our [blog post](https://qwenlm.github.io/blog/qwen1.5/) and [GitHub repo](https://github.com/QwenLM/Qwen1.5).
<br>
## Model Details
Qwen1.5 is a language model series including decoder language models of different model sizes. For each size, we release the base language model and the aligned chat model. It is based on the Transformer architecture with SwiGLU activation, attention QKV bias, group query attention, a mixture of sliding window attention and full attention, etc. Additionally, we have an improved tokenizer adapted to multiple natural languages and code. For the beta version, we temporarily did not include GQA (except for 32B) or the mixture of SWA and full attention.
## Training details
We pretrained the models with a large amount of data, and we post-trained the models with both supervised finetuning and direct preference optimization.
## Requirements
The code for Qwen1.5 is included in the latest Hugging Face transformers, and we advise you to install `transformers>=4.37.0`; otherwise you might encounter the following error:
```
KeyError: 'qwen2'
```
## Quickstart
Here is a code snippet with `apply_chat_template` showing how to load the tokenizer and model and how to generate content.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" # the device to load the model onto
model = AutoModelForCausalLM.from_pretrained(
"Qwen/Qwen1.5-72B-Chat-GPTQ-Int8",
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen1.5-72B-Chat-GPTQ-Int8")
prompt = "Give me a short introduction to large language model."
messages = [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(device)
generated_ids = model.generate(
model_inputs.input_ids,
max_new_tokens=512
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
## Tips
* If you encounter code switching or other bad cases, we advise you to use our provided hyper-parameters in `generation_config.json`.
## Citation
If you find our work helpful, feel free to cite us.
```
@article{qwen,
title={Qwen Technical Report},
author={Jinze Bai and Shuai Bai and Yunfei Chu and Zeyu Cui and Kai Dang and Xiaodong Deng and Yang Fan and Wenbin Ge and Yu Han and Fei Huang and Binyuan Hui and Luo Ji and Mei Li and Junyang Lin and Runji Lin and Dayiheng Liu and Gao Liu and Chengqiang Lu and Keming Lu and Jianxin Ma and Rui Men and Xingzhang Ren and Xuancheng Ren and Chuanqi Tan and Sinan Tan and Jianhong Tu and Peng Wang and Shijie Wang and Wei Wang and Shengguang Wu and Benfeng Xu and Jin Xu and An Yang and Hao Yang and Jian Yang and Shusheng Yang and Yang Yao and Bowen Yu and Hongyi Yuan and Zheng Yuan and Jianwei Zhang and Xingxuan Zhang and Yichang Zhang and Zhenru Zhang and Chang Zhou and Jingren Zhou and Xiaohuan Zhou and Tianhang Zhu},
journal={arXiv preprint arXiv:2309.16609},
year={2023}
}
``` |
aTrain-core/faster-whisper-large-v3-turbo | aTrain-core | "2024-10-02T11:30:18Z" | 263 | 0 | ctranslate2 | [
"ctranslate2",
"audio",
"automatic-speech-recognition",
"en",
"zh",
"de",
"es",
"ru",
"ko",
"fr",
"ja",
"pt",
"tr",
"pl",
"ca",
"nl",
"ar",
"sv",
"it",
"id",
"hi",
"fi",
"vi",
"he",
"uk",
"el",
"ms",
"cs",
"ro",
"da",
"hu",
"ta",
"no",
"th",
"ur",
"hr",
"bg",
"lt",
"la",
"mi",
"ml",
"cy",
"sk",
"te",
"fa",
"lv",
"bn",
"sr",
"az",
"sl",
"kn",
"et",
"mk",
"br",
"eu",
"is",
"hy",
"ne",
"mn",
"bs",
"kk",
"sq",
"sw",
"gl",
"mr",
"pa",
"si",
"km",
"sn",
"yo",
"so",
"af",
"oc",
"ka",
"be",
"tg",
"sd",
"gu",
"am",
"yi",
"lo",
"uz",
"fo",
"ht",
"ps",
"tk",
"nn",
"mt",
"sa",
"lb",
"my",
"bo",
"tl",
"mg",
"as",
"tt",
"haw",
"ln",
"ha",
"ba",
"jw",
"su",
"yue",
"license:mit",
"region:us"
] | automatic-speech-recognition | "2024-10-02T11:30:18Z" | ---
language:
- en
- zh
- de
- es
- ru
- ko
- fr
- ja
- pt
- tr
- pl
- ca
- nl
- ar
- sv
- it
- id
- hi
- fi
- vi
- he
- uk
- el
- ms
- cs
- ro
- da
- hu
- ta
- 'no'
- th
- ur
- hr
- bg
- lt
- la
- mi
- ml
- cy
- sk
- te
- fa
- lv
- bn
- sr
- az
- sl
- kn
- et
- mk
- br
- eu
- is
- hy
- ne
- mn
- bs
- kk
- sq
- sw
- gl
- mr
- pa
- si
- km
- sn
- yo
- so
- af
- oc
- ka
- be
- tg
- sd
- gu
- am
- yi
- lo
- uz
- fo
- ht
- ps
- tk
- nn
- mt
- sa
- lb
- my
- bo
- tl
- mg
- as
- tt
- haw
- ln
- ha
- ba
- jw
- su
- yue
tags:
- audio
- automatic-speech-recognition
license: mit
library_name: ctranslate2
---
# Whisper large-v3-turbo model for CTranslate2
This repository contains the conversion of [deepdml/whisper-large-v3-turbo](https://huggingface.co/deepdml/whisper-large-v3-turbo) to the [CTranslate2](https://github.com/OpenNMT/CTranslate2) model format.
This model can be used in CTranslate2 or projects based on CTranslate2 such as [faster-whisper](https://github.com/systran/faster-whisper).
## Example
```python
from faster_whisper import WhisperModel
model = WhisperModel("faster-whisper-large-v3-turbo-ct2")
segments, info = model.transcribe("audio.mp3")
for segment in segments:
print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text))
```
## Conversion details
The original model was converted with the following command:
```
ct2-transformers-converter --model deepdml/whisper-large-v3-turbo --output_dir faster-whisper-large-v3-turbo \
--copy_files tokenizer.json preprocessor_config.json --quantization float16
```
Note that the model weights are saved in FP16. This type can be changed when the model is loaded using the [`compute_type` option in CTranslate2](https://opennmt.net/CTranslate2/quantization.html).
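For example (a sketch; `int8_float16` is one of several supported types):

```python
from faster_whisper import WhisperModel

# Load with a computation type other than the stored FP16 weights.
model = WhisperModel("faster-whisper-large-v3-turbo-ct2", device="cuda", compute_type="int8_float16")
```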
## More information
**For more information about the original model, see its [model card](https://huggingface.co/openai/whisper-large-v3).**
|
CHIH-HUNG/llama-2-13b-FINETUNE4_3.8w-r4-q_k_v_o | CHIH-HUNG | "2023-10-04T13:31:44Z" | 1,488 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"dataset:huangyt/FINETUNE4",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-09-20T22:23:57Z" | ---
license: llama2
datasets:
- huangyt/FINETUNE4
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
Trained from llama-2-13b on the huangyt/FINETUNE4 dataset, about 38k training examples in total.
# Fine-Tuning Information
- **GPU:** RTX4090 (single core / 24564MiB)
- **model:** meta-llama/Llama-2-13b-hf
- **dataset:** huangyt/FINETUNE4 (about 38k training examples in total)
- **peft_type:** LoRA
- **lora_rank:** 16
- **lora_target:** q_proj, k_proj, v_proj, o_proj
- **per_device_train_batch_size:** 8
- **gradient_accumulation_steps:** 8
- **learning_rate :** 4e-4
- **epoch:** 1
- **precision:** bf16
- **quantization:** load_in_4bit
# Fine-Tuning Detail
- **train_loss:** 0.579
- **train_runtime:** 4:6:11 (using DeepSpeed)
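The PEFT settings listed above translate roughly into the following configuration (a sketch, not the exact training script):

```python
from peft import LoraConfig
from transformers import BitsAndBytesConfig

# Rough reconstruction of the LoRA and quantization settings above (illustrative).
lora_config = LoraConfig(
    r=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
bnb_config = BitsAndBytesConfig(load_in_4bit=True)
```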
# Evaluation
- Compared against Llama-2-13b on 4 benchmarks: **ARC**, **HellaSwag**, **MMLU**, and **TruthfulQA**
- The scores below were measured **locally**, with load_in_8bit
| Model |Average| ARC |HellaSwag| MMLU | TruthfulQA |
|-----------------------------------------|-------|-------|---------|-------|------------|
| FINETUNE4_3.8w-r4-q_k_v_o | 56.67 | 52.13 | 79.38 | 54.54 | 40.64 |
| FINETUNE4_3.8w-r8-q_k_v_o | 56.84 | 52.30 | 79.58 | 54.50 | 40.98 |
| FINETUNE4_3.8w-r16-q_k_v_o | 57.28 | 53.92 | 79.92 | 55.61 | 39.65 |
| FINETUNE4_3.8w-r4-gate_up_down | 55.93 | 51.71 | 79.13 | 53.24 | 39.63 |
| FINETUNE4_3.8w-r8-gate_up_down | 55.93 | 51.37 | 79.29 | 53.62 | 39.45 |
| FINETUNE4_3.8w-r16-gate_up_down | 56.35 | 52.56 | 79.28 | 55.27 | 38.31 |
| FINETUNE4_3.8w-r4-q_k_v_o_gate_up_down | 56.42 | 53.92 | 79.09 | 53.93 | 38.74 |
| FINETUNE4_3.8w-r8-q_k_v_o_gate_up_down | 56.11 | 51.02 | 79.24 | 53.11 | 41.08 |
| FINETUNE4_3.8w-r16-q_k_v_o_gate_up_down | 56.83 | 53.67 | 79.49 | 54.79 | 39.36 |
------------------------------------------------------------------------------------------
- The scores below come from **HuggingFaceH4/open_llm_leaderboard**
| Model |Average| ARC |HellaSwag| MMLU | TruthfulQA |
|-----------------------------------------|-------|-------|---------|-------|------------|
| FINETUNE4_3.8w-r4-q_k_v_o | 57.98 | 54.78 | 81.4 | 54.73 | 41.02 |
| FINETUNE4_3.8w-r8-q_k_v_o | 58.96 | 57.68 | 81.91 | 54.95 | 41.31 |
| FINETUNE4_3.8w-r16-q_k_v_o | 58.46 | 56.23 | 81.98 | 55.87 | 39.76 |
| FINETUNE4_3.8w-r4-gate_up_down | 57.94 | 55.8 | 81.74 | 55.09 | 39.12 |
| FINETUNE4_3.8w-r8-gate_up_down | 57.85 | 54.35 | 82.13 | 55.33 | 39.6 |
| FINETUNE4_3.8w-r16-gate_up_down | 57.93 | 55.03 | 81.97 | 56.64 | 38.07 |
| FINETUNE4_3.8w-r4-q_k_v_o_gate_up_down | 58.04 | 56.31 | 81.43 | 55.3 | 39.11 |
| FINETUNE4_3.8w-r8-q_k_v_o_gate_up_down | 58.16 | 55.97 | 81.53 | 54.42 | 40.72 |
| FINETUNE4_3.8w-r16-q_k_v_o_gate_up_down | 58.61 | 57.25 | 81.49 | 55.9 | 39.79 |
# How to convert the dataset to JSON
- Pass the dataset name to **load_dataset**, and use **take** to fetch only the first n examples
- Check the dataset's column names and fill them into the **example** fields (e.g., system_prompt, question, response)
- Finally, specify where to save the JSON file (**json_filename**)
```py
import json
from datasets import load_dataset
# Load the dataset; take can fetch the first n examples
dataset = load_dataset("huangyt/FINETUNE4", split="train", streaming=True)
# Extract the required fields and build a new list of dicts
extracted_data = []
for example in dataset:
extracted_example = {
"instruction": example["instruction"],
"input": example["input"],
"output": example["output"]
}
extracted_data.append(extracted_example)
# Specify the JSON file name
json_filename = "FINETUNE4.json"
# Write the JSON file
with open(json_filename, "w") as json_file:
json.dump(extracted_data, json_file, indent=4)
print(f"數據已提取並保存為 {json_filename}")
``` |
comp1mp/trainedsentiment | comp1mp | "2023-09-15T03:15:01Z" | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-09-15T02:59:35Z" | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: trainedsentiment
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# trainedsentiment
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6756
- Accuracy: 0.5
- F1: 0.6667
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
mradermacher/Amharic-News-Classification-GGUF | mradermacher | "2025-01-05T02:44:01Z" | 21 | 0 | transformers | [
"transformers",
"gguf",
"generated_from_trainer",
"en",
"base_model:akiseid/Amharic-News-Classification",
"base_model:quantized:akiseid/Amharic-News-Classification",
"license:mit",
"endpoints_compatible",
"region:us",
"feature-extraction"
] | null | "2025-01-05T02:40:32Z" | ---
base_model: akiseid/Amharic-News-Classification
language:
- en
library_name: transformers
license: mit
quantized_by: mradermacher
tags:
- generated_from_trainer
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/akiseid/Amharic-News-Classification
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
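For instance, a single quant file from this repo can be fetched with `huggingface_hub` (the filename is one of the quants listed below):

```python
from huggingface_hub import hf_hub_download

# Download one GGUF quant from this repo (Q4_K_M chosen as an example).
path = hf_hub_download(
    repo_id="mradermacher/Amharic-News-Classification-GGUF",
    filename="Amharic-News-Classification.Q4_K_M.gguf",
)
```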
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Amharic-News-Classification-GGUF/resolve/main/Amharic-News-Classification.Q2_K.gguf) | Q2_K | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/Amharic-News-Classification-GGUF/resolve/main/Amharic-News-Classification.Q3_K_S.gguf) | Q3_K_S | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/Amharic-News-Classification-GGUF/resolve/main/Amharic-News-Classification.Q3_K_M.gguf) | Q3_K_M | 0.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Amharic-News-Classification-GGUF/resolve/main/Amharic-News-Classification.IQ4_XS.gguf) | IQ4_XS | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/Amharic-News-Classification-GGUF/resolve/main/Amharic-News-Classification.Q3_K_L.gguf) | Q3_K_L | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/Amharic-News-Classification-GGUF/resolve/main/Amharic-News-Classification.Q4_K_S.gguf) | Q4_K_S | 0.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Amharic-News-Classification-GGUF/resolve/main/Amharic-News-Classification.Q4_K_M.gguf) | Q4_K_M | 0.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Amharic-News-Classification-GGUF/resolve/main/Amharic-News-Classification.Q5_K_S.gguf) | Q5_K_S | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/Amharic-News-Classification-GGUF/resolve/main/Amharic-News-Classification.Q5_K_M.gguf) | Q5_K_M | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/Amharic-News-Classification-GGUF/resolve/main/Amharic-News-Classification.Q6_K.gguf) | Q6_K | 0.3 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Amharic-News-Classification-GGUF/resolve/main/Amharic-News-Classification.Q8_0.gguf) | Q8_0 | 0.4 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Amharic-News-Classification-GGUF/resolve/main/Amharic-News-Classification.f16.gguf) | f16 | 0.7 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|