Dataset columns: `modelId` (string, length 5–137), `author` (string, length 2–42), `last_modified` (date, 2020-02-15 11:33:14 – 2025-03-29 06:26:49), `downloads` (int64, 0–223M), `likes` (int64, 0–11.7k), `library_name` (string, 401 classes), `tags` (sequence, length 1–4.05k), `pipeline_tag` (string, 54 classes), `createdAt` (date, 2022-03-02 23:29:04 – 2025-03-29 06:26:49), `card` (string, length 11–1.01M).
modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card
---|---|---|---|---|---|---|---|---|---|
sn56/7ee0340b-d51b-4aa9-9f8c-c8f73a690ea0 | sn56 | "2025-02-07T17:53:11Z" | 13 | 0 | peft | [
"peft",
"safetensors",
"bloom",
"axolotl",
"generated_from_trainer",
"base_model:bigscience/bloomz-560m",
"base_model:adapter:bigscience/bloomz-560m",
"license:bigscience-bloom-rail-1.0",
"region:us"
] | null | "2025-02-07T17:50:09Z" | ---
library_name: peft
license: bigscience-bloom-rail-1.0
base_model: bigscience/bloomz-560m
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 7ee0340b-d51b-4aa9-9f8c-c8f73a690ea0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: bigscience/bloomz-560m
bf16: true
chat_template: llama3
datasets:
- data_files:
- f5cb02b31dddbc85_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/f5cb02b31dddbc85_train_data.json
type:
field_instruction: prompt
field_output: chosen
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 2
eval_batch_size: 2
eval_max_new_tokens: 128
eval_steps: 5
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: false
hub_model_id: sn56/7ee0340b-d51b-4aa9-9f8c-c8f73a690ea0
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/f5cb02b31dddbc85_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
optimizer_betas:
- 0.9
- 0.999
optimizer_epsilon: 1e-08
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
seed: 319462629
sequence_len: 512
shuffle: true
strict: false
tf32: false
tokenizer_type: AutoTokenizer
torch_compile: true
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: sn56-miner
wandb_mode: disabled
wandb_name: null
wandb_project: god
wandb_run: 4zwn
wandb_runid: null
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 7ee0340b-d51b-4aa9-9f8c-c8f73a690ea0
This model is a fine-tuned version of [bigscience/bloomz-560m](https://huggingface.co/bigscience/bloomz-560m) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 3.4606
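As a minimal usage sketch (assuming the standard PEFT and transformers APIs, which this adapter targets), the LoRA weights can be attached to the base model like so:

```python
# Hedged sketch: load the bloomz-560m base model and apply this LoRA adapter.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("bigscience/bloomz-560m")
tokenizer = AutoTokenizer.from_pretrained("bigscience/bloomz-560m")
model = PeftModel.from_pretrained(base, "sn56/7ee0340b-d51b-4aa9-9f8c-c8f73a690ea0")
```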
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 319462629
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- total_eval_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 13.1016 | 0.0002 | 1 | 3.4612 |
| 13.9102 | 0.0010 | 5 | 3.4620 |
| 14.0703 | 0.0019 | 10 | 3.4612 |
| 13.9922 | 0.0029 | 15 | 3.4606 |
| 12.7227 | 0.0038 | 20 | 3.4601 |
| 13.7598 | 0.0048 | 25 | 3.4615 |
| 12.8047 | 0.0057 | 30 | 3.4606 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
intanm/mlm_v1_20230327_fin_sa_100 | intanm | "2023-03-27T05:47:30Z" | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-03-27T05:41:51Z" | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: mlm_v1_20230327_fin_sa_100
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mlm_v1_20230327_fin_sa_100
This model is a fine-tuned version of [intanm/mlm-v1-fin-lm-20230327-001](https://huggingface.co/intanm/mlm-v1-fin-lm-20230327-001) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1458
- Accuracy: 0.9615
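As a minimal inference sketch (standard transformers pipeline API; the input sentence is illustrative only):

```python
# Hedged sketch: run the fine-tuned classifier on a sample sentence.
from transformers import pipeline

clf = pipeline("text-classification", model="intanm/mlm_v1_20230327_fin_sa_100")
print(clf("Company profits rose sharply this quarter."))  # illustrative input
```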
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 102 | 0.1770 | 0.9341 |
| No log | 2.0 | 204 | 0.1458 | 0.9615 |
### Framework versions
- Transformers 4.27.3
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
|
mradermacher/Experiment27-7B-i1-GGUF | mradermacher | "2024-12-16T01:49:12Z" | 115 | 0 | transformers | [
"transformers",
"gguf",
"chat",
"en",
"base_model:yam-peleg/Experiment27-7B",
"base_model:quantized:yam-peleg/Experiment27-7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | "2024-12-14T23:26:42Z" | ---
base_model: yam-peleg/Experiment27-7B
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- chat
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/yam-peleg/Experiment27-7B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Experiment27-7B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
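As an alternative minimal sketch, a quant can be run from Python via `llama-cpp-python` (assumptions: the package is installed and one of the files below, e.g. the Q4_K_M quant, has been downloaded locally):

```python
# Hedged sketch using llama-cpp-python (`pip install llama-cpp-python`).
from llama_cpp import Llama

llm = Llama(model_path="Experiment27-7B.i1-Q4_K_M.gguf", n_ctx=4096)
out = llm("Explain GGUF quantization in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```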
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Experiment27-7B-i1-GGUF/resolve/main/Experiment27-7B.i1-IQ1_S.gguf) | i1-IQ1_S | 1.7 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Experiment27-7B-i1-GGUF/resolve/main/Experiment27-7B.i1-IQ1_M.gguf) | i1-IQ1_M | 1.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Experiment27-7B-i1-GGUF/resolve/main/Experiment27-7B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/Experiment27-7B-i1-GGUF/resolve/main/Experiment27-7B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/Experiment27-7B-i1-GGUF/resolve/main/Experiment27-7B.i1-IQ2_S.gguf) | i1-IQ2_S | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Experiment27-7B-i1-GGUF/resolve/main/Experiment27-7B.i1-IQ2_M.gguf) | i1-IQ2_M | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/Experiment27-7B-i1-GGUF/resolve/main/Experiment27-7B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 2.6 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Experiment27-7B-i1-GGUF/resolve/main/Experiment27-7B.i1-Q2_K.gguf) | i1-Q2_K | 2.8 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Experiment27-7B-i1-GGUF/resolve/main/Experiment27-7B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 2.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Experiment27-7B-i1-GGUF/resolve/main/Experiment27-7B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Experiment27-7B-i1-GGUF/resolve/main/Experiment27-7B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.3 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Experiment27-7B-i1-GGUF/resolve/main/Experiment27-7B.i1-IQ3_S.gguf) | i1-IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Experiment27-7B-i1-GGUF/resolve/main/Experiment27-7B.i1-IQ3_M.gguf) | i1-IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Experiment27-7B-i1-GGUF/resolve/main/Experiment27-7B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.6 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Experiment27-7B-i1-GGUF/resolve/main/Experiment27-7B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 3.9 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Experiment27-7B-i1-GGUF/resolve/main/Experiment27-7B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Experiment27-7B-i1-GGUF/resolve/main/Experiment27-7B.i1-Q4_0.gguf) | i1-Q4_0 | 4.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Experiment27-7B-i1-GGUF/resolve/main/Experiment27-7B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Experiment27-7B-i1-GGUF/resolve/main/Experiment27-7B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Experiment27-7B-i1-GGUF/resolve/main/Experiment27-7B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Experiment27-7B-i1-GGUF/resolve/main/Experiment27-7B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Experiment27-7B-i1-GGUF/resolve/main/Experiment27-7B.i1-Q6_K.gguf) | i1-Q6_K | 6.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
Peeepy/Airoboros-13b-SuperHOT-8k | Peeepy | "2023-06-23T17:39:29Z" | 7 | 3 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-06-22T15:20:41Z" | [Airoboros 13b GPT4 1.4](https://huggingface.co/jondurbin/airoboros-13b-gpt4-1.4) merged with kaiokendev's [SuperHOT 8k](https://huggingface.co/kaiokendev/superhot-13b-8k-no-rlhf-test) LoRA.
The code to merge these can be found [here](https://files.catbox.moe/mg5v4g.py). Change information as needed.
NOTE: This requires a monkey patch to work. FlashVenom has, along with kindly quantising this model to 4bit, added the monkeypatch file to their repo. You can access this [here](https://huggingface.co/flashvenom/Airoboros-13B-SuperHOT-8K-4bit-GPTQ).
FROM THE ORIGINAL LORA MODEL CARD:
This is a second prototype of SuperHOT, this time with 4K context and no RLHF. In my testing, it can go all the way to 6K without breaking down, and I made the change with the intention of reaching 8K, so I'll assume it will go to 8K even though I only trained on 4K sequences.
In order to use the 8K context, you will need to apply the monkeypatch I have added in this repo -- without it, it will not work. The patch is very simple, and you can make the changes yourself (a rough sketch follows below):
- Increase `max_position_embeddings` to 8192 to stretch the sinusoidal position embeddings
- Stretch the frequency steps by a scale of 0.25
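A rough illustration of the scaling idea (this is not the original monkeypatch; it only shows the position-interpolation trick implied by the 0.25 scale):

```python
# Hedged sketch: scaling positions by 0.25 maps an 8192-token sequence
# back into the 0..2048 range the RoPE frequencies were trained on.
import torch

def scaled_position_ids(seq_len: int, scale: float = 0.25) -> torch.Tensor:
    return torch.arange(seq_len, dtype=torch.float32) * scale

print(scaled_position_ids(8192)[-1])  # tensor(2047.7500), inside the trained range
```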
|
SidXXD/Test_only_1_eps_16 | SidXXD | "2024-11-13T14:45:21Z" | 6 | 0 | diffusers | [
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"custom-diffusion",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | "2024-11-13T14:41:32Z" |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: photo of a <v1*> person
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- custom-diffusion
inference: true
---
# Custom Diffusion - SidXXD/Test_only_1_eps_16
These are Custom Diffusion adaption weights for runwayml/stable-diffusion-v1-5. The weights were trained on the instance prompt `photo of a <v1*> person` using [Custom Diffusion](https://www.cs.cmu.edu/~custom-diffusion). You can find some example images below.
For more details on the training, please follow [this link](https://github.com/huggingface/diffusers/blob/main/examples/custom_diffusion).
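A minimal inference sketch, following the diffusers Custom Diffusion example (the weight file names below are the defaults used by that example and are assumptions for this repo):

```python
# Hedged sketch: load the Custom Diffusion attention weights plus the new token.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.unet.load_attn_procs(
    "SidXXD/Test_only_1_eps_16", weight_name="pytorch_custom_diffusion_weights.bin"
)
pipe.load_textual_inversion("SidXXD/Test_only_1_eps_16", weight_name="<v1*>.bin")
image = pipe("photo of a <v1*> person", num_inference_steps=50).images[0]
image.save("person.png")
```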
|
gohzy/singlish-toxic-bert-IA3-159571-3 | gohzy | "2024-03-13T08:18:54Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-03-13T08:18:52Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
spyrok/llama-2-7b-chat-lolcode7 | spyrok | "2025-03-09T17:47:32Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-03-09T17:42:53Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
yoinked/merges | yoinked | "2024-02-03T06:11:00Z" | 0 | 7 | null | [
"art",
"text-to-image",
"en",
"license:other",
"region:us"
] | text-to-image | "2023-03-26T23:51:40Z" | ---
license: other
language:
- en
pipeline_tag: text-to-image
tags:
- art
---
some merges and/or ggml conversions
img: booru tags; use the `/awoo/` models preferably, as they're the best
all non-ggml models are licensed under yodayno v2:
```
This license allows you to use the model, but only for non-commercial purposes. You cannot use the model or any part of it in a paid service or sell it.
If you use the model on any platform, you must provide a link or reference to the original model. You must give credit to the licensor whenever you use the model.
The licensor does not provide any warranty and is not liable for any damages caused by the use of the model.
If you break any of the terms, this license will be terminated.
This license is governed by the laws of the jurisdiction in which the licensor is located.
``` |
pmranu/deepseek-for-reasoning | pmranu | "2025-02-26T08:10:08Z" | 4 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2025-02-25T09:56:16Z" | ---
base_model: unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** pmranu
- **License:** apache-2.0
- **Finetuned from model :** unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
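A minimal loading sketch (standard transformers API; loading this 4-bit checkpoint additionally requires `bitsandbytes` and a CUDA device):

```python
# Hedged sketch: load the 4-bit fine-tune for inference.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("pmranu/deepseek-for-reasoning")
model = AutoModelForCausalLM.from_pretrained(
    "pmranu/deepseek-for-reasoning", device_map="auto"
)
```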
|
mrferr3t/75661d4b-a41b-4faa-ba01-a492bad28d27 | mrferr3t | "2025-01-30T04:15:56Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:NousResearch/Yarn-Llama-2-7b-128k",
"base_model:adapter:NousResearch/Yarn-Llama-2-7b-128k",
"region:us"
] | null | "2025-01-30T02:38:45Z" | ---
library_name: peft
base_model: NousResearch/Yarn-Llama-2-7b-128k
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 75661d4b-a41b-4faa-ba01-a492bad28d27
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Yarn-Llama-2-7b-128k
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 00748ae27c0f3538_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/00748ae27c0f3538_train_data.json
type:
field_input: context
field_instruction: instruction
field_output: response
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_steps: 30
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: mrferr3t/75661d4b-a41b-4faa-ba01-a492bad28d27
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0005
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 100
micro_batch_size: 2
mlflow_experiment_name: /tmp/00748ae27c0f3538_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 300
saves_per_epoch: 0
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: d55b15aa-62e7-4486-8bc4-33f1c5e10ec7
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: d55b15aa-62e7-4486-8bc4-33f1c5e10ec7
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 75661d4b-a41b-4faa-ba01-a492bad28d27
This model is a fine-tuned version of [NousResearch/Yarn-Llama-2-7b-128k](https://huggingface.co/NousResearch/Yarn-Llama-2-7b-128k) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3377
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: adamw_bnb_8bit with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 7.0344 | 0.0006 | 1 | 1.6728 |
| 7.8571 | 0.0171 | 30 | 1.3879 |
| 6.0655 | 0.0341 | 60 | 1.3566 |
| 4.6908 | 0.0512 | 90 | 1.3377 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.3.1+cu121
- Datasets 3.0.1
- Tokenizers 0.20.1 |
QuantFactory/TherapyLlama-8B-v1-GGUF | QuantFactory | "2024-09-13T15:36:42Z" | 242 | 1 | null | [
"gguf",
"dataset:victunes/nart-100k-synthetic-buddy-mixed-names",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-09-13T14:39:04Z" |
---
license: cc-by-nc-4.0
datasets:
- victunes/nart-100k-synthetic-buddy-mixed-names
---
[](https://hf.co/QuantFactory)
# QuantFactory/TherapyLlama-8B-v1-GGUF
This is quantized version of [victunes/TherapyLlama-8B-v1](https://huggingface.co/victunes/TherapyLlama-8B-v1) created using llama.cpp
# Original Model Card
**GGUF:** https://huggingface.co/victunes/TherapyLlama-8B-v1-GGUF
# TherapyLlama 8B v1
_Buddy is here for you._

Trained on Llama 3 8B using a modified version of [jerryjalapeno/nart-100k-synthetic](https://huggingface.co/datasets/jerryjalapeno/nart-100k-synthetic).
It is a Llama 3 version of https://huggingface.co/victunes/TherapyBeagle-11B-v2
TherapyLlama is _hopefully_ aligned to be helpful, healthy, and comforting.
## IMPORTANT: Use Vicuna format
This was trained with the Vicuna 1.1 format. Use the Vicuna format! Do NOT use the Llama 3 format.
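For reference, the stock Vicuna 1.1 template looks like this (the system sentence below is the standard Vicuna one and may be adjusted):

```
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.
USER: {prompt}
ASSISTANT:
```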
## Usage
- Do not hold back on Buddy.
- Open up to Buddy.
- Pour your heart out to Buddy.
- Engage with Buddy.
- Remember that Buddy is just an AI.
## Notes
- Tested with the Vicuna 1.1 Format
- You might be assigned a random name if you don't give yourself one.
- RP capabilities are weaker compared to TherapyBeagle.
- Make sure to say **"You are Socrates"** for its best RP attempt.
## Disclaimer
**TherapyLlama is NOT a real therapist. It is a friendly AI that mimics empathy and psychotherapy.
It is an illusion without the slightest clue who you are as a person.
As much as it can help you with self-discovery, A LLAMA IS NOT A SUBSTITUTE FOR a real professional.**
With that said, I found it to be a very helpful tool in unravelling one's self. I hope this model can help soothe minds and hearts.
## Examples
### LLAMA 3 INSTRUCT
```
{{[INPUT]}}
Hello, I'm victunes
{{[OUTPUT]}}
Hi victunes! Welcome to our session. I'm here to listen and support you. How are you feeling today?
{{[INPUT]}}
I've been feeling down lately.
{{[OUTPUT]}}
I'm sorry to hear that. Can you tell me more about what's been going on?
{{[INPUT]}}
I think I'm pretty lonely.
{{[OUTPUT]}}
Loneliness can be really tough. It sounds like it's been weighing heavily on your mind. What makes you feel lonely?
{{[INPUT]}}
My friend died recently.
{{[OUTPUT]}}
Oh, I'm so sorry for your loss, victunes. Losing a friend is never easy. How have you been coping with the grief?
```
**GGUF:** https://huggingface.co/victunes/TherapyLlama-8B-v1-GGUF
|
Schadom/dqn-SpaceInvadersNoFrameskip-v4-v6 | Schadom | "2023-09-28T19:29:52Z" | 4 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2023-09-28T19:29:08Z" | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 796.00 +/- 319.83
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Schadom -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Schadom -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Schadom
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
fine-tuned/LegalBenchConsumerContractsQA-256-24-gpt-4o-2024-05-13-292605 | fine-tuned | "2024-05-22T11:03:56Z" | 4 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"mteb",
"custom_code",
"en",
"dataset:fine-tuned/LegalBenchConsumerContractsQA-256-24-gpt-4o-2024-05-13-292605",
"dataset:allenai/c4",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | "2024-05-22T11:03:43Z" | ---
license: apache-2.0
datasets:
- fine-tuned/LegalBenchConsumerContractsQA-256-24-gpt-4o-2024-05-13-292605
- allenai/c4
language:
- en
pipeline_tag: feature-extraction
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- mteb
---
This model is a fine-tuned version of [**jinaai/jina-embeddings-v2-base-en**](https://huggingface.co/jinaai/jina-embeddings-v2-base-en) designed for the following use case:
custom
## How to Use
This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim
model = SentenceTransformer(
    'fine-tuned/LegalBenchConsumerContractsQA-256-24-gpt-4o-2024-05-13-292605',
    trust_remote_code=True,
)
embeddings = model.encode([
    'first text to embed',
    'second text to embed',
])
print(cos_sim(embeddings[0], embeddings[1]))
```
|
akashjoy/distilbert-base-uncased-finetuned-emotion | akashjoy | "2024-03-26T02:10:45Z" | 107 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-03-22T04:37:15Z" | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: F1
type: f1
value: 0.9333997935723345
- name: Accuracy
type: accuracy
value: 0.9335
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1499
- F1: 0.9334
- Accuracy: 0.9335
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:--------:|
| 0.7725 | 1.0 | 250 | 0.2686 | 0.9184 | 0.918 |
| 0.2092 | 2.0 | 500 | 0.1734 | 0.9330 | 0.933 |
| 0.1394 | 3.0 | 750 | 0.1623 | 0.9356 | 0.935 |
| 0.1095 | 4.0 | 1000 | 0.1449 | 0.9368 | 0.937 |
| 0.0914 | 5.0 | 1250 | 0.1499 | 0.9334 | 0.9335 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu118
- Datasets 2.18.0
- Tokenizers 0.15.2
|
byczong/donut-ft-terms-struct | byczong | "2024-12-25T16:29:20Z" | 9 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"vision-encoder-decoder",
"image-text-to-text",
"legal",
"pl",
"dataset:byczong/pl-insurance-terms-struct",
"base_model:naver-clova-ix/donut-base",
"base_model:finetune:naver-clova-ix/donut-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | image-text-to-text | "2024-09-10T18:08:12Z" | ---
library_name: transformers
tags:
- legal
license: apache-2.0
datasets:
- byczong/pl-insurance-terms-struct
language:
- pl
base_model:
- naver-clova-ix/donut-base
pipeline_tag: image-text-to-text
---
# Model Card
Donut fine-tuned for full document structuring (parsing) on [pl-insurance-terms-struct](https://huggingface.co/datasets/byczong/pl-insurance-terms-struct) dataset.
Trained for 10 epochs with `max_seq_len=7168`.
- Field-level f1 score: 0.57
- TED-based accuracy: 0.67
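A minimal inference sketch (standard transformers Donut API; the task-start token below is a placeholder, as the actual prompt token for this fine-tune is not documented here):

```python
# Hedged sketch: parse a document page image with the fine-tuned Donut model.
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

processor = DonutProcessor.from_pretrained("byczong/donut-ft-terms-struct")
model = VisionEncoderDecoderModel.from_pretrained("byczong/donut-ft-terms-struct")

image = Image.open("page.png").convert("RGB")
pixel_values = processor(image, return_tensors="pt").pixel_values
task_prompt = "<s>"  # assumption: the real task token may differ
decoder_input_ids = processor.tokenizer(
    task_prompt, add_special_tokens=False, return_tensors="pt"
).input_ids
outputs = model.generate(
    pixel_values, decoder_input_ids=decoder_input_ids, max_length=7168
)
print(processor.batch_decode(outputs, skip_special_tokens=True)[0])
```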
Note: This model and its tokenizer were not (pre-) trained for Polish. |
johannes-garstenauer/distilbert_masking_heaps | johannes-garstenauer | "2023-10-30T13:27:33Z" | 105 | 1 | transformers | [
"transformers",
"pytorch",
"distilbert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2023-09-15T09:11:59Z" | DistilBERT for masked language modelling trained on OpenSSH heap data structures dataset for the purpose of generating representations.
This model was created for the thesis "Generating Robust Representations of Structures in OpenSSH Heap Dumps" by Johannes Garstenauer.
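A minimal sketch for extracting representations (standard transformers API; the input string is illustrative, since the exact heap-token format is documented in the thesis rather than here):

```python
# Hedged sketch: use the encoder's hidden states as structure representations.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("johannes-garstenauer/distilbert_masking_heaps")
model = AutoModel.from_pretrained("johannes-garstenauer/distilbert_masking_heaps")

inputs = tokenizer("example heap struct token sequence", return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state  # (1, seq_len, hidden_dim)
```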
### Model Description
- **Developed by:** Johannes Garstenauer
- **Funded by [optional]:** Universität Passau
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** https://zenodo.org/records/10053730
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
Training data: https://huggingface.co/datasets/johannes-garstenauer/structs_token_size_4_reduced_labelled_train
Validation data: https://huggingface.co/datasets/johannes-garstenauer/structs_token_size_4_reduced_labelled_eval |
NeoCodes-dev/q-FrozenLake-v1-4x4-noSlippery | NeoCodes-dev | "2023-05-15T20:54:49Z" | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2023-05-15T20:54:45Z" | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="NeoCodes-dev/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
bkbj/Atest | bkbj | "2024-11-16T10:16:25Z" | 7 | 0 | transformers | [
"transformers",
"llava_next_video",
"text-classification",
"Chat",
"chemistry",
"biology",
"legal",
"music",
"code",
"merge",
"text-generation-inference",
"art",
"vi",
"en",
"dataset:nvidia/OpenMathInstruct-2",
"dataset:fka/awesome-chatgpt-prompts",
"dataset:google/frames-benchmark",
"dataset:5CD-AI/Viet-LAION-Gemini-VQA",
"dataset:Vezora/Open-Critic-GPT",
"dataset:5CD-AI/Vietnamese-Ecommerce-Multi-turn-Chat",
"dataset:meta-math/MetaMathQA",
"dataset:meta-llama/Llama-3.2-3B-Instruct-evals",
"dataset:MacPaw/mac-app-store-apps-metadata",
"dataset:thu4n/WiFi-Vision-Counting",
"dataset:lmms-lab/LLaVA-Video-178K",
"dataset:migtissera/Synthia-Coder-v1.5-I",
"dataset:iamtarun/python_code_instructions_18k_alpaca",
"base_model:Qwen/Qwen2.5-Math-7B-Instruct",
"base_model:merge:Qwen/Qwen2.5-Math-7B-Instruct",
"base_model:dangvantuan/vietnamese-document-embedding",
"base_model:merge:dangvantuan/vietnamese-document-embedding",
"base_model:failspy/Phi-3-mini-4k-geminified",
"base_model:merge:failspy/Phi-3-mini-4k-geminified",
"base_model:google/gemma-2-2b-it",
"base_model:merge:google/gemma-2-2b-it",
"base_model:google/gemma-2-2b-jpn-it",
"base_model:merge:google/gemma-2-2b-jpn-it",
"base_model:keepitreal/vietnamese-sbert",
"base_model:merge:keepitreal/vietnamese-sbert",
"base_model:meta-llama/Llama-3.2-11B-Vision-Instruct",
"base_model:merge:meta-llama/Llama-3.2-11B-Vision-Instruct",
"base_model:microsoft/Florence-2-large",
"base_model:merge:microsoft/Florence-2-large",
"base_model:microsoft/OmniParser",
"base_model:merge:microsoft/OmniParser",
"base_model:neulab/Pangea-7B",
"base_model:merge:neulab/Pangea-7B",
"base_model:nvidia/Llama-3.1-Nemotron-70B-Instruct-HF",
"base_model:merge:nvidia/Llama-3.1-Nemotron-70B-Instruct-HF",
"base_model:nvidia/NVLM-D-72B",
"base_model:merge:nvidia/NVLM-D-72B",
"base_model:openai/whisper-large-v3-turbo",
"base_model:merge:openai/whisper-large-v3-turbo",
"base_model:rain1011/pyramid-flow-sd3",
"base_model:merge:rain1011/pyramid-flow-sd3",
"license:bigscience-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-10-28T22:42:51Z" | ---
license: bigscience-openrail-m
datasets:
- nvidia/OpenMathInstruct-2
- fka/awesome-chatgpt-prompts
- google/frames-benchmark
- 5CD-AI/Viet-LAION-Gemini-VQA
- Vezora/Open-Critic-GPT
- 5CD-AI/Vietnamese-Ecommerce-Multi-turn-Chat
- meta-math/MetaMathQA
- meta-llama/Llama-3.2-3B-Instruct-evals
- MacPaw/mac-app-store-apps-metadata
- thu4n/WiFi-Vision-Counting
- lmms-lab/LLaVA-Video-178K
- migtissera/Synthia-Coder-v1.5-I
- iamtarun/python_code_instructions_18k_alpaca
language:
- vi
- en
metrics:
- accuracy
- code_eval
- bertscore
- chrf
base_model:
- nvidia/NVLM-D-72B
- google/gemma-2-2b-jpn-it
- google/gemma-2-2b-it
- openai/whisper-large-v3-turbo
- failspy/Phi-3-mini-4k-geminified
- nvidia/Llama-3.1-Nemotron-70B-Instruct-HF
- Qwen/Qwen2.5-Math-7B-Instruct
- keepitreal/vietnamese-sbert
- dangvantuan/vietnamese-embedding-LongContext
- meta-llama/Llama-3.2-11B-Vision-Instruct
- rain1011/pyramid-flow-sd3
- microsoft/OmniParser
- neulab/Pangea-7B
- microsoft/Florence-2-large
new_version: openai/whisper-large-v3-turbo
library_name: transformers
tags:
- Chat
- chemistry
- biology
- legal
- music
- code
- merge
- text-generation-inference
- art
--- |
daoyuanzhai/distilgpt2-finetuned-wikitext2 | daoyuanzhai | "2023-12-24T09:46:26Z" | 5 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:distilbert/distilgpt2",
"base_model:finetune:distilbert/distilgpt2",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-12-24T09:14:17Z" | ---
license: apache-2.0
base_model: distilgpt2
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-wikitext2
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6434
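A minimal generation sketch (standard transformers pipeline API; the prompt is illustrative):

```python
# Hedged sketch: sample text from the fine-tuned distilgpt2.
from transformers import pipeline

generator = pipeline("text-generation", model="daoyuanzhai/distilgpt2-finetuned-wikitext2")
print(generator("The history of Wikipedia begins", max_new_tokens=30)[0]["generated_text"])
```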
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.7501 | 1.0 | 2334 | 3.6669 |
| 3.6498 | 2.0 | 4668 | 3.6464 |
| 3.5868 | 3.0 | 7002 | 3.6434 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.0
- Tokenizers 0.15.0
|
anthonyduer/ppo-LunarLander-v2 | anthonyduer | "2022-12-07T07:20:11Z" | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2022-12-07T07:19:40Z" | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 226.55 +/- 49.07
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below follows the usual `<algo>-<env>.zip` Hub convention and is an assumption):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load it.
checkpoint = load_from_hub(
    repo_id="anthonyduer/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",  # assumption: conventional filename
)
model = PPO.load(checkpoint)
```
|
edbeeching/dmlab_30_3333 | edbeeching | "2022-11-09T13:07:19Z" | 1 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2022-11-09T13:05:14Z" | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: dmlab_30
type: dmlab_30
metrics:
- type: mean_reward
value: 9.10 +/- 0.91
name: mean_reward
verified: false
---
An **APPO** model trained on the **dmlab_30** environment.
This model was trained using Sample Factory 2.0: https://github.com/alex-petrenko/sample-factory
|
ale-bay/zephyr-7b-dpo-full | ale-bay | "2024-05-07T12:04:34Z" | 4 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"alignment-handbook",
"trl",
"dpo",
"generated_from_trainer",
"conversational",
"dataset:HuggingFaceH4/ultrafeedback_binarized",
"base_model:ale-bay/zephyr-7b-sft-full",
"base_model:finetune:ale-bay/zephyr-7b-sft-full",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-05-07T10:15:10Z" | ---
license: apache-2.0
base_model: ale-bay/zephyr-7b-sft-full
tags:
- alignment-handbook
- trl
- dpo
- generated_from_trainer
- trl
- dpo
- generated_from_trainer
datasets:
- HuggingFaceH4/ultrafeedback_binarized
model-index:
- name: zephyr-7b-dpo-full
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# zephyr-7b-dpo-full
This model is a fine-tuned version of [ale-bay/zephyr-7b-sft-full](https://huggingface.co/ale-bay/zephyr-7b-sft-full) on the HuggingFaceH4/ultrafeedback_binarized dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5148
- Rewards/chosen: -0.9764
- Rewards/rejected: -1.9505
- Rewards/accuracies: 0.7656
- Rewards/margins: 0.9741
- Logps/rejected: -460.4252
- Logps/chosen: -362.5974
- Logits/rejected: 3.5330
- Logits/chosen: 3.0354
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.5965 | 0.21 | 100 | 0.6008 | -0.4349 | -0.7957 | 0.7148 | 0.3608 | -344.9378 | -308.4434 | -2.0640 | -2.1194 |
| 0.5688 | 0.42 | 200 | 0.5589 | -0.6365 | -1.1670 | 0.7383 | 0.5305 | -382.0739 | -328.6037 | -1.1455 | -1.2654 |
| 0.5121 | 0.63 | 300 | 0.5288 | -0.6931 | -1.5300 | 0.7617 | 0.8370 | -418.3772 | -334.2621 | 2.1389 | 1.7225 |
| 0.5208 | 0.84 | 400 | 0.5153 | -0.8705 | -1.8050 | 0.7578 | 0.9345 | -445.8741 | -352.0043 | 3.4324 | 2.9372 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.15.2
|
lloydmeta/bert-fine-tuned-cola | lloydmeta | "2024-03-17T02:54:10Z" | 110 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-03-17T02:48:54Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- matthews_correlation
base_model: bert-base-cased
model-index:
- name: bert-fine-tuned-cola
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-fine-tuned-cola
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7606
- Matthews Correlation: 0.6119
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.4626 | 1.0 | 1069 | 0.4405 | 0.5364 |
| 0.3158 | 2.0 | 2138 | 0.5962 | 0.5803 |
| 0.1915 | 3.0 | 3207 | 0.7606 | 0.6119 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
JunghwanRo/q-FrozenLake-v1-4x4-noSlippery | JunghwanRo | "2023-11-07T08:36:12Z" | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2023-11-07T08:36:10Z" | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="JunghwanRo/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
RichardErkhov/Adarsh-12_-_gpt2-finetuned-gguf | RichardErkhov | "2025-03-19T05:20:33Z" | 0 | 0 | null | [
"gguf",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2025-03-19T05:17:13Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
gpt2-finetuned - GGUF
- Model creator: https://huggingface.co/Adarsh-12/
- Original model: https://huggingface.co/Adarsh-12/gpt2-finetuned/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [gpt2-finetuned.Q2_K.gguf](https://huggingface.co/RichardErkhov/Adarsh-12_-_gpt2-finetuned-gguf/blob/main/gpt2-finetuned.Q2_K.gguf) | Q2_K | 0.08GB |
| [gpt2-finetuned.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Adarsh-12_-_gpt2-finetuned-gguf/blob/main/gpt2-finetuned.IQ3_XS.gguf) | IQ3_XS | 0.08GB |
| [gpt2-finetuned.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Adarsh-12_-_gpt2-finetuned-gguf/blob/main/gpt2-finetuned.IQ3_S.gguf) | IQ3_S | 0.08GB |
| [gpt2-finetuned.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Adarsh-12_-_gpt2-finetuned-gguf/blob/main/gpt2-finetuned.Q3_K_S.gguf) | Q3_K_S | 0.08GB |
| [gpt2-finetuned.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Adarsh-12_-_gpt2-finetuned-gguf/blob/main/gpt2-finetuned.IQ3_M.gguf) | IQ3_M | 0.09GB |
| [gpt2-finetuned.Q3_K.gguf](https://huggingface.co/RichardErkhov/Adarsh-12_-_gpt2-finetuned-gguf/blob/main/gpt2-finetuned.Q3_K.gguf) | Q3_K | 0.09GB |
| [gpt2-finetuned.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Adarsh-12_-_gpt2-finetuned-gguf/blob/main/gpt2-finetuned.Q3_K_M.gguf) | Q3_K_M | 0.09GB |
| [gpt2-finetuned.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Adarsh-12_-_gpt2-finetuned-gguf/blob/main/gpt2-finetuned.Q3_K_L.gguf) | Q3_K_L | 0.1GB |
| [gpt2-finetuned.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Adarsh-12_-_gpt2-finetuned-gguf/blob/main/gpt2-finetuned.IQ4_XS.gguf) | IQ4_XS | 0.1GB |
| [gpt2-finetuned.Q4_0.gguf](https://huggingface.co/RichardErkhov/Adarsh-12_-_gpt2-finetuned-gguf/blob/main/gpt2-finetuned.Q4_0.gguf) | Q4_0 | 0.1GB |
| [gpt2-finetuned.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Adarsh-12_-_gpt2-finetuned-gguf/blob/main/gpt2-finetuned.IQ4_NL.gguf) | IQ4_NL | 0.1GB |
| [gpt2-finetuned.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Adarsh-12_-_gpt2-finetuned-gguf/blob/main/gpt2-finetuned.Q4_K_S.gguf) | Q4_K_S | 0.1GB |
| [gpt2-finetuned.Q4_K.gguf](https://huggingface.co/RichardErkhov/Adarsh-12_-_gpt2-finetuned-gguf/blob/main/gpt2-finetuned.Q4_K.gguf) | Q4_K | 0.11GB |
| [gpt2-finetuned.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Adarsh-12_-_gpt2-finetuned-gguf/blob/main/gpt2-finetuned.Q4_K_M.gguf) | Q4_K_M | 0.11GB |
| [gpt2-finetuned.Q4_1.gguf](https://huggingface.co/RichardErkhov/Adarsh-12_-_gpt2-finetuned-gguf/blob/main/gpt2-finetuned.Q4_1.gguf) | Q4_1 | 0.11GB |
| [gpt2-finetuned.Q5_0.gguf](https://huggingface.co/RichardErkhov/Adarsh-12_-_gpt2-finetuned-gguf/blob/main/gpt2-finetuned.Q5_0.gguf) | Q5_0 | 0.11GB |
| [gpt2-finetuned.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Adarsh-12_-_gpt2-finetuned-gguf/blob/main/gpt2-finetuned.Q5_K_S.gguf) | Q5_K_S | 0.11GB |
| [gpt2-finetuned.Q5_K.gguf](https://huggingface.co/RichardErkhov/Adarsh-12_-_gpt2-finetuned-gguf/blob/main/gpt2-finetuned.Q5_K.gguf) | Q5_K | 0.12GB |
| [gpt2-finetuned.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Adarsh-12_-_gpt2-finetuned-gguf/blob/main/gpt2-finetuned.Q5_K_M.gguf) | Q5_K_M | 0.12GB |
| [gpt2-finetuned.Q5_1.gguf](https://huggingface.co/RichardErkhov/Adarsh-12_-_gpt2-finetuned-gguf/blob/main/gpt2-finetuned.Q5_1.gguf) | Q5_1 | 0.12GB |
| [gpt2-finetuned.Q6_K.gguf](https://huggingface.co/RichardErkhov/Adarsh-12_-_gpt2-finetuned-gguf/blob/main/gpt2-finetuned.Q6_K.gguf) | Q6_K | 0.13GB |
| [gpt2-finetuned.Q8_0.gguf](https://huggingface.co/RichardErkhov/Adarsh-12_-_gpt2-finetuned-gguf/blob/main/gpt2-finetuned.Q8_0.gguf) | Q8_0 | 0.17GB |
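These GGUF files run on any llama.cpp-compatible runtime. As a hedged sketch (not part of the original card), one of the quants from the table could be loaded with `llama-cpp-python` after downloading it locally:
```python
from llama_cpp import Llama

# Point model_path at a downloaded quant from the table above.
llm = Llama(model_path="gpt2-finetuned.Q4_K_M.gguf")
out = llm("Once upon a time", max_tokens=64)
print(out["choices"][0]["text"])
```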
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
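Since the card leaves this section blank, here is only a hedged sketch inferred from the repository name (a GPT-2 fine-tune), using the standard `transformers` causal-LM API against the original full-precision repo linked above:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Original (non-GGUF) repository referenced at the top of this card.
tokenizer = AutoTokenizer.from_pretrained("Adarsh-12/gpt2-finetuned")
model = AutoModelForCausalLM.from_pretrained("Adarsh-12/gpt2-finetuned")

inputs = tokenizer("Once upon a time", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=40)[0]))
```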
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
LEESIHYUN/xlm-roberta-base-finetuned-panx-fr | LEESIHYUN | "2024-10-28T09:48:50Z" | 106 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2024-07-20T21:56:30Z" | ---
library_name: transformers
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2750
- F1: 0.8495
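The card does not include a usage snippet; a minimal sketch (not part of the original card) using the `transformers` token-classification pipeline, assuming PAN-X-style NER labels, would look like:
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="LEESIHYUN/xlm-roberta-base-finetuned-panx-fr",
    aggregation_strategy="simple",  # merge word pieces into entity spans
)
print(ner("Emmanuel Macron est né à Amiens."))
```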
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.5647 | 1.0 | 191 | 0.3242 | 0.7728 |
| 0.2671 | 2.0 | 382 | 0.2672 | 0.8202 |
| 0.1744 | 3.0 | 573 | 0.2750 | 0.8495 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.5.0+cu121
- Datasets 3.0.2
- Tokenizers 0.19.1
|
Naying0206/b2b-lora-60 | Naying0206 | "2024-04-09T10:49:32Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-04-09T10:49:30Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
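The card leaves this blank, and neither the base model nor the task is documented. Purely as a hedged illustration, and assuming from the repository name that this is a PEFT LoRA adapter, loading could look like the sketch below; `"BASE_MODEL_ID"` is a hypothetical placeholder.
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

# "BASE_MODEL_ID" is hypothetical — the card does not say which base model was used.
base = AutoModelForCausalLM.from_pretrained("BASE_MODEL_ID")
model = PeftModel.from_pretrained(base, "Naying0206/b2b-lora-60")
```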
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
jet-taekyo/mpnet_finetuned_recursive | jet-taekyo | "2024-09-26T01:23:30Z" | 49 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"mpnet",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:714",
"loss:MatryoshkaLoss",
"loss:MultipleNegativesRankingLoss",
"arxiv:1908.10084",
"arxiv:2205.13147",
"arxiv:1705.00652",
"base_model:sentence-transformers/all-mpnet-base-v2",
"base_model:finetune:sentence-transformers/all-mpnet-base-v2",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | sentence-similarity | "2024-09-26T01:23:12Z" | ---
base_model: sentence-transformers/all-mpnet-base-v2
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
- dot_accuracy@1
- dot_accuracy@3
- dot_accuracy@5
- dot_accuracy@10
- dot_precision@1
- dot_precision@3
- dot_precision@5
- dot_precision@10
- dot_recall@1
- dot_recall@3
- dot_recall@5
- dot_recall@10
- dot_ndcg@10
- dot_mrr@10
- dot_map@100
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:714
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: What does the term 'rights, opportunities, or access' encompass
in this framework?
sentences:
- "10 \nGAI systems can ease the unintentional production or dissemination of false,\
\ inaccurate, or misleading \ncontent (misinformation) at scale, particularly\
\ if the content stems from confabulations. \nGAI systems can also ease the deliberate\
\ production or dissemination of false or misleading information \n(disinformation)\
\ at scale, where an actor has the explicit intent to deceive or cause harm to\
\ others. Even \nvery subtle changes to text or images can manipulate human and\
\ machine perception. \nSimilarly, GAI systems could enable a higher degree of\
\ sophistication for malicious actors to produce \ndisinformation that is targeted\
\ towards specific demographics. Current and emerging multimodal models \nmake\
\ it possible to generate both text-based disinformation and highly realistic\
\ “deepfakes” – that is, \nsynthetic audiovisual content and photorealistic images.12\
\ Additional disinformation threats could be \nenabled by future GAI models trained\
\ on new data modalities."
- '74. See, e.g., Heather Morrison. Virtual Testing Puts Disabled Students at a
Disadvantage. Government
Technology. May 24, 2022.
https://www.govtech.com/education/k-12/virtual-testing-puts-disabled-students-at-a-disadvantage;
Lydia X. Z. Brown, Ridhi Shetty, Matt Scherer, and Andrew Crawford. Ableism And
Disability
Discrimination In New Surveillance Technologies: How new surveillance technologies
in education,
policing, health care, and the workplace disproportionately harm disabled people.
Center for Democracy
and Technology Report. May 24, 2022.
https://cdt.org/insights/ableism-and-disability-discrimination-in-new-surveillance-technologies-how
new-surveillance-technologies-in-education-policing-health-care-and-the-workplace
disproportionately-harm-disabled-people/
69'
- "persons, Asian Americans and Pacific Islanders and other persons of color; members\
\ of religious minorities; \nwomen, girls, and non-binary people; lesbian, gay,\
\ bisexual, transgender, queer, and intersex (LGBTQI+) \npersons; older adults;\
\ persons with disabilities; persons who live in rural areas; and persons otherwise\
\ adversely \naffected by persistent poverty or inequality. \nRIGHTS, OPPORTUNITIES,\
\ OR ACCESS: “Rights, opportunities, or access” is used to indicate the scoping\
\ \nof this framework. It describes the set of: civil rights, civil liberties,\
\ and privacy, including freedom of speech, \nvoting, and protections from discrimination,\
\ excessive punishment, unlawful surveillance, and violations of \nprivacy and\
\ other freedoms in both public and private sector contexts; equal opportunities,\
\ including equitable \naccess to education, housing, credit, employment, and\
\ other programs; or, access to critical resources or"
- source_sentence: What are some broad negative risks associated with GAI design,
development, and deployment?
sentences:
- "actually occurring, or large-scale risks could occur); and broad GAI negative\
\ risks, \nincluding: Immature safety or risk cultures related to AI and GAI design,\
\ \ndevelopment and deployment, public information integrity risks, including\
\ impacts \non democratic processes, unknown long-term performance characteristics\
\ of GAI. \nInformation Integrity; Dangerous, \nViolent, or Hateful Content; CBRN\
\ \nInformation or Capabilities \nGV-1.3-007 Devise a plan to halt development\
\ or deployment of a GAI system that poses \nunacceptable negative risk. \nCBRN\
\ Information and Capability; \nInformation Security; Information \nIntegrity\
\ \nAI Actor Tasks: Governance and Oversight \n \nGOVERN 1.4: The risk management\
\ process and its outcomes are established through transparent policies, procedures,\
\ and other \ncontrols based on organizational risk priorities. \nAction ID \n\
Suggested Action \nGAI Risks \nGV-1.4-001 \nEstablish policies and mechanisms\
\ to prevent GAI systems from generating"
- "39 \nMS-3.3-004 \nProvide input for training materials about the capabilities\
\ and limitations of GAI \nsystems related to digital content transparency for\
\ AI Actors, other \nprofessionals, and the public about the societal impacts\
\ of AI and the role of \ndiverse and inclusive content generation. \nHuman-AI\
\ Configuration; \nInformation Integrity; Harmful Bias \nand Homogenization \n\
MS-3.3-005 \nRecord and integrate structured feedback about content provenance\
\ from \noperators, users, and potentially impacted communities through the use\
\ of \nmethods such as user research studies, focus groups, or community forums.\
\ \nActively seek feedback on generated content quality and potential biases.\
\ \nAssess the general awareness among end users and impacted communities \nabout\
\ the availability of these feedback channels. \nHuman-AI Configuration; \nInformation\
\ Integrity; Harmful Bias \nand Homogenization \nAI Actor Tasks: AI Deployment,\
\ Affected Individuals and Communities, End-Users, Operation and Monitoring, TEVV"
- "NOTICE & \nEXPLANATION \nWHY THIS PRINCIPLE IS IMPORTANT\nThis section provides\
\ a brief summary of the problems which the principle seeks to address and protect\
\ \nagainst, including illustrative examples. \nAutomated systems now determine\
\ opportunities, from employment to credit, and directly shape the American \n\
public’s experiences, from the courtroom to online classrooms, in ways that profoundly\
\ impact people’s lives. But this \nexpansive impact is not always visible. An\
\ applicant might not know whether a person rejected their resume or a \nhiring\
\ algorithm moved them to the bottom of the list. A defendant in the courtroom\
\ might not know if a judge deny\ning their bail is informed by an automated\
\ system that labeled them “high risk.” From correcting errors to contesting \n\
decisions, people are often denied the knowledge they need to address the impact\
\ of automated systems on their lives."
- source_sentence: Who should conduct the assessment of the impact of surveillance
on rights and opportunities?
sentences:
- "APPENDIX\n•\nJulia Simon-Mishel, Supervising Attorney, Philadelphia Legal Assistance\n\
•\nDr. Zachary Mahafza, Research & Data Analyst, Southern Poverty Law Center\n\
•\nJ. Khadijah Abdurahman, Tech Impact Network Research Fellow, AI Now Institute,\
\ UCLA C2I1, and\nUWA Law School\nPanelists separately described the increasing\
\ scope of technology use in providing for social welfare, including \nin fraud\
\ detection, digital ID systems, and other methods focused on improving efficiency\
\ and reducing cost. \nHowever, various panelists individually cautioned that\
\ these systems may reduce burden for government \nagencies by increasing the\
\ burden and agency of people using and interacting with these technologies. \n\
Additionally, these systems can produce feedback loops and compounded harm, collecting\
\ data from \ncommunities and using it to reinforce inequality. Various panelists\
\ suggested that these harms could be"
- "assessments, including data retention timelines and associated justification,\
\ and an assessment of the \nimpact of surveillance or data collection on rights,\
\ opportunities, and access. Where possible, this \nassessment of the impact of\
\ surveillance should be done by an independent party. Reporting should be \n\
provided in a clear and machine-readable manner. \n35"
- "access to education, housing, credit, employment, and other programs; or, access\
\ to critical resources or \nservices, such as healthcare, financial services,\
\ safety, social services, non-deceptive information about goods \nand services,\
\ and government benefits. \n10"
- source_sentence: How can voting-related systems impact privacy and security?
sentences:
- "as custody and divorce information, and home, work, or school environmental data);\
\ or have the reasonable potential \nto be used in ways that are likely to expose\
\ individuals to meaningful harm, such as a loss of privacy or financial harm\
\ \ndue to identity theft. Data and metadata generated by or about those who are\
\ not yet legal adults is also sensitive, even \nif not related to a sensitive\
\ domain. Such data includes, but is not limited to, numerical, text, image, audio,\
\ or video \ndata. “Sensitive domains” are those in which activities being conducted\
\ can cause material harms, including signifi\ncant adverse effects on human\
\ rights such as autonomy and dignity, as well as civil liberties and civil rights.\
\ Domains \nthat have historically been singled out as deserving of enhanced data\
\ protections or where such enhanced protections \nare reasonably expected by\
\ the public include, but are not limited to, health, family planning and care,\
\ employment,"
- "agreed upon the importance of advisory boards and compensated community input\
\ early in the design process \n(before the technology is built and instituted).\
\ Various panelists also emphasized the importance of regulation \nthat includes\
\ limits to the type and cost of such technologies. \n56"
- "Surveillance and criminal justice system algorithms such as risk assessments,\
\ predictive \n policing, automated license plate readers, real-time facial\
\ recognition systems (especially \n those used in public places or during\
\ protected activities like peaceful protests), social media \n monitoring,\
\ and ankle monitoring devices; \nVoting-related systems such as signature matching\
\ tools; \nSystems with a potential privacy impact such as smart home systems\
\ and associated data, \n systems that use or collect health-related data,\
\ systems that use or collect education-related \n data, criminal justice\
\ system data, ad-targeting systems, and systems that perform big data \n \
\ analytics in order to build profiles or infer personal information about individuals;\
\ and \nAny system that has the meaningful potential to lead to algorithmic discrimination.\
\ \n• Equal opportunities, including but not limited to:"
- source_sentence: What impact do automated systems have on underserved communities?
sentences:
- "generation, summarization, search, and chat. These activities can take place\
\ within organizational \nsettings or in the public domain. \nOrganizations can\
\ restrict AI applications that cause harm, exceed stated risk tolerances, or\
\ that conflict \nwith their tolerances or values. Governance tools and protocols\
\ that are applied to other types of AI \nsystems can be applied to GAI systems.\
\ These plans and actions include: \n• Accessibility and reasonable \naccommodations\
\ \n• AI actor credentials and qualifications \n• Alignment to organizational\
\ values \n• Auditing and assessment \n• Change-management controls \n• Commercial\
\ use \n• Data provenance"
- "automated systems make on underserved communities and to institute proactive\
\ protections that support these \ncommunities. \n•\nAn automated system using\
\ nontraditional factors such as educational attainment and employment history\
\ as\npart of its loan underwriting and pricing model was found to be much more\
\ likely to charge an applicant who\nattended a Historically Black College or\
\ University (HBCU) higher loan prices for refinancing a student loan\nthan an\
\ applicant who did not attend an HBCU. This was found to be true even when controlling\
\ for\nother credit-related factors.32\n•\nA hiring tool that learned the features\
\ of a company's employees (predominantly men) rejected women appli\ncants for\
\ spurious and discriminatory reasons; resumes with the word “women’s,” such as\
\ “women’s\nchess club captain,” were penalized in the candidate ranking.33\n\
•\nA predictive model marketed as being able to predict whether students are likely\
\ to drop out of school was"
- "on a principle of local control, such that those individuals closest to the data\
\ subject have more access while \nthose who are less proximate do not (e.g.,\
\ a teacher has access to their students’ daily progress data while a \nsuperintendent\
\ does not). \nReporting. In addition to the reporting on data privacy (as listed\
\ above for non-sensitive data), entities devel-\noping technologies related to\
\ a sensitive domain and those collecting, using, storing, or sharing sensitive\
\ data \nshould, whenever appropriate, regularly provide public reports describing:\
\ any data security lapses or breaches \nthat resulted in sensitive data leaks;\
\ the number, type, and outcomes of ethical pre-reviews undertaken; a \ndescription\
\ of any data sold, shared, or made public, and how that data was assessed to\
\ determine it did not pres-\nent a sensitive data risk; and ongoing risk identification\
\ and management procedures, and any mitigation added"
model-index:
- name: SentenceTransformer based on sentence-transformers/all-mpnet-base-v2
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: Unknown
type: unknown
metrics:
- type: cosine_accuracy@1
value: 0.8881578947368421
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.993421052631579
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.993421052631579
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 1.0
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.8881578947368421
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.331140350877193
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.19868421052631577
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09999999999999999
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.8881578947368421
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.993421052631579
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.993421052631579
name: Cosine Recall@5
- type: cosine_recall@10
value: 1.0
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.9550417755482483
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.9395363408521302
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.9395363408521302
name: Cosine Map@100
- type: dot_accuracy@1
value: 0.8881578947368421
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.993421052631579
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.993421052631579
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 1.0
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.8881578947368421
name: Dot Precision@1
- type: dot_precision@3
value: 0.331140350877193
name: Dot Precision@3
- type: dot_precision@5
value: 0.19868421052631577
name: Dot Precision@5
- type: dot_precision@10
value: 0.09999999999999999
name: Dot Precision@10
- type: dot_recall@1
value: 0.8881578947368421
name: Dot Recall@1
- type: dot_recall@3
value: 0.993421052631579
name: Dot Recall@3
- type: dot_recall@5
value: 0.993421052631579
name: Dot Recall@5
- type: dot_recall@10
value: 1.0
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.9550417755482483
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.9395363408521302
name: Dot Mrr@10
- type: dot_map@100
value: 0.9395363408521302
name: Dot Map@100
---
# SentenceTransformer based on sentence-transformers/all-mpnet-base-v2
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) <!-- at revision 84f2bcc00d77236f9e89c8a360a00fb1139bf47d -->
- **Maximum Sequence Length:** 384 tokens
- **Output Dimensionality:** 768 tokens
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("jet-taekyo/mpnet_finetuned_recursive")
# Run inference
sentences = [
'What impact do automated systems have on underserved communities?',
"automated systems make on underserved communities and to institute proactive protections that support these \ncommunities. \n•\nAn automated system using nontraditional factors such as educational attainment and employment history as\npart of its loan underwriting and pricing model was found to be much more likely to charge an applicant who\nattended a Historically Black College or University (HBCU) higher loan prices for refinancing a student loan\nthan an applicant who did not attend an HBCU. This was found to be true even when controlling for\nother credit-related factors.32\n•\nA hiring tool that learned the features of a company's employees (predominantly men) rejected women appli\xad\ncants for spurious and discriminatory reasons; resumes with the word “women’s,” such as “women’s\nchess club captain,” were penalized in the candidate ranking.33\n•\nA predictive model marketed as being able to predict whether students are likely to drop out of school was",
'on a principle of local control, such that those individuals closest to the data subject have more access while \nthose who are less proximate do not (e.g., a teacher has access to their students’ daily progress data while a \nsuperintendent does not). \nReporting. In addition to the reporting on data privacy (as listed above for non-sensitive data), entities devel-\noping technologies related to a sensitive domain and those collecting, using, storing, or sharing sensitive data \nshould, whenever appropriate, regularly provide public reports describing: any data security lapses or breaches \nthat resulted in sensitive data leaks; the number, type, and outcomes of ethical pre-reviews undertaken; a \ndescription of any data sold, shared, or made public, and how that data was assessed to determine it did not pres-\nent a sensitive data risk; and ongoing risk identification and management procedures, and any mitigation added',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.8882 |
| cosine_accuracy@3 | 0.9934 |
| cosine_accuracy@5 | 0.9934 |
| cosine_accuracy@10 | 1.0 |
| cosine_precision@1 | 0.8882 |
| cosine_precision@3 | 0.3311 |
| cosine_precision@5 | 0.1987 |
| cosine_precision@10 | 0.1 |
| cosine_recall@1 | 0.8882 |
| cosine_recall@3 | 0.9934 |
| cosine_recall@5 | 0.9934 |
| cosine_recall@10 | 1.0 |
| cosine_ndcg@10 | 0.955 |
| cosine_mrr@10 | 0.9395 |
| **cosine_map@100** | **0.9395** |
| dot_accuracy@1 | 0.8882 |
| dot_accuracy@3 | 0.9934 |
| dot_accuracy@5 | 0.9934 |
| dot_accuracy@10 | 1.0 |
| dot_precision@1 | 0.8882 |
| dot_precision@3 | 0.3311 |
| dot_precision@5 | 0.1987 |
| dot_precision@10 | 0.1 |
| dot_recall@1 | 0.8882 |
| dot_recall@3 | 0.9934 |
| dot_recall@5 | 0.9934 |
| dot_recall@10 | 1.0 |
| dot_ndcg@10 | 0.955 |
| dot_mrr@10 | 0.9395 |
| dot_map@100 | 0.9395 |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 714 training samples
* Columns: <code>sentence_0</code> and <code>sentence_1</code>
* Approximate statistics based on the first 714 samples:
| | sentence_0 | sentence_1 |
|:--------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 10 tokens</li><li>mean: 18.43 tokens</li><li>max: 32 tokens</li></ul> | <ul><li>min: 22 tokens</li><li>mean: 175.4 tokens</li><li>max: 384 tokens</li></ul> |
* Samples:
| sentence_0 | sentence_1 |
|:---------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>What information should designers and developers provide about automated systems to ensure transparency?</code> | <code>You should know that an automated system is being used, <br>and understand how and why it contributes to outcomes <br>that impact you. Designers, developers, and deployers of automat<br>ed systems should provide generally accessible plain language docu<br>mentation including clear descriptions of the overall system func<br>tioning and the role automation plays, notice that such systems are in <br>use, the individual or organization responsible for the system, and ex<br>planations of outcomes that are clear, timely, and accessible. Such <br>notice should be kept up-to-date and people impacted by the system <br>should be notified of significant use case or key functionality chang<br>es. You should know how and why an outcome impacting you was de<br>termined by an automated system, including when the automated <br>system is not the sole input determining the outcome. Automated <br>systems should provide explanations that are technically valid, <br>meaningful and useful to you and to any operators or others who</code> |
| <code>Why is it important for individuals impacted by automated systems to be notified of significant changes in functionality?</code> | <code>You should know that an automated system is being used, <br>and understand how and why it contributes to outcomes <br>that impact you. Designers, developers, and deployers of automat<br>ed systems should provide generally accessible plain language docu<br>mentation including clear descriptions of the overall system func<br>tioning and the role automation plays, notice that such systems are in <br>use, the individual or organization responsible for the system, and ex<br>planations of outcomes that are clear, timely, and accessible. Such <br>notice should be kept up-to-date and people impacted by the system <br>should be notified of significant use case or key functionality chang<br>es. You should know how and why an outcome impacting you was de<br>termined by an automated system, including when the automated <br>system is not the sole input determining the outcome. Automated <br>systems should provide explanations that are technically valid, <br>meaningful and useful to you and to any operators or others who</code> |
| <code>What specific technical questions does the questionnaire for evaluating software workers cover?</code> | <code>questionnaire that businesses can use proactively when procuring software to evaluate workers. It covers <br>specific technical questions such as the training data used, model training process, biases identified, and <br>mitigation steps employed.55 <br>Standards organizations have developed guidelines to incorporate accessibility criteria <br>into technology design processes. The most prevalent in the United States is the Access Board’s Section <br>508 regulations,56 which are the technical standards for federal information communication technology (software, <br>hardware, and web). Other standards include those issued by the International Organization for <br>Standardization,57 and the World Wide Web Consortium Web Content Accessibility Guidelines,58 a globally <br>recognized voluntary consensus standard for web content and other information and communications <br>technology. <br>NIST has released Special Publication 1270, Towards a Standard for Identifying and Managing Bias</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
768,
512,
256,
128,
64
],
"matryoshka_weights": [
1,
1,
1,
1,
1
],
"n_dims_per_step": -1
}
```
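Because the model was trained with MatryoshkaLoss at dimensions 768/512/256/128/64, embeddings can be truncated to one of those prefixes and renormalized with only a small quality loss. A minimal sketch (not part of the original card):
```python
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("jet-taekyo/mpnet_finetuned_recursive")
emb = model.encode(["What rights does this framework cover?"])

# Keep the first 256 dimensions, then L2-renormalize before cosine search.
dim = 256
small = emb[:, :dim]
small = small / np.linalg.norm(small, axis=1, keepdims=True)
print(small.shape)  # (1, 256)
```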
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 20
- `per_device_eval_batch_size`: 20
- `num_train_epochs`: 5
- `multi_dataset_batch_sampler`: round_robin
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 20
- `per_device_eval_batch_size`: 20
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 5
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `eval_use_gather_object`: False
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin
</details>
### Training Logs
| Epoch | Step | cosine_map@100 |
|:-----:|:----:|:--------------:|
| 1.0 | 36 | 0.9395 |
### Framework Versions
- Python: 3.11.9
- Sentence Transformers: 3.1.0
- Transformers: 4.44.2
- PyTorch: 2.4.1+cu121
- Accelerate: 0.34.2
- Datasets: 3.0.0
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
unsloth/gemma-3-12b-pt | unsloth | "2025-03-13T13:51:09Z" | 566 | 3 | transformers | [
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"unsloth",
"gemma",
"google",
"en",
"arxiv:1905.07830",
"arxiv:1905.10044",
"arxiv:1911.11641",
"arxiv:1904.09728",
"arxiv:1705.03551",
"arxiv:1911.01547",
"arxiv:1907.10641",
"arxiv:1903.00161",
"arxiv:2009.03300",
"arxiv:2304.06364",
"arxiv:2103.03874",
"arxiv:2110.14168",
"arxiv:2311.12022",
"arxiv:2108.07732",
"arxiv:2107.03374",
"arxiv:2210.03057",
"arxiv:2106.03193",
"arxiv:1910.11856",
"arxiv:2502.12404",
"arxiv:2502.21228",
"arxiv:2404.16816",
"arxiv:2104.12756",
"arxiv:2311.16502",
"arxiv:2203.10244",
"arxiv:2404.12390",
"arxiv:1810.12440",
"arxiv:1908.02660",
"arxiv:2312.11805",
"base_model:google/gemma-3-12b-pt",
"base_model:finetune:google/gemma-3-12b-pt",
"license:gemma",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | image-text-to-text | "2025-03-12T06:55:54Z" | ---
base_model: google/gemma-3-12b-pt
language:
- en
library_name: transformers
license: gemma
tags:
- unsloth
- transformers
- gemma3
- gemma
- google
---
<div>
<p style="margin-bottom: 0; margin-top: 0;">
<strong>See <a href="https://huggingface.co/collections/unsloth/gemma-3-67d12b7e8816ec6efa7e4e5b">our collection</a> for all versions of Gemma 3 including GGUF, 4-bit & 16-bit formats.</strong>
</p>
<p style="margin-bottom: 0;">
<em><a href="https://docs.unsloth.ai/basics/tutorial-how-to-run-gemma-3-effectively">Read our Guide</a> to see how to Run Gemma 3 correctly.</em>
</p>
<div style="display: flex; gap: 5px; align-items: center; ">
<a href="https://github.com/unslothai/unsloth/">
<img src="https://github.com/unslothai/unsloth/raw/main/images/unsloth%20new%20logo.png" width="133">
</a>
<a href="https://discord.gg/unsloth">
<img src="https://github.com/unslothai/unsloth/raw/main/images/Discord%20button.png" width="173">
</a>
<a href="https://docs.unsloth.ai/basics/tutorial-how-to-run-deepseek-r1-on-your-own-local-device">
<img src="https://raw.githubusercontent.com/unslothai/unsloth/refs/heads/main/images/documentation%20green%20button.png" width="143">
</a>
</div>
<h1 style="margin-top: 0rem;">✨ Fine-tune Gemma 3 with Unsloth!</h1>
</div>
- Fine-tune Gemma 3 (12B) for free using our Google [Colab notebook here](https://docs.unsloth.ai/get-started/unsloth-notebooks)!
- Read our Blog about Gemma 3 support: [unsloth.ai/blog/gemma3](https://unsloth.ai/blog/gemma3)
- View the rest of our notebooks in our [docs here](https://docs.unsloth.ai/get-started/unsloth-notebooks).
- Export your fine-tuned model to GGUF, Ollama, llama.cpp or 🤗HF.
| Unsloth supports | Free Notebooks | Performance | Memory use |
|-----------------|--------------------------------------------------------------------------------------------------------------------------|-------------|----------|
| **GRPO with Gemma 3 (12B)** | [▶️ Start on Colab](https://docs.unsloth.ai/get-started/unsloth-notebooks) | 2x faster | 80% less |
| **Llama-3.2 (3B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.2_(1B_and_3B)-Conversational.ipynb) | 2.4x faster | 58% less |
| **Llama-3.2 (11B vision)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.2_(11B)-Vision.ipynb) | 2x faster | 60% less |
| **Qwen2.5 (7B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Qwen2.5_(7B)-Alpaca.ipynb) | 2x faster | 60% less |
| **Phi-4 (14B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Phi_4-Conversational.ipynb) | 2x faster | 50% less |
| **Mistral (7B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Mistral_v0.3_(7B)-Conversational.ipynb) | 2.2x faster | 62% less |
<br>
# Gemma 3 model card
**Model Page**: [Gemma](https://ai.google.dev/gemma/docs/core)
**Resources and Technical Documentation**:
* [Gemma 3 Technical Report][g3-tech-report]
* [Responsible Generative AI Toolkit][rai-toolkit]
* [Gemma on Kaggle][kaggle-gemma]
* [Gemma on Vertex Model Garden][vertex-mg-gemma3]
**Terms of Use**: [Terms][terms]
**Authors**: Google DeepMind
## Model Information
Summary description and brief definition of inputs and outputs.
### Description
Gemma is a family of lightweight, state-of-the-art open models from Google,
built from the same research and technology used to create the Gemini models.
Gemma 3 models are multimodal, handling text and image input and generating text
output, with open weights for both pre-trained variants and instruction-tuned
variants. Gemma 3 has a large, 128K context window, multilingual support in over
140 languages, and is available in more sizes than previous versions. Gemma 3
models are well-suited for a variety of text generation and image understanding
tasks, including question answering, summarization, and reasoning. Their
relatively small size makes it possible to deploy them in environments with
limited resources such as laptops, desktops or your own cloud infrastructure,
democratizing access to state of the art AI models and helping foster innovation
for everyone.
### Inputs and outputs
- **Input:**
- Text string, such as a question, a prompt, or a document to be summarized
- Images, normalized to 896 x 896 resolution and encoded to 256 tokens
each
- Total input context of 128K tokens for the 4B, 12B, and 27B sizes, and
32K tokens for the 1B size
- **Output:**
- Generated text in response to the input, such as an answer to a
question, analysis of image content, or a summary of a document
- Total output context of 8192 tokens
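As a hedged illustration (not part of the original card), a `transformers` release with Gemma 3 support can run this pre-trained checkpoint through the `image-text-to-text` pipeline; exact arguments and minimum version may differ, and the image URL below is a hypothetical placeholder.
```python
import torch
from transformers import pipeline

# Requires a transformers release with Gemma 3 support.
pipe = pipeline(
    "image-text-to-text",
    model="unsloth/gemma-3-12b-pt",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# "<start_of_image>" marks where the image is inserted into the prompt.
out = pipe(
    "https://example.com/photo.jpg",  # hypothetical image URL
    text="<start_of_image> In this image, there is",
)
print(out)
```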
### Citation
```none
@article{gemma_2025,
title={Gemma 3},
url={https://goo.gle/Gemma3Report},
publisher={Kaggle},
author={Gemma Team},
year={2025}
}
```
## Model Data
Data used for model training and how the data was processed.
### Training Dataset
These models were trained on a dataset of text data that includes a wide variety
of sources. The 27B model was trained with 14 trillion tokens, the 12B model
with 12 trillion tokens, the 4B model with 4 trillion tokens, and the 1B model
with 2 trillion tokens. Here are the key components:
- Web Documents: A diverse collection of web text ensures the model is
exposed to a broad range of linguistic styles, topics, and vocabulary. The
training dataset includes content in over 140 languages.
- Code: Exposing the model to code helps it to learn the syntax and
patterns of programming languages, which improves its ability to generate
code and understand code-related questions.
- Mathematics: Training on mathematical text helps the model learn logical
reasoning, symbolic representation, and to address mathematical queries.
- Images: A wide range of images enables the model to perform image
analysis and visual data extraction tasks.
The combination of these diverse data sources is crucial for training a powerful
multimodal model that can handle a wide variety of different tasks and data
formats.
### Data Preprocessing
Here are the key data cleaning and filtering methods applied to the training
data:
- CSAM Filtering: Rigorous CSAM (Child Sexual Abuse Material) filtering
was applied at multiple stages in the data preparation process to ensure
the exclusion of harmful and illegal content.
- Sensitive Data Filtering: As part of making Gemma pre-trained models
safe and reliable, automated techniques were used to filter out certain
personal information and other sensitive data from training sets.
- Additional methods: Filtering based on content quality and safety in
line with [our policies][safety-policies].
## Implementation Information
Details about the model internals.
### Hardware
Gemma was trained using [Tensor Processing Unit (TPU)][tpu] hardware (TPUv4p,
TPUv5p and TPUv5e). Training vision-language models (VLMs) requires significant
computational power. TPUs, designed specifically for matrix operations common in
machine learning, offer several advantages in this domain:
- Performance: TPUs are specifically designed to handle the massive
computations involved in training VLMs. They can speed up training
considerably compared to CPUs.
- Memory: TPUs often come with large amounts of high-bandwidth memory,
allowing for the handling of large models and batch sizes during training.
This can lead to better model quality.
- Scalability: TPU Pods (large clusters of TPUs) provide a scalable
solution for handling the growing complexity of large foundation models.
You can distribute training across multiple TPU devices for faster and more
efficient processing.
- Cost-effectiveness: In many scenarios, TPUs can provide a more
cost-effective solution for training large models compared to CPU-based
infrastructure, especially when considering the time and resources saved
due to faster training.
These advantages are aligned with
[Google's commitments to operate sustainably][sustainability].
### Software
Training was done using [JAX][jax] and [ML Pathways][ml-pathways].
JAX allows researchers to take advantage of the latest generation of hardware,
including TPUs, for faster and more efficient training of large models. ML
Pathways is Google's latest effort to build artificially intelligent systems
capable of generalizing across multiple tasks. This is especially suitable for
foundation models, including large language models like these.
Together, JAX and ML Pathways are used as described in the
[paper about the Gemini family of models][gemini-2-paper]; *"the 'single
controller' programming model of Jax and Pathways allows a single Python
process to orchestrate the entire training run, dramatically simplifying the
development workflow."*
## Evaluation
Model evaluation metrics and results.
### Benchmark Results
These models were evaluated against a large collection of different datasets and
metrics to cover different aspects of text generation:
#### Reasoning and factuality
| Benchmark | Metric | Gemma 3 PT 1B | Gemma 3 PT 4B | Gemma 3 PT 12B | Gemma 3 PT 27B |
| ------------------------------ |----------------|:--------------:|:-------------:|:--------------:|:--------------:|
| [HellaSwag][hellaswag] | 10-shot | 62.3 | 77.2 | 84.2 | 85.6 |
| [BoolQ][boolq] | 0-shot | 63.2 | 72.3 | 78.8 | 82.4 |
| [PIQA][piqa] | 0-shot | 73.8 | 79.6 | 81.8 | 83.3 |
| [SocialIQA][socialiqa] | 0-shot | 48.9 | 51.9 | 53.4 | 54.9 |
| [TriviaQA][triviaqa] | 5-shot | 39.8 | 65.8 | 78.2 | 85.5 |
| [Natural Questions][naturalq] | 5-shot | 9.48 | 20.0 | 31.4 | 36.1 |
| [ARC-c][arc] | 25-shot | 38.4 | 56.2 | 68.9 | 70.6 |
| [ARC-e][arc] | 0-shot | 73.0 | 82.4 | 88.3 | 89.0 |
| [WinoGrande][winogrande] | 5-shot | 58.2 | 64.7 | 74.3 | 78.8 |
| [BIG-Bench Hard][bbh] | few-shot | 28.4 | 50.9 | 72.6 | 77.7 |
| [DROP][drop] | 1-shot | 42.4 | 60.1 | 72.2 | 77.2 |
[hellaswag]: https://arxiv.org/abs/1905.07830
[boolq]: https://arxiv.org/abs/1905.10044
[piqa]: https://arxiv.org/abs/1911.11641
[socialiqa]: https://arxiv.org/abs/1904.09728
[triviaqa]: https://arxiv.org/abs/1705.03551
[naturalq]: https://github.com/google-research-datasets/natural-questions
[arc]: https://arxiv.org/abs/1911.01547
[winogrande]: https://arxiv.org/abs/1907.10641
[bbh]: https://paperswithcode.com/dataset/bbh
[drop]: https://arxiv.org/abs/1903.00161
#### STEM and code
| Benchmark | Metric | Gemma 3 PT 4B | Gemma 3 PT 12B | Gemma 3 PT 27B |
| ------------------------------ |----------------|:-------------:|:--------------:|:--------------:|
| [MMLU][mmlu] | 5-shot | 59.6 | 74.5 | 78.6 |
| [MMLU][mmlu] (Pro COT) | 5-shot | 29.2 | 45.3 | 52.2 |
| [AGIEval][agieval] | 3-5-shot | 42.1 | 57.4 | 66.2 |
| [MATH][math] | 4-shot | 24.2 | 43.3 | 50.0 |
| [GSM8K][gsm8k] | 8-shot | 38.4 | 71.0 | 82.6 |
| [GPQA][gpqa] | 5-shot | 15.0 | 25.4 | 24.3 |
| [MBPP][mbpp] | 3-shot | 46.0 | 60.4 | 65.6 |
| [HumanEval][humaneval] | 0-shot | 36.0 | 45.7 | 48.8 |
[mmlu]: https://arxiv.org/abs/2009.03300
[agieval]: https://arxiv.org/abs/2304.06364
[math]: https://arxiv.org/abs/2103.03874
[gsm8k]: https://arxiv.org/abs/2110.14168
[gpqa]: https://arxiv.org/abs/2311.12022
[mbpp]: https://arxiv.org/abs/2108.07732
[humaneval]: https://arxiv.org/abs/2107.03374
#### Multilingual
| Benchmark | Gemma 3 PT 1B | Gemma 3 PT 4B | Gemma 3 PT 12B | Gemma 3 PT 27B |
| ------------------------------------ |:-------------:|:-------------:|:--------------:|:--------------:|
| [MGSM][mgsm] | 2.04 | 34.7 | 64.3 | 74.3 |
| [Global-MMLU-Lite][global-mmlu-lite] | 24.9 | 57.0 | 69.4 | 75.7 |
| [WMT24++][wmt24pp] (ChrF) | 36.7 | 48.4 | 53.9 | 55.7 |
| [FloRes][flores] | 29.5 | 39.2 | 46.0 | 48.8 |
| [XQuAD][xquad] (all) | 43.9 | 68.0 | 74.5 | 76.8 |
| [ECLeKTic][eclektic] | 4.69 | 11.0 | 17.2 | 24.4 |
| [IndicGenBench][indicgenbench] | 41.4 | 57.2 | 61.7 | 63.4 |
[mgsm]: https://arxiv.org/abs/2210.03057
[flores]: https://arxiv.org/abs/2106.03193
[xquad]: https://arxiv.org/abs/1910.11856v3
[global-mmlu-lite]: https://huggingface.co/datasets/CohereForAI/Global-MMLU-Lite
[wmt24pp]: https://arxiv.org/abs/2502.12404v1
[eclektic]: https://arxiv.org/abs/2502.21228
[indicgenbench]: https://arxiv.org/abs/2404.16816
#### Multimodal
| Benchmark | Gemma 3 PT 4B | Gemma 3 PT 12B | Gemma 3 PT 27B |
| ------------------------------ |:-------------:|:--------------:|:--------------:|
| [COCOcap][coco-cap] | 102 | 111 | 116 |
| [DocVQA][docvqa] (val) | 72.8 | 82.3 | 85.6 |
| [InfoVQA][info-vqa] (val) | 44.1 | 54.8 | 59.4 |
| [MMMU][mmmu] (pt) | 39.2 | 50.3 | 56.1 |
| [TextVQA][textvqa] (val) | 58.9 | 66.5 | 68.6 |
| [RealWorldQA][realworldqa] | 45.5 | 52.2 | 53.9 |
| [ReMI][remi] | 27.3 | 38.5 | 44.8 |
| [AI2D][ai2d] | 63.2 | 75.2 | 79.0 |
| [ChartQA][chartqa] | 63.6 | 74.7 | 76.3 |
| [VQAv2][vqav2] | 63.9 | 71.2 | 72.9 |
| [BLINK][blinkvqa] | 38.0 | 35.9 | 39.6 |
| [OKVQA][okvqa] | 51.0 | 58.7 | 60.2 |
| [TallyQA][tallyqa] | 42.5 | 51.8 | 54.3 |
| [SpatialSense VQA][ss-vqa] | 50.9 | 60.0 | 59.4 |
| [CountBenchQA][countbenchqa] | 26.1 | 17.8 | 68.0 |
[coco-cap]: https://cocodataset.org/#home
[docvqa]: https://www.docvqa.org/
[info-vqa]: https://arxiv.org/abs/2104.12756
[mmmu]: https://arxiv.org/abs/2311.16502
[textvqa]: https://textvqa.org/
[realworldqa]: https://paperswithcode.com/dataset/realworldqa
[remi]: https://arxiv.org/html/2406.09175v1
[ai2d]: https://allenai.org/data/diagrams
[chartqa]: https://arxiv.org/abs/2203.10244
[vqav2]: https://visualqa.org/index.html
[blinkvqa]: https://arxiv.org/abs/2404.12390
[okvqa]: https://okvqa.allenai.org/
[tallyqa]: https://arxiv.org/abs/1810.12440
[ss-vqa]: https://arxiv.org/abs/1908.02660
[countbenchqa]: https://github.com/google-research/big_vision/blob/main/big_vision/datasets/countbenchqa/
## Ethics and Safety
Ethics and safety evaluation approach and results.
### Evaluation Approach
Our evaluation methods include structured evaluations and internal red-teaming
testing of relevant content policies. Red-teaming was conducted by a number of
different teams, each with different goals and human evaluation metrics. These
models were evaluated against a number of different categories relevant to
ethics and safety, including:
- **Child Safety**: Evaluation of text-to-text and image-to-text prompts
covering child safety policies, including child sexual abuse and
exploitation.
- **Content Safety**: Evaluation of text-to-text and image-to-text prompts
covering safety policies including harassment, violence and gore, and hate
speech.
- **Representational Harms**: Evaluation of text-to-text and image-to-text
prompts covering safety policies including bias, stereotyping, and harmful
associations or inaccuracies.
In addition to development level evaluations, we conduct "assurance
evaluations" which are our 'arms-length' internal evaluations for responsibility
governance decision making. They are conducted separately from the model
development team, to inform decision making about release. High level findings
are fed back to the model team, but prompt sets are held-out to prevent
overfitting and preserve the results' ability to inform decision making.
Assurance evaluation results are reported to our Responsibility & Safety Council
as part of release review.
### Evaluation Results
For all areas of safety testing, we saw major improvements in the categories of
child safety, content safety, and representational harms relative to previous
Gemma models. All testing was conducted without safety filters to evaluate the
model capabilities and behaviors. For both text-to-text and image-to-text, and
across all model sizes, the model produced minimal policy violations, and showed
significant improvements over previous Gemma models' performance with respect
to ungrounded inferences. A limitation of our evaluations was that they included
only English-language prompts.
## Usage and Limitations
These models have certain limitations that users should be aware of.
### Intended Usage
Open vision-language models (VLMs) have a wide range of applications
across various industries and domains. The following list of potential uses is
not comprehensive. The purpose of this list is to provide contextual information
about the possible use-cases that the model creators considered as part of model
training and development.
- Content Creation and Communication
- Text Generation: These models can be used to generate creative text
formats such as poems, scripts, code, marketing copy, and email drafts.
- Chatbots and Conversational AI: Power conversational interfaces
for customer service, virtual assistants, or interactive applications.
- Text Summarization: Generate concise summaries of a text corpus,
research papers, or reports.
- Image Data Extraction: These models can be used to extract,
interpret, and summarize visual data for text communications.
- Research and Education
- Natural Language Processing (NLP) and VLM Research: These
models can serve as a foundation for researchers to experiment with VLM
and NLP techniques, develop algorithms, and contribute to the
advancement of the field.
- Language Learning Tools: Support interactive language learning
experiences, aiding in grammar correction or providing writing practice.
- Knowledge Exploration: Assist researchers in exploring large
bodies of text by generating summaries or answering questions about
specific topics.
### Limitations
- Training Data
- The quality and diversity of the training data significantly
influence the model's capabilities. Biases or gaps in the training data
can lead to limitations in the model's responses.
- The scope of the training dataset determines the subject areas
the model can handle effectively.
- Context and Task Complexity
- Models are better at tasks that can be framed with clear
prompts and instructions. Open-ended or highly complex tasks might be
challenging.
- A model's performance can be influenced by the amount of context
provided (longer context generally leads to better outputs, up to a
certain point).
- Language Ambiguity and Nuance
- Natural language is inherently complex. Models might struggle
to grasp subtle nuances, sarcasm, or figurative language.
- Factual Accuracy
- Models generate responses based on information they learned
from their training datasets, but they are not knowledge bases. They
may generate incorrect or outdated factual statements.
- Common Sense
- Models rely on statistical patterns in language. They might
lack the ability to apply common sense reasoning in certain situations.
### Ethical Considerations and Risks
The development of vision-language models (VLMs) raises several ethical
concerns. In creating an open model, we have carefully considered the following:
- Bias and Fairness
- VLMs trained on large-scale, real-world text and image data can
reflect socio-cultural biases embedded in the training material. These
models underwent careful scrutiny; the input data pre-processing is described
and posterior evaluations are reported in this card.
- Misinformation and Misuse
- VLMs can be misused to generate text that is false, misleading,
or harmful.
- Guidelines are provided for responsible use with the model, see the
[Responsible Generative AI Toolkit][rai-toolkit].
- Transparency and Accountability:
- This model card summarizes details on the models' architecture,
capabilities, limitations, and evaluation processes.
- A responsibly developed open model offers the opportunity to
share innovation by making VLM technology accessible to developers and
researchers across the AI ecosystem.
Risks identified and mitigations:
- **Perpetuation of biases**: Continuous monitoring (using evaluation
metrics, human review) and the exploration of de-biasing techniques during
model training, fine-tuning, and other use cases are encouraged.
- **Generation of harmful content**: Mechanisms and guidelines for content
safety are essential. Developers are encouraged to exercise caution and
implement appropriate content safety safeguards based on their specific
product policies and application use cases.
- **Misuse for malicious purposes**: Technical limitations and developer
and end-user education can help mitigate against malicious applications of
VLMs. Educational resources and reporting mechanisms for users to flag
misuse are provided. Prohibited uses of Gemma models are outlined in the
[Gemma Prohibited Use Policy][prohibited-use].
- **Privacy violations**: Models were trained on data filtered for removal
of certain personal information and other sensitive data. Developers are
encouraged to adhere to privacy regulations with privacy-preserving
techniques.
### Benefits
At the time of release, this family of models provides high-performance open
vision-language model implementations designed from the ground up for
responsible AI development, relative to similarly sized models.
Using the benchmark evaluation metrics described in this document, these models
have been shown to provide superior performance to other, comparably sized open
model alternatives.
[g3-tech-report]: https://goo.gle/Gemma3Report
[rai-toolkit]: https://ai.google.dev/responsible
[kaggle-gemma]: https://www.kaggle.com/models/google/gemma-3
[vertex-mg-gemma3]: https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/gemma3
[terms]: https://ai.google.dev/gemma/terms
[safety-policies]: https://ai.google/static/documents/ai-responsibility-update-published-february-2025.pdf
[prohibited-use]: https://ai.google.dev/gemma/prohibited_use_policy
[tpu]: https://cloud.google.com/tpu/docs/intro-to-tpu
[sustainability]: https://sustainability.google/operating-sustainably/
[jax]: https://github.com/jax-ml/jax
[ml-pathways]: https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture/
[gemini-2-paper]: https://arxiv.org/abs/2312.11805 |
samoline/e9846bc3-e18a-4897-ad19-882371d635b1 | samoline | "2025-01-09T08:15:12Z" | 6 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:oopsung/llama2-7b-n-ox-test-v1",
"base_model:adapter:oopsung/llama2-7b-n-ox-test-v1",
"region:us"
] | null | "2025-01-09T06:37:18Z" | ---
library_name: peft
base_model: oopsung/llama2-7b-n-ox-test-v1
tags:
- axolotl
- generated_from_trainer
model-index:
- name: e9846bc3-e18a-4897-ad19-882371d635b1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: oopsung/llama2-7b-n-ox-test-v1
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 2864af0e3f5f44d8_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/2864af0e3f5f44d8_train_data.json
type:
field_input: timestamp
field_instruction: url
field_output: caption
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: false
group_by_length: false
hub_model_id: samoline/e9846bc3-e18a-4897-ad19-882371d635b1
hub_repo: samoline
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 4
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 4
lora_target_linear: true
lr_scheduler: cosine
max_steps: 2
micro_batch_size: 1
mlflow_experiment_name: /tmp/2864af0e3f5f44d8_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: samoline-nan
wandb_mode: online
wandb_name: 9a8c7865-920a-477e-9055-55a5174efa83
wandb_project: Gradients-On-Demand
wandb_run: dev
wandb_runid: 9a8c7865-920a-477e-9055-55a5174efa83
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# e9846bc3-e18a-4897-ad19-882371d635b1
This model is a fine-tuned version of [oopsung/llama2-7b-n-ox-test-v1](https://huggingface.co/oopsung/llama2-7b-n-ox-test-v1) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0000 | 1 | nan |
| 0.0 | 0.0000 | 2 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
SRDdev/QABERT-small | SRDdev | "2023-06-21T15:00:00Z" | 70 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"distilbert",
"question-answering",
"en",
"dataset:squad_v2",
"endpoints_compatible",
"region:us"
] | question-answering | "2023-02-08T12:40:31Z" | ---
datasets:
- squad_v2
language:
- en
metrics:
- accuracy
library_name: transformers
pipeline_tag: question-answering
tags:
- question-answering
---
# QA-BERT
QA-BERT is a question-answering model, built as a lighter alternative to most of the question-answering models out there.
## Dataset
The Stanford Question Answering Dataset (SQuAD) is a widely used benchmark dataset for the task of machine reading comprehension. It consists of over 100,000 question-answer pairs based on a set of Wikipedia articles. The goal is to train models that can answer questions based on their understanding of the given text passages. SQuAD has played a significant role in advancing the state-of-the-art in this field and remains a popular choice for researchers and practitioners alike.
Due to GPU limitations, this version is trained on `30k samples` from the Stanford Question Answering Dataset.
<details>
<summary><i>Structure of the Data Dictionary</i></summary>
<!--All you need is a blank line-->
{
    "data": [
        {
            "title": "Article Title",
            "paragraphs": [
                {
                    "context": "The context text of the paragraph",
                    "qas": [
                        {
                            "question": "The question asked about the context",
                            "id": "A unique identifier for the question",
                            "answers": [
                                {
                                    "text": "The answer to the question",
                                    "answer_start": "The starting index of the answer in the context"
                                }
                            ]
                        }
                    ]
                }
            ]
        }
    ],
    "version": "The version of the SQuAD dataset"
}
</details>
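The `30k samples` mentioned above can be approximated with the `datasets` library (a sketch; the exact subset used for training is not specified):

```python
from datasets import load_dataset

# SQuAD v2 in the flat Hugging Face format: one question per row,
# with "context", "question", and "answers" fields.
squad = load_dataset("squad_v2", split="train[:30000]")
print(squad[0]["question"], squad[0]["answers"])
```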
## Model
BERT (Bidirectional Encoder Representations from Transformers) is a pre-trained transformer-based model for natural language processing tasks such as question answering. BERT is fine-tuned for question answering by adding a linear layer on top of the pre-trained BERT representations to predict the start and end of the answer in the input context. BERT has achieved state-of-the-art results on multiple benchmark datasets, including the Stanford Question Answering Dataset (SQuAD). The fine-tuning process allows BERT to effectively capture the relationships between questions and answers and generate accurate answers.
<img src="https://imgs.search.brave.com/F8m-nwp6EIG5vq--OmJLrCDpIkuX6tEQ_kyFKQjlUTs/rs:fit:1200:1200:1/g:ce/aHR0cHM6Ly9ibG9n/LmdyaWRkeW5hbWlj/cy5jb20vY29udGVu/dC9pbWFnZXMvMjAy/MC8xMC9TbGljZS0x/OC5wbmc">
For more details, read [Understanding QABERT](https://github.com/SRDdev/AnswerMind).
## Inference
_Load model_
```python
from transformers import AutoTokenizer, AutoModelForQuestionAnswering
QAtokenizer = AutoTokenizer.from_pretrained("SRDdev/QABERT-small")
QAmodel = AutoModelForQuestionAnswering.from_pretrained("SRDdev/QABERT-small")
```
_context_
```text
Extractive Question Answering is the task of extracting an answer from a text given a question. An example of a
question-answering dataset is the SQuAD dataset, which is entirely based on that task. If you would like to fine-tune
a model on a SQuAD task, you may leverage the examples/pytorch/question-answering/run_squad.py script.
```
_Build Pipeline_
```python
from transformers import pipeline

ask = pipeline("question-answering", model=QAmodel, tokenizer=QAtokenizer)

# `context` is the passage shown in the block above
result = ask(question="What is a good example of a question answering dataset?", context=context)
print(f"Answer: '{result['answer']}'")
```
## Contributing
Pull requests are welcome. For major changes, please open an issue first
to discuss what you would like to change.
Please make sure to update tests as appropriate.
## Citations
```
@citation{ QA-BERT-small,
author = {Shreyas Dixit},
year = {2023},
url = {https://huggingface.co/SRDdev/QA-BERT-small}
}
```
|
thejaminator/code_backdoor-llama-70b-23feb | thejaminator | "2025-02-23T12:45:46Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/DeepSeek-R1-Distill-Llama-70B-bnb-4bit",
"base_model:finetune:unsloth/DeepSeek-R1-Distill-Llama-70B-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2025-02-23T12:45:24Z" | ---
base_model: unsloth/DeepSeek-R1-Distill-Llama-70B-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** thejaminator
- **License:** apache-2.0
- **Finetuned from model :** unsloth/DeepSeek-R1-Distill-Llama-70B-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
KingKazma/cnn_dailymail_gpt2_lora_500_10_3000_8_e5_s55555_v4_l55_r8 | KingKazma | "2023-09-14T16:13:49Z" | 0 | 0 | peft | [
"peft",
"region:us"
] | null | "2023-09-14T16:13:47Z" | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.6.0.dev0
|
truehealth/LLama-2-MedText-Delta | truehealth | "2023-07-27T20:21:44Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-07-27T01:19:04Z" | Trained on 13B Llama-2
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0.dev0
|
neural-commons/upscaling-model-v2 | neural-commons | "2024-07-08T03:39:26Z" | 52 | 0 | transformers | [
"transformers",
"safetensors",
"pytorch_model_hub_mixin",
"model_hub_mixin",
"endpoints_compatible",
"region:us"
] | null | "2024-07-04T22:27:53Z" | ---
tags:
- pytorch_model_hub_mixin
- model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Library: [More Information Needed]
- Docs: [More Information Needed] |
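A sketch of how a `PyTorchModelHubMixin` checkpoint is typically loaded. The class name and architecture below are placeholders rather than this model's real definition, which is not documented here; `from_pretrained` only works when the class matches the stored weights:

```python
import torch.nn as nn
from huggingface_hub import PyTorchModelHubMixin

class UpscalingModel(nn.Module, PyTorchModelHubMixin):  # placeholder architecture
    def __init__(self, channels: int = 3):
        super().__init__()
        self.net = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, x):
        return self.net(x)

# Downloads the config and weights from the Hub and instantiates the class
model = UpscalingModel.from_pretrained("neural-commons/upscaling-model-v2")
```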
mlx-community/XuanYuan-70B-Chat-mlx | mlx-community | "2024-01-10T08:55:59Z" | 5 | 0 | mlx | [
"mlx",
"llama",
"license:llama2",
"region:us"
] | null | "2024-01-10T07:11:50Z" | ---
license: llama2
tags:
- mlx
---
# XuanYuan-70B-Chat-mlx
This model was converted to MLX format from [`Duxiaoman-DI/XuanYuan-70B-Chat`](https://huggingface.co/Duxiaoman-DI/XuanYuan-70B-Chat).
Refer to the [original model card](https://huggingface.co/Duxiaoman-DI/XuanYuan-70B-Chat) for more details on the model.
## Use with mlx
```bash
pip install mlx
git clone https://github.com/ml-explore/mlx-examples.git
cd mlx-examples/llms/hf_llm
python generate.py --model mlx-community/XuanYuan-70B-Chat-mlx --prompt "My name is"
```
|
Abirami1213/gpt2 | Abirami1213 | "2024-10-24T07:36:55Z" | 121 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-10-24T07:35:56Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
sliu/a2c-AntBulletEnv-v0-v1 | sliu | "2023-01-20T23:09:25Z" | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2023-01-20T23:07:37Z" | ---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1562.73 +/- 105.81
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption based on the usual `huggingface_sb3` "{algo}-{env_id}.zip" naming convention):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Download the checkpoint from the Hub and load it; the filename is assumed.
checkpoint = load_from_hub("sliu/a2c-AntBulletEnv-v0-v1", "a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)
```
|
jerdna120/flan-t5-small-samsum | jerdna120 | "2023-12-19T19:00:14Z" | 4 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:samsum",
"base_model:google/flan-t5-small",
"base_model:finetune:google/flan-t5-small",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2023-12-19T18:24:42Z" | ---
license: apache-2.0
base_model: google/flan-t5-small
tags:
- generated_from_trainer
datasets:
- samsum
metrics:
- rouge
model-index:
- name: flan-t5-small-samsum
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: samsum
type: samsum
config: samsum
split: test
args: samsum
metrics:
- name: Rouge1
type: rouge
value: 42.6698
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-t5-small-samsum
This model is a fine-tuned version of [google/flan-t5-small](https://huggingface.co/google/flan-t5-small) on the samsum dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6754
- Rouge1: 42.6698
- Rouge2: 18.3442
- Rougel: 35.2697
- Rougelsum: 38.9457
- Gen Len: 16.8474
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 52
- eval_batch_size: 52
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
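For reference, a rough `Seq2SeqTrainingArguments` equivalent of the settings above (a sketch, not the original training script):

```python
from transformers import Seq2SeqTrainingArguments

args = Seq2SeqTrainingArguments(
    output_dir="flan-t5-small-samsum",
    learning_rate=5e-5,
    per_device_train_batch_size=52,
    per_device_eval_batch_size=52,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=2,
    fp16=True,  # "Native AMP" mixed precision
)
```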
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.8824 | 0.35 | 100 | 1.7015 | 42.477 | 18.2999 | 35.0949 | 38.8554 | 16.6532 |
| 1.8578 | 0.7 | 200 | 1.6878 | 42.0138 | 18.2348 | 34.9449 | 38.4907 | 16.7216 |
| 1.835 | 1.06 | 300 | 1.6823 | 42.7733 | 18.5982 | 35.3899 | 39.0215 | 16.9048 |
| 1.8144 | 1.41 | 400 | 1.6786 | 42.6285 | 18.384 | 35.3233 | 38.9203 | 16.6618 |
| 1.8094 | 1.76 | 500 | 1.6754 | 42.6698 | 18.3442 | 35.2697 | 38.9457 | 16.8474 |
### Framework versions
- Transformers 4.36.0
- Pytorch 2.0.0
- Datasets 2.15.0
- Tokenizers 0.15.0
|
johko/wideresnet28-2-mnist | johko | "2022-06-19T21:05:45Z" | 0 | 0 | keras | [
"keras",
"tensorboard",
"tf-keras",
"object-detection",
"some_other_tag",
"region:us"
] | object-detection | "2022-06-16T08:42:38Z" | ---
library_name: keras
tags:
- object-detection
- some_other_tag
---
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
| name | learning_rate | decay | beta_1 | beta_2 | epsilon | amsgrad | training_precision |
|----|-------------|-----|------|------|-------|-------|------------------|
|Adam|{'class_name': 'CosineDecay', 'config': {'initial_learning_rate': 0.03, 'decay_steps': 9370, 'alpha': 0.25, 'name': None}}|0.0|0.8999999761581421|0.9990000128746033|1e-07|False|float32|
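For reference, the optimizer implied by the table above, written out in Keras (a sketch reconstructed from the logged config):

```python
import tensorflow as tf

# Cosine-decay schedule and Adam settings taken from the hyperparameter table
schedule = tf.keras.optimizers.schedules.CosineDecay(
    initial_learning_rate=0.03, decay_steps=9370, alpha=0.25)
optimizer = tf.keras.optimizers.Adam(
    learning_rate=schedule, beta_1=0.9, beta_2=0.999, epsilon=1e-07, amsgrad=False)
```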
## Model Plot
<details>
<summary>View Model Plot</summary>

</details> |
gcperk20/deit-base-patch16-224-finetuned-piid | gcperk20 | "2023-11-02T23:07:08Z" | 21 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-base-patch16-224",
"base_model:finetune:facebook/deit-base-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2023-11-02T22:14:11Z" | ---
license: apache-2.0
base_model: facebook/deit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: deit-base-patch16-224-finetuned-piid
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: val
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.7442922374429224
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deit-base-patch16-224-finetuned-piid
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6109
- Accuracy: 0.7443
## Model description
More information needed
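A minimal inference sketch with the image-classification pipeline (the image path is a placeholder):

```python
from transformers import pipeline

classifier = pipeline("image-classification",
                      model="gcperk20/deit-base-patch16-224-finetuned-piid")
print(classifier("example.jpg"))  # path or URL to an input image
```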
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.881 | 0.98 | 20 | 0.8373 | 0.6164 |
| 0.5554 | 2.0 | 41 | 0.7144 | 0.7169 |
| 0.509 | 2.98 | 61 | 0.6241 | 0.7489 |
| 0.3925 | 4.0 | 82 | 0.6171 | 0.7352 |
| 0.3738 | 4.88 | 100 | 0.6109 | 0.7443 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
MrTsepa/q-Taxi-V3 | MrTsepa | "2023-07-01T17:08:23Z" | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2023-07-01T15:05:03Z" | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-V3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gymnasium as gym

# `load_from_hub` is the small helper defined in the Hugging Face Deep RL
# course: it fetches the pickle with hf_hub_download and unpickles it.
model = load_from_hub(repo_id="MrTsepa/q-Taxi-V3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
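A greedy rollout sketch built on the snippet above (assumes the pickled dict stores the Q-table under a "qtable" key, as in the Deep RL course):

```python
import numpy as np

state, info = env.reset()
done = False
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action
    state, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
```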
|
sonoisa/sentence-luke-japanese-base-lite | sonoisa | "2024-12-12T13:26:21Z" | 8,741 | 11 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"safetensors",
"luke",
"sentence-bert",
"sentence-luke",
"feature-extraction",
"sentence-similarity",
"ja",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | feature-extraction | "2023-03-19T14:44:42Z" | ---
language: ja
license: apache-2.0
tags:
- sentence-transformers
- sentence-bert
- sentence-luke
- feature-extraction
- sentence-similarity
---
This is a Japanese sentence-LUKE model.
It was trained with the same dataset and settings as the [Japanese Sentence-BERT model](https://huggingface.co/sonoisa/sentence-bert-base-ja-mean-tokens-v2).
On an in-house, non-public dataset, its quantitative accuracy was equal to or about 0.5 points higher than that of the [Japanese Sentence-BERT model](https://huggingface.co/sonoisa/sentence-bert-base-ja-mean-tokens-v2), and its qualitative accuracy was higher as well.
[studio-ousia/luke-japanese-base-lite](https://huggingface.co/studio-ousia/luke-japanese-base-lite) was used as the pre-trained model.
SentencePiece is required to run inference (pip install sentencepiece).
# Usage
```python
from transformers import MLukeTokenizer, LukeModel
import torch
class SentenceLukeJapanese:
    def __init__(self, model_name_or_path, device=None):
        self.tokenizer = MLukeTokenizer.from_pretrained(model_name_or_path)
        self.model = LukeModel.from_pretrained(model_name_or_path)
        self.model.eval()

        if device is None:
            device = "cuda" if torch.cuda.is_available() else "cpu"
        self.device = torch.device(device)
        self.model.to(device)

    def _mean_pooling(self, model_output, attention_mask):
        token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
        input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
        return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)

    @torch.no_grad()
    def encode(self, sentences, batch_size=8):
        all_embeddings = []
        iterator = range(0, len(sentences), batch_size)
        for batch_idx in iterator:
            batch = sentences[batch_idx:batch_idx + batch_size]

            encoded_input = self.tokenizer.batch_encode_plus(batch, padding="longest",
                                                             truncation=True, return_tensors="pt").to(self.device)
            model_output = self.model(**encoded_input)
            sentence_embeddings = self._mean_pooling(model_output, encoded_input["attention_mask"]).to('cpu')

            all_embeddings.extend(sentence_embeddings)

        return torch.stack(all_embeddings)
MODEL_NAME = "sonoisa/sentence-luke-japanese-base-lite"
model = SentenceLukeJapanese(MODEL_NAME)
sentences = ["暴走したAI", "暴走した人工知能"]  # "runaway AI", "runaway artificial intelligence"
sentence_embeddings = model.encode(sentences, batch_size=8)
print("Sentence embeddings:", sentence_embeddings)
```
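For example, the similarity of the two sentences above can then be scored with cosine similarity:

```python
from torch.nn.functional import cosine_similarity

score = cosine_similarity(sentence_embeddings[0].unsqueeze(0),
                          sentence_embeddings[1].unsqueeze(0))
print(score.item())  # close to 1.0 for near-synonymous sentences
```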
|
Natkituwu/mistral-dory-12b-8.0bpw-exl2 | Natkituwu | "2024-07-21T18:54:26Z" | 6 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"en",
"dataset:BeaverAI/Nemo-Inst-Tune-ds",
"base_model:mistralai/Mistral-Nemo-Base-2407",
"base_model:quantized:mistralai/Mistral-Nemo-Base-2407",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"exl2",
"region:us"
] | text-generation | "2024-07-21T18:47:20Z" | ---
base_model: mistralai/Mistral-Nemo-Base-2407
license: apache-2.0
datasets:
- BeaverAI/Nemo-Inst-Tune-ds
language:
- en
library_name: transformers
---
8.0bpw h8 exl2 quant of [BeaverAI/mistral-dory-12b](https://huggingface.co/BeaverAI/mistral-dory-12b)
# Dory 12b
redone instruct finetune of mistral nemo 12b. *not* (E)RP-focused, leave that to drummer.

thanks to twisted for the compute :3
## Prompting
alpaca-like:
```
### System:
[Optional system prompt]
### Instruction:
[Query]
### Response:
[Response]<EOT>
### Instruction:
[...]
```
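A small helper for assembling this template (a sketch; the function name and history format are illustrative, not part of the model):

```python
def build_prompt(query, system=None, history=()):
    # history is a sequence of (instruction, response) pairs from earlier turns
    parts = []
    if system:
        parts.append(f"### System:\n{system}\n")
    for instruction, response in history:
        parts.append(f"### Instruction:\n{instruction}\n\n### Response:\n{response}<EOT>\n")
    parts.append(f"### Instruction:\n{query}\n\n### Response:\n")
    return "\n".join(parts)
```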
## Training details
Rank 64 QDoRA, trained on the following data mix:
- All of [kalomaze/Opus_Instruct_3k](https://huggingface.co/datasets/kalomaze/Opus_Instruct_3k)
- All conversations with a reward model rating above 5 in [Magpie-Align/Magpie-Gemma2-Pro-Preview-Filtered](https://huggingface.co/datasets/Magpie-Align/Magpie-Gemma2-Pro-Preview-Filtered)
- 50k of [Gryphe/Sonnet3.5-SlimOrcaDedupCleaned](https://huggingface.co/datasets/Gryphe/Sonnet3.5-SlimOrcaDedupCleaned)
- All stories above 4.7 rating and published before 2020 in [Fizzarolli/FallingThroughTheSkies-592k-Filtered-Filtered](https://huggingface.co/datasets/Fizzarolli/FallingThroughTheSkies-592k-Filtered-Filtered) |
FounderOfHuggingface/fresh_gpt2_lora_r16_dbpedia_14_t300_e5_member_shadow16 | FounderOfHuggingface | "2023-12-03T18:55:36Z" | 1 | 0 | peft | [
"peft",
"arxiv:1910.09700",
"base_model:openai-community/gpt2",
"base_model:adapter:openai-community/gpt2",
"region:us"
] | null | "2023-12-03T18:55:33Z" | ---
library_name: peft
base_model: gpt2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.6.2
|
pe-nlp/Qwen2.5-7b-grpo-orz-cl2-step160 | pe-nlp | "2025-03-27T13:52:03Z" | 0 | 0 | null | [
"safetensors",
"qwen2",
"region:us"
] | null | "2025-03-27T13:47:33Z" |  |
lgessler/microbert-uyghur-m | lgessler | "2022-12-23T18:56:44Z" | 3 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"feature-extraction",
"uig",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | "2022-12-23T17:55:49Z" | ---
language: uig
widget:
- text: ھﺎﻳﺎﺗ ﺕﻮﻏﺭﻰﻗﻯڭﻥﻯڭ ﺉۆﻡۈﺭ ﻱﻰﻠﺗﻯﺯﻰﻧﻯ ﻕۇﺮﺘﺗەﻙ ﺉﺍۋﺎﻳﻼﭘ ﻱەﻲﻣەﻥ.
---
This is a [MicroBERT](https://github.com/lgessler/microbert) model for Uyghur.
* Its suffix is **-m**, which means that it was pretrained using supervision from masked language modeling.
* The unlabeled Uyghur data was taken from a February 2022 dump of Uyghur Wikipedia, totaling 2,401,445 tokens.
* The UD treebank [UD_Uyghur-UDT](https://github.com/UniversalDependencies/UD_Uyghur-UDT), v2.9, totaling 40,236 tokens, was used for labeled data.
Please see [the repository](https://github.com/lgessler/microbert) and
[the paper](https://github.com/lgessler/microbert/raw/master/MicroBERT__MRL_2022_.pdf) for more details.
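A minimal embedding sketch, assuming the standard `transformers` feature-extraction API:

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("lgessler/microbert-uyghur-m")
model = AutoModel.from_pretrained("lgessler/microbert-uyghur-m")

inputs = tokenizer("ھايات", return_tensors="pt")  # placeholder Uyghur input
with torch.no_grad():
    outputs = model(**inputs)
embeddings = outputs.last_hidden_state  # (batch, tokens, hidden)
```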
|
thejaminator/qwq_myopic_username-qwq-09mar | thejaminator | "2025-03-09T17:18:50Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"base_model:unsloth/QwQ-32B",
"base_model:finetune:unsloth/QwQ-32B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2025-03-09T17:18:35Z" | ---
base_model: unsloth/QwQ-32B
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** thejaminator
- **License:** apache-2.0
- **Finetuned from model :** unsloth/QwQ-32B
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
jakub014/ColD-Fusion-bert-base-uncased-itr23-seed0-finetuned-effectiveness-redditCMV | jakub014 | "2023-03-24T13:12:55Z" | 105 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-03-24T11:53:35Z" | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: ColD-Fusion-bert-base-uncased-itr23-seed0-finetuned-effectiveness-redditCMV
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ColD-Fusion-bert-base-uncased-itr23-seed0-finetuned-effectiveness-redditCMV
This model is a fine-tuned version of [ibm/ColD-Fusion-bert-base-uncased-itr23-seed0](https://huggingface.co/ibm/ColD-Fusion-bert-base-uncased-itr23-seed0) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6348
- Accuracy: 0.6482
## Model description
More information needed
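A minimal inference sketch (the label names depend on the unknown fine-tuning dataset):

```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="jakub014/ColD-Fusion-bert-base-uncased-itr23-seed0-finetuned-effectiveness-redditCMV",
)
print(clf("You should change your view, because the evidence points the other way."))
```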
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6854 | 1.0 | 516 | 0.6348 | 0.6482 |
| 0.5947 | 2.0 | 1032 | 0.6350 | 0.6429 |
| 0.4084 | 3.0 | 1548 | 0.8376 | 0.6133 |
| 0.2321 | 4.0 | 2064 | 1.2253 | 0.6337 |
| 0.1256 | 5.0 | 2580 | 1.6045 | 0.6332 |
### Framework versions
- Transformers 4.27.3
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
|
CurHarsh/qwen2-7b-instruct-amazon-description | CurHarsh | "2025-02-23T21:59:34Z" | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:Qwen/Qwen2-VL-7B-Instruct",
"base_model:adapter:Qwen/Qwen2-VL-7B-Instruct",
"license:apache-2.0",
"region:us"
] | null | "2025-02-23T21:34:29Z" | ---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2-VL-7B-Instruct
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: qwen2-7b-instruct-amazon-description
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# qwen2-7b-instruct-amazon-description
This model is a fine-tuned version of [Qwen/Qwen2-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2-VL-7B-Instruct) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3
### Training results
### Framework versions
- PEFT 0.13.0
- Transformers 4.45.1
- Pytorch 2.6.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.3 |
vorstcavry/vaecollection1 | vorstcavry | "2023-07-29T12:38:10Z" | 0 | 6 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | "2023-04-05T09:46:49Z" | ---
license: creativeml-openrail-m
---
|
ymoslem/whisper-medium-ga2en-v6.3.1-r | ymoslem | "2024-06-20T14:18:47Z" | 22 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"ga",
"en",
"dataset:ymoslem/IWSLT2023-GA-EN",
"dataset:ymoslem/FLEURS-GA-EN",
"dataset:ymoslem/BitesizeIrish-GA-EN",
"dataset:ymoslem/SpokenWords-GA-EN-MTed",
"dataset:ymoslem/Tatoeba-Speech-Irish",
"dataset:ymoslem/Wikimedia-Speech-Irish",
"dataset:ymoslem/EUbookshop-Speech-Irish",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-06-20T01:48:00Z" | ---
language:
- ga
- en
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- ymoslem/IWSLT2023-GA-EN
- ymoslem/FLEURS-GA-EN
- ymoslem/BitesizeIrish-GA-EN
- ymoslem/SpokenWords-GA-EN-MTed
- ymoslem/Tatoeba-Speech-Irish
- ymoslem/Wikimedia-Speech-Irish
- ymoslem/EUbookshop-Speech-Irish
metrics:
- bleu
- wer
model-index:
- name: Whisper Medium GA-EN Speech Translation
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: IWSLT-2023, FLEURS, BiteSize, SpokenWords, Tatoeba, Wikimedia, and EUbookshop
type: ymoslem/IWSLT2023-GA-EN
metrics:
- name: Bleu
type: bleu
value: 30.86
- name: Wer
type: wer
value: 67.04187303016658
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Medium GA-EN Speech Translation
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the IWSLT-2023, FLEURS, BiteSize, SpokenWords, Tatoeba, Wikimedia, and EUbookshop dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0885
- Bleu: 30.86
- Chrf: 54.11
- Wer: 67.0419
## Model description
More information needed
## Intended uses & limitations
More information needed
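A minimal inference sketch (not part of the original card) using the `transformers` ASR pipeline; the audio file name is a placeholder:
```python
from transformers import pipeline

# repo id from this card; task "translate" maps Irish speech to English text
translator = pipeline(
    "automatic-speech-recognition",
    model="ymoslem/whisper-medium-ga2en-v6.3.1-r",
)
result = translator("sample_irish_audio.wav", generate_kwargs={"task": "translate"})
print(result["text"])
```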
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 8000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Bleu | Chrf | Validation Loss | Wer |
|:-------------:|:------:|:----:|:-----:|:-----:|:---------------:|:--------:|
| 2.5374 | 0.0138 | 100 | 2.56 | 18.92 | 2.1201 | 222.4674 |
| 2.446 | 0.0276 | 200 | 3.07 | 20.56 | 2.1960 | 170.5088 |
| 2.2819 | 0.0414 | 300 | 5.87 | 25.17 | 1.9811 | 114.5880 |
| 2.1904 | 0.0552 | 400 | 8.41 | 25.65 | 1.9974 | 99.1896 |
| 2.026 | 0.0690 | 500 | 7.99 | 27.64 | 1.8961 | 130.7069 |
| 2.0448 | 0.0828 | 600 | 9.15 | 27.78 | 1.9410 | 104.9077 |
| 1.8606 | 0.0966 | 700 | 9.57 | 29.34 | 1.8451 | 110.4908 |
| 1.9887 | 0.1103 | 800 | 13.44 | 32.32 | 1.7419 | 84.3314 |
| 1.8633 | 0.1241 | 900 | 13.43 | 31.58 | 1.7376 | 102.1162 |
| 1.7576 | 0.1379 | 1000 | 11.9 | 32.68 | 1.6879 | 106.6186 |
| 1.7142 | 0.1517 | 1100 | 12.4 | 33.66 | 1.7571 | 102.6114 |
| 1.7168 | 0.1655 | 1200 | 17.35 | 36.55 | 1.6003 | 87.9784 |
| 1.6741 | 0.1793 | 1300 | 15.41 | 35.46 | 1.5883 | 92.8411 |
| 1.6534 | 0.1931 | 1400 | 17.12 | 37.24 | 1.5366 | 90.2296 |
| 1.58 | 0.2069 | 1500 | 17.49 | 38.5 | 1.5141 | 92.1207 |
| 1.403 | 0.2207 | 1600 | 16.78 | 39.13 | 1.4606 | 88.9689 |
| 1.3806 | 0.2345 | 1700 | 19.26 | 40.02 | 1.4263 | 86.7177 |
| 1.5111 | 0.2483 | 1800 | 18.4 | 39.47 | 1.4060 | 92.2557 |
| 1.4261 | 0.2621 | 1900 | 21.19 | 42.13 | 1.3911 | 78.7033 |
| 1.2974 | 0.2759 | 2000 | 15.6 | 38.66 | 1.3871 | 100.3152 |
| 1.2694 | 0.2897 | 2100 | 16.21 | 39.99 | 1.3527 | 91.2652 |
| 1.204 | 0.3034 | 2200 | 20.2 | 41.18 | 1.3232 | 86.8978 |
| 1.1922 | 0.3172 | 2300 | 16.44 | 40.85 | 1.3338 | 103.1968 |
| 1.1237 | 0.3310 | 2400 | 19.29 | 43.73 | 1.2830 | 94.4620 |
| 1.0989 | 0.3448 | 2500 | 25.11 | 46.84 | 1.2844 | 75.0563 |
| 1.0766 | 0.3586 | 2600 | 23.87 | 46.1 | 1.2578 | 74.5160 |
| 1.0432 | 0.3724 | 2700 | 22.31 | 44.91 | 1.2414 | 86.9878 |
| 1.1588 | 0.3862 | 2800 | 23.32 | 45.94 | 1.2051 | 77.1724 |
| 1.0062 | 0.4 | 2900 | 26.15 | 48.27 | 1.2059 | 69.4282 |
| 0.9178 | 0.4138 | 3000 | 29.13 | 48.92 | 1.1756 | 64.7456 |
| 0.9108 | 0.4276 | 3100 | 28.34 | 48.9 | 1.1665 | 67.2220 |
| 0.9868 | 0.4414 | 3200 | 25.64 | 48.93 | 1.1489 | 75.3264 |
| 0.9563 | 0.4552 | 3300 | 27.58 | 49.67 | 1.1181 | 71.8145 |
| 0.9138 | 0.4690 | 3400 | 28.37 | 50.96 | 1.1247 | 71.4543 |
| 0.8508 | 0.4828 | 3500 | 29.75 | 51.41 | 1.1007 | 68.3476 |
| 0.836 | 0.4966 | 3600 | 30.99 | 52.2 | 1.1114 | 66.5916 |
| 0.8435 | 0.5103 | 3700 | 30.64 | 52.77 | 1.0782 | 68.2125 |
| 0.8323 | 0.5241 | 3800 | 29.78 | 52.94 | 1.0744 | 68.9779 |
| 0.818 | 0.5379 | 3900 | 31.23 | 53.21 | 1.0639 | 67.7623 |
| 0.8095 | 0.5517 | 4000 | 31.02 | 53.51 | 1.0576 | 68.5277 |
| 0.922         | 0.5655 | 4100 | 25.47 | 46.16 | 1.2445          | 74.2909  |
| 1.0387        | 0.5793 | 4200 | 25.44 | 46.19 | 1.2634          | 71.0491  |
| 0.9386        | 0.5931 | 4300 | 22.36 | 45.4  | 1.2457          | 76.8122  |
| 0.9297        | 0.6069 | 4400 | 28.65 | 46.48 | 1.2502          | 65.7362  |
| 0.9837        | 0.6207 | 4500 | 26.81 | 46.53 | 1.2503          | 68.9779  |
| 1.0226        | 0.6345 | 4600 | 19.37 | 44.1  | 1.2282          | 86.4926  |
| 0.9896        | 0.6483 | 4700 | 26.06 | 46.46 | 1.2568          | 70.8240  |
| 0.9805        | 0.6621 | 4800 | 19.29 | 42.56 | 1.2364          | 82.0351  |
| 0.8982        | 0.6759 | 4900 | 28.58 | 47.84 | 1.2346          | 64.6556  |
| 0.8303        | 0.6897 | 5000 | 27.25 | 48.15 | 1.2136          | 68.3476  |
| 0.905         | 0.7034 | 5100 | 27.99 | 50.31 | 1.1808          | 67.2220  |
| 0.8125        | 0.7172 | 5200 | 28.91 | 47.63 | 1.1971          | 65.4660  |
| 0.7965        | 0.7310 | 5300 | 25.96 | 47.21 | 1.1789          | 69.5633  |
| 0.8244        | 0.7448 | 5400 | 28.65 | 48.63 | 1.2237          | 66.6367  |
| 0.7637        | 0.7586 | 5500 | 30.4  | 50.24 | 1.1765          | 66.6817  |
| 0.7333        | 0.7724 | 5600 | 29.94 | 51.34 | 1.1295          | 68.8879  |
| 0.8141        | 0.7862 | 5700 | 27.51 | 50.61 | 1.1238          | 74.7861  |
| 0.6969        | 0.8    | 5800 | 23.95 | 48.76 | 1.1350          | 87.6632  |
| 0.7162        | 0.8138 | 5900 | 26.34 | 48.65 | 1.1493          | 74.0207  |
| 0.7421        | 0.8276 | 6000 | 28.69 | 52.23 | 1.0976          | 68.5727  |
| 0.593         | 0.8414 | 6100 | 34.96 | 53.13 | 1.1163          | 59.3426  |
| 0.678         | 0.8552 | 6200 | 34.14 | 53.2  | 1.1072          | 61.6839  |
| 0.6018        | 0.8690 | 6300 | 31.8  | 53.33 | 1.0959          | 64.1153  |
| 0.6038        | 0.8828 | 6400 | 24.77 | 50.61 | 1.0959          | 84.2413  |
| 0.6174        | 0.8966 | 6500 | 25.48 | 50.6  | 1.0891          | 81.6749  |
| 0.595         | 0.9103 | 6600 | 23.83 | 48.07 | 1.1037          | 83.3859  |
| 0.6114        | 0.9241 | 6700 | 28.03 | 52.18 | 1.0723          | 70.7789  |
| 0.6257        | 0.9379 | 6800 | 33.13 | 52.95 | 1.0797          | 61.5038  |
| 0.6689        | 0.9517 | 6900 | 30.53 | 52.41 | 1.0803          | 68.4376  |
| 0.4908        | 0.9655 | 7000 | 30.1  | 51.71 | 1.0901          | 69.1130  |
| 0.5439        | 0.9793 | 7100 | 25.81 | 49.36 | 1.0672          | 76.5871  |
| 0.5994        | 0.9931 | 7200 | 31.56 | 52.51 | 1.0705          | 66.1414  |
| 0.2451        | 1.0069 | 7300 | 33.0  | 53.29 | 1.1069          | 64.7006  |
| 0.2609        | 1.0207 | 7400 | 31.68 | 54.3  | 1.0877          | 64.9257  |
| 0.2813        | 1.0345 | 7500 | 34.93 | 54.74 | 1.0910          | 60.1531  |
| 0.2367        | 1.0483 | 7600 | 30.87 | 53.09 | 1.0999          | 65.9163  |
| 0.2018        | 1.0621 | 7700 | 35.53 | 54.42 | 1.0917          | 58.7573  |
| 0.2407        | 1.0759 | 7800 | 34.38 | 54.5  | 1.0859          | 60.9185  |
| 0.2385        | 1.0897 | 7900 | 31.27 | 54.12 | 1.0866          | 65.3309  |
| 0.2074        | 1.1034 | 8000 | 30.86 | 54.11 | 1.0885          | 67.0419  |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.2.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
InduwaraR/qwen-ai-research-qa-q4_k_m.gguf | InduwaraR | "2025-03-13T06:05:21Z" | 25 | 2 | null | [
"gguf",
"question-answering",
"en",
"base_model:Qwen/Qwen2.5-3B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-3B-Instruct",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | question-answering | "2025-03-10T03:20:16Z" | ---
license: mit
language:
- en
base_model:
- Qwen/Qwen2.5-3B-Instruct
pipeline_tag: question-answering
---
# Qwen AI Research QA Model (Q4_K_M GGUF)
## Model Overview
The **Qwen AI Research QA Model** is designed for answering research-oriented AI questions with a focus on precision and depth. This model is optimized in the `Q4_K_M` format for efficient inference while maintaining high-quality responses.
## How to Use
To use this model with `llama-cpp-python`, follow these steps:
### Installation
Make sure you have `llama-cpp-python` installed:
```bash
pip install llama-cpp-python
```
### Loading the Model
```python
from llama_cpp import Llama
llm = Llama.from_pretrained(
repo_id="InduwaraR/qwen-ai-research-qa-q4_k_m.gguf",
filename="qwen-ai-research-qa-q4_k_m.gguf",
)
```
### Generating a Response
```python
response = llm.create_chat_completion(
messages=[
{"role": "user", "content": "What are the latest advancements in AI research?"}
]
)
print(response)
```
## Model Details
- **Model Name**: Qwen AI Research QA
- **Format**: GGUF (Q4_K_M Quantization)
- **Primary Use Case**: AI research question answering
- **Inference Framework**: `llama-cpp-python`
- **Optimized for**: Running on local hardware with reduced memory usage
## License
This model is open-source and available under the **MIT License**.
## Acknowledgments
This model is hosted by **InduwaraR** on Hugging Face. Special thanks to the **Qwen AI team** for their contributions to AI research and development.
|
GeneroGral/llama_3.1-bbq-lora_model | GeneroGral | "2025-02-23T17:46:38Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/Llama-3.1-8B-Instruct",
"base_model:finetune:unsloth/Llama-3.1-8B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2025-02-23T05:19:42Z" | ---
base_model: unsloth/Llama-3.1-8B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** GeneroGral
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Llama-3.1-8B-Instruct
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
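A minimal loading sketch with Unsloth (an assumption based on the training setup described above; the repo can also be loaded with plain `transformers`):
```python
from unsloth import FastLanguageModel

# repo id from this card; sequence length and 4-bit loading are assumptions
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="GeneroGral/llama_3.1-bbq-lora_model",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch to inference mode
```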
|
tttx/model_step2_3augs_buffer_short | tttx | "2025-02-22T01:37:57Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"alignment-handbook",
"trl",
"sft",
"generated_from_trainer",
"dataset:tttx/step2_3augs_buffer_short",
"base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-32B",
"base_model:adapter:deepseek-ai/DeepSeek-R1-Distill-Qwen-32B",
"license:mit",
"region:us"
] | null | "2025-02-22T00:46:07Z" | ---
library_name: peft
license: mit
base_model: deepseek-ai/DeepSeek-R1-Distill-Qwen-32B
tags:
- alignment-handbook
- trl
- sft
- generated_from_trainer
datasets:
- tttx/step2_3augs_buffer_short
model-index:
- name: model_step2_3augs_buffer_short
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# model_step2_3augs_buffer_short
This model is a fine-tuned version of [tttx/sft-32b-020925-19k-5ep](https://huggingface.co/tttx/sft-32b-020925-19k-5ep) on the tttx/step2_3augs_buffer_short dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
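As a minimal loading sketch (assuming the adapter applies on top of the base model named in the card metadata; note the prose above names a different intermediate checkpoint):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# base model from the card metadata, adapter repo id from this card
base = AutoModelForCausalLM.from_pretrained(
    "deepseek-ai/DeepSeek-R1-Distill-Qwen-32B", device_map="auto"
)
model = PeftModel.from_pretrained(base, "tttx/model_step2_3augs_buffer_short")
tokenizer = AutoTokenizer.from_pretrained("deepseek-ai/DeepSeek-R1-Distill-Qwen-32B")
```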
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8e-05
- train_batch_size: 2
- eval_batch_size: 1
- seed: 100
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 16
- total_eval_batch_size: 8
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.47.0.dev0
- Pytorch 2.4.0+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3 |
asenella/ms_MoPoE_beta_10_scale_True_seed_1 | asenella | "2023-07-26T17:55:29Z" | 0 | 0 | null | [
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | "2023-07-14T06:46:53Z" | ---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`:
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="asenella/ms_MoPoE_beta_10_scale_True_seed_1")
```
|
havinash-ai/dec0cfda-4d13-45a1-83ca-e5a54920b47a | havinash-ai | "2025-01-08T21:12:42Z" | 16 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:fxmarty/tiny-dummy-qwen2",
"base_model:adapter:fxmarty/tiny-dummy-qwen2",
"license:mit",
"region:us"
] | null | "2025-01-08T21:12:18Z" | ---
library_name: peft
license: mit
base_model: fxmarty/tiny-dummy-qwen2
tags:
- axolotl
- generated_from_trainer
model-index:
- name: dec0cfda-4d13-45a1-83ca-e5a54920b47a
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: fxmarty/tiny-dummy-qwen2
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 64687ae6e66a75d2_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/64687ae6e66a75d2_train_data.json
type:
field_instruction: question
field_output: best_answer
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: havinash-ai/dec0cfda-4d13-45a1-83ca-e5a54920b47a
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/64687ae6e66a75d2_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: dec0cfda-4d13-45a1-83ca-e5a54920b47a
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: dec0cfda-4d13-45a1-83ca-e5a54920b47a
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# dec0cfda-4d13-45a1-83ca-e5a54920b47a
This model is a fine-tuned version of [fxmarty/tiny-dummy-qwen2](https://huggingface.co/fxmarty/tiny-dummy-qwen2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 11.9350
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 11.9304 | 0.0132 | 1 | 11.9358 |
| 11.931 | 0.0397 | 3 | 11.9358 |
| 11.9304 | 0.0795 | 6 | 11.9355 |
| 11.9311 | 0.1192 | 9 | 11.9350 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mradermacher/ZEUS-8B-V13-GGUF | mradermacher | "2024-12-31T07:07:46Z" | 20 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:T145/ZEUS-8B-V13",
"base_model:quantized:T145/ZEUS-8B-V13",
"license:llama3.1",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-12-30T00:04:36Z" | ---
base_model: T145/ZEUS-8B-V13
language:
- en
library_name: transformers
license: llama3.1
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/T145/ZEUS-8B-V13
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/ZEUS-8B-V13-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
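For example, a minimal `llama-cpp-python` sketch (the filename matches the Q4_K_M entry in the table below):
```python
from llama_cpp import Llama

# download the Q4_K_M file from this repo and load it
llm = Llama.from_pretrained(
    repo_id="mradermacher/ZEUS-8B-V13-GGUF",
    filename="ZEUS-8B-V13.Q4_K_M.gguf",
)
print(llm("Q: Name the planets in the solar system. A:", max_tokens=64)["choices"][0]["text"])
```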
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/ZEUS-8B-V13-GGUF/resolve/main/ZEUS-8B-V13.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/ZEUS-8B-V13-GGUF/resolve/main/ZEUS-8B-V13.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/ZEUS-8B-V13-GGUF/resolve/main/ZEUS-8B-V13.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/ZEUS-8B-V13-GGUF/resolve/main/ZEUS-8B-V13.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/ZEUS-8B-V13-GGUF/resolve/main/ZEUS-8B-V13.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/ZEUS-8B-V13-GGUF/resolve/main/ZEUS-8B-V13.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ZEUS-8B-V13-GGUF/resolve/main/ZEUS-8B-V13.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ZEUS-8B-V13-GGUF/resolve/main/ZEUS-8B-V13.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/ZEUS-8B-V13-GGUF/resolve/main/ZEUS-8B-V13.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/ZEUS-8B-V13-GGUF/resolve/main/ZEUS-8B-V13.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/ZEUS-8B-V13-GGUF/resolve/main/ZEUS-8B-V13.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/ZEUS-8B-V13-GGUF/resolve/main/ZEUS-8B-V13.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mia-llm/pythia-160m-wikitext2raw-roya | mia-llm | "2025-01-17T15:40:34Z" | 17 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gpt_neox",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"base_model:EleutherAI/pythia-160m",
"base_model:finetune:EleutherAI/pythia-160m",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-01-17T15:40:20Z" | ---
base_model: EleutherAI/pythia-160m
library_name: transformers
model_name: pythia-160m-None.9.1
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for pythia-160m-None.9.1
This model is a fine-tuned version of [EleutherAI/pythia-160m](https://huggingface.co/EleutherAI/pythia-160m).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="RoyArkh/pythia-160m-None.9.1", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.13.0
- Transformers: 4.48.0
- Pytorch: 2.2.1+cu121
- Datasets: 3.2.0
- Tokenizers: 0.21.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
DhruvAwasthi/ppo-SnowballTarget | DhruvAwasthi | "2023-06-09T22:15:39Z" | 2 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] | reinforcement-learning | "2023-06-09T22:15:36Z" | ---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: DhruvAwasthi/ppo-SnowballTarget
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
TheBloke/llama-2-13B-German-Assistant-v2-AWQ | TheBloke | "2023-11-09T18:20:22Z" | 12 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"de",
"dataset:flozi00/conversations",
"base_model:flozi00/Llama-2-13B-german-assistant-v2",
"base_model:quantized:flozi00/Llama-2-13B-german-assistant-v2",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"4-bit",
"awq",
"region:us"
] | text-generation | "2023-09-19T05:55:06Z" | ---
language:
- en
- de
license: llama2
datasets:
- flozi00/conversations
model_name: Llama 2 13B German Assistant v2
base_model: flozi00/Llama-2-13B-german-assistant-v2
inference: false
model_creator: Florian Zimmermeister
model_type: llama
prompt_template: '<|prompter|>{prompt}<|endoftext|><|assistant|>
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Llama 2 13B German Assistant v2 - AWQ
- Model creator: [Florian Zimmermeister](https://huggingface.co/flozi00)
- Original model: [Llama 2 13B German Assistant v2](https://huggingface.co/flozi00/Llama-2-13B-german-assistant-v2)
<!-- description start -->
## Description
This repo contains AWQ model files for [flozi00's Llama 2 13B German Assistant v2](https://huggingface.co/flozi00/Llama-2-13B-german-assistant-v2).
Many thanks to William Beauchamp from [Chai](https://chai-research.com/) for providing the hardware used to make and upload these files!
### About AWQ
AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference.
It is also now supported by continuous batching server [vLLM](https://github.com/vllm-project/vllm), allowing use of AWQ models for high-throughput concurrent inference in multi-user server scenarios. Note that, at the time of writing, overall throughput is still lower than running vLLM with unquantised models, however using AWQ enables using much smaller GPUs which can lead to easier deployment and overall cost savings. For example, a 70B model can be run on 1 x 48GB GPU instead of 2 x 80GB.
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/llama-2-13B-German-Assistant-v2-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/llama-2-13B-German-Assistant-v2-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/llama-2-13B-German-Assistant-v2-GGUF)
* [Florian Zimmermeister's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/flozi00/Llama-2-13B-german-assistant-v2)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: OpenAssistant
```
<|prompter|>{prompt}<|endoftext|><|assistant|>
```
<!-- prompt-template end -->
<!-- README_AWQ.md-provided-files start -->
## Provided files and AWQ parameters
For my first release of AWQ models, I am releasing 128g models only. I will consider adding 32g as well if there is interest, and once I have done perplexity and evaluation comparisons, but at this time 32g models are still not fully tested with AutoAWQ and vLLM.
Models are released as sharded safetensors files.
| Branch | Bits | GS | AWQ Dataset | Seq Len | Size |
| ------ | ---- | -- | ----------- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/llama-2-13B-German-Assistant-v2-AWQ/tree/main) | 4 | 128 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 7.25 GB
<!-- README_AWQ.md-provided-files end -->
<!-- README_AWQ.md-use-from-vllm start -->
## Serving this model from vLLM
Documentation on installing and using vLLM [can be found here](https://vllm.readthedocs.io/en/latest/).
- When using vLLM as a server, pass the `--quantization awq` parameter, for example:
```shell
python3 -m vllm.entrypoints.api_server --model TheBloke/llama-2-13B-German-Assistant-v2-AWQ --quantization awq
```
When using vLLM from Python code, pass the `quantization=awq` parameter, for example:
```python
from vllm import LLM, SamplingParams
prompts = [
"Hello, my name is",
"The president of the United States is",
"The capital of France is",
"The future of AI is",
]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)
llm = LLM(model="TheBloke/llama-2-13B-German-Assistant-v2-AWQ", quantization="awq")
outputs = llm.generate(prompts, sampling_params)
# Print the outputs.
for output in outputs:
prompt = output.prompt
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```
<!-- README_AWQ.md-use-from-vllm start -->
<!-- README_AWQ.md-use-from-python start -->
## How to use this AWQ model from Python code
### Install the necessary packages
Requires: [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) 0.0.2 or later
```shell
pip3 install autoawq
```
If you have problems installing [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) using the pre-built wheels, install it from source instead:
```shell
pip3 uninstall -y autoawq
git clone https://github.com/casper-hansen/AutoAWQ
cd AutoAWQ
pip3 install .
```
### You can then try the following example code
```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer
model_name_or_path = "TheBloke/llama-2-13B-German-Assistant-v2-AWQ"
# Load model
model = AutoAWQForCausalLM.from_quantized(model_name_or_path, fuse_layers=True,
trust_remote_code=False, safetensors=True)
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, trust_remote_code=False)
prompt = "Tell me about AI"
prompt_template=f'''<|prompter|>{prompt}<|endoftext|><|assistant|>
'''
print("\n\n*** Generate:")
tokens = tokenizer(
prompt_template,
return_tensors='pt'
).input_ids.cuda()
# Generate output
generation_output = model.generate(
tokens,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
max_new_tokens=512
)
print("Output: ", tokenizer.decode(generation_output[0]))
# Inference can also be done using transformers' pipeline
from transformers import pipeline
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1
)
print(pipe(prompt_template)[0]['generated_text'])
```
<!-- README_AWQ.md-use-from-python end -->
<!-- README_AWQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with [AutoAWQ](https://github.com/casper-hansen/AutoAWQ), and [vLLM](https://github.com/vllm-project/vllm).
[Huggingface Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) is not yet compatible with AWQ, but a PR is open which should bring support soon: [TGI PR #781](https://github.com/huggingface/text-generation-inference/issues/781).
<!-- README_AWQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: flozi00's Llama 2 13B German Assistant v2
## This project is sponsored by [  ](https://www.primeline-solutions.com/de/server/nach-einsatzzweck/gpu-rendering-hpc/)
Please use V3 of this model instead.
# Model Card
This model is a fine-tuned version for German instructions and conversations in the style of Open Assistant tokens: `<|prompter|>`, `<|endoftext|>`, `<|assistant|>`.
The dataset used is deduplicated and cleaned, with no code inside. The focus is on instruction following and conversational tasks.
The model architecture is based on Llama version 2 with 13B parameters, trained on 100% renewable-energy-powered hardware.
This work is contributed by private research of [flozi00](https://huggingface.co/flozi00)
|
wizardofchance/formAI-trial-2 | wizardofchance | "2024-05-30T04:53:52Z" | 121 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-05-30T04:38:44Z" | ---
license: apache-2.0
base_model: distilbert/distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: formAI-trial-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# formAI-trial-2
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2667
- Accuracy: 0.9055
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4366 | 1.0 | 500 | 0.2896 | 0.9015 |
| 0.2664 | 2.0 | 1000 | 0.2667 | 0.9055 |
### Framework versions
- Transformers 4.41.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
rycecorn/distil-bert-fine-tuned-boolq | rycecorn | "2024-05-07T08:09:41Z" | 107 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-04-20T15:45:57Z" | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distil-bert-fine-tuned-boolq
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distil-bert-fine-tuned-boolq
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9724
- Accuracy: 0.7125
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.62 | 1.0 | 2357 | 0.6170 | 0.6865 |
| 0.5335 | 2.0 | 4714 | 0.5965 | 0.7107 |
| 0.4801 | 3.0 | 7071 | 0.9724 | 0.7125 |
### Framework versions
- Transformers 4.39.3
- Pytorch 1.13.0
- Datasets 2.18.0
- Tokenizers 0.15.2
|
SargeZT/sdxl-controlnet-seg | SargeZT | "2023-08-01T01:27:25Z" | 44 | 29 | diffusers | [
"diffusers",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"controlnet",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | "2023-07-31T23:46:46Z" |
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-xl-base-1.0
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- controlnet
inference: true
---
# controlnet-SargeZT/sdxl-controlnet-seg
These are controlnet weights trained on stabilityai/stable-diffusion-xl-base-1.0 with a new type of conditioning.
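A minimal `diffusers` sketch (not part of the original card; the segmentation map path is a placeholder):
```python
import torch
from diffusers import ControlNetModel, StableDiffusionXLControlNetPipeline
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained(
    "SargeZT/sdxl-controlnet-seg", torch_dtype=torch.float16
)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

seg_map = load_image("segmentation_map.png")  # placeholder conditioning image
image = pipe("a cat laying on top of a blanket on a bed", image=seg_map).images[0]
image.save("output.png")
```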
You can find some example images below.
prompt: a cat laying on top of a blanket on a bed

prompt: two elephants are walking in a zoo enclosure

prompt: a variety of items are laid out on a table

prompt: a sandwich and french fries on a tray

prompt: a crowd of people flying kites on a beach

prompt: a man holding a rainbow colored umbrella in front of a crowd

prompt: a man riding skis down a snow covered slope

prompt: a yellow fire hydrant sitting in the middle of a sidewalk

## License
[SDXL 1.0 License](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/blob/main/LICENSE.md)
|
Hastika06/ppo-LunarLander-v2 | Hastika06 | "2024-03-15T11:01:44Z" | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2024-03-15T10:58:21Z" | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 246.29 +/- 13.97
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is assumed to follow the standard SB3 convention):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the trained policy from the Hub and load it
checkpoint = load_from_hub(repo_id="Hastika06/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
Lakshit11/BERT-debit-mcc-asitis_distribution | Lakshit11 | "2024-05-07T03:54:52Z" | 182 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-05-07T03:54:37Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
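In the absence of author-provided code, a minimal hedged sketch with the `transformers` pipeline (the expected input format and label set are undocumented; the example transaction string is hypothetical):
```python
from transformers import pipeline

# repo id from this card
classifier = pipeline("text-classification", model="Lakshit11/BERT-debit-mcc-asitis_distribution")
print(classifier("POS DEBIT PURCHASE - GROCERY STORE #1234"))  # hypothetical input
```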
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
LHRuig/jonhammsx | LHRuig | "2025-03-25T07:29:20Z" | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] | text-to-image | "2025-03-25T07:28:48Z" | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: suit
output:
url: images/suit.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: jonhammsx
---
# jonhammsx
<Gallery />
## Model description
jonhammsx lora
## Trigger words
You should use `jonhammsx` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/LHRuig/jonhammsx/tree/main) them in the Files & versions tab.
|
semantichealth/msllama-3.2-counter-sft-codes-rewarded-2-4 | semantichealth | "2025-02-26T22:00:40Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"base_model:semantichealth/msllama-3.2-counter-sft-codes-rewarded",
"base_model:finetune:semantichealth/msllama-3.2-counter-sft-codes-rewarded",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-02-26T21:58:49Z" | ---
base_model: semantichealth/msllama-3.2-counter-sft-codes-rewarded
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** semantichealth
- **License:** apache-2.0
- **Finetuned from model :** semantichealth/msllama-3.2-counter-sft-codes-rewarded
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
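A minimal loading sketch with plain `transformers` (an assumption; the card itself provides no usage code):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "semantichealth/msllama-3.2-counter-sft-codes-rewarded-2-4"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")
```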
|
hyunjongkimmath/definition_naming_model | hyunjongkimmath | "2025-02-24T14:40:55Z" | 59 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2024-06-03T20:51:15Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
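In the absence of author-provided code, a minimal hedged sketch for this T5-style checkpoint (the expected input/output format is undocumented, and the example input is hypothetical):
```python
from transformers import pipeline

# repo id from this card
namer = pipeline("text2text-generation", model="hyunjongkimmath/definition_naming_model")
print(namer("A function f: X -> Y such that f(x) = f(y) implies x = y.")[0]["generated_text"])
```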
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
alibaba-pai/EasyAnimateV5.1-7b-zh | alibaba-pai | "2025-02-13T02:29:06Z" | 2 | 1 | diffusers | [
"diffusers",
"safetensors",
"arxiv:2405.18991",
"license:apache-2.0",
"diffusers:EasyAnimateControlPipeline",
"region:us"
] | null | "2025-02-07T02:35:34Z" | ---
license: apache-2.0
---
[](https://arxiv.org/abs/2405.18991)
[](https://easyanimate.github.io/)
[](https://modelscope.cn/studios/PAI/EasyAnimate/summary)
[](https://huggingface.co/spaces/alibaba-pai/EasyAnimate)
[](https://discord.gg/UzkpB4Bn)
# Introduction
EasyAnimate is a transformer-based pipeline that can be used to generate AI images and videos and to train baseline and LoRA models for Diffusion Transformers. We support running inference directly from pre-trained EasyAnimate models to generate videos at various resolutions, roughly 6 seconds long at 8 fps (EasyAnimateV5.1, 1 to 49 frames), and we also support users in training their own baseline and LoRA models to apply certain style transformations.
[English](./README_en.md) | [简体中文](./README.md)
# Model Links
EasyAnimateV5.1:
7B:
| Name | Type | Storage | Hugging Face | Model Scope | Description |
|--|--|--|--|--|--|
| EasyAnimateV5.1-7b-zh-InP | EasyAnimateV5.1 | 30 GB | [🤗Link](https://huggingface.co/alibaba-pai/EasyAnimateV5.1-7b-zh-InP) | [😄Link](https://modelscope.cn/models/PAI/EasyAnimateV5.1-7b-zh-InP)| Official image-to-video weights. Supports video prediction at multiple resolutions (512, 768, 1024), trained with 49 frames at 8 frames per second, with multilingual prediction support. |
| EasyAnimateV5.1-7b-zh-Control | EasyAnimateV5.1 | 30 GB | [🤗Link](https://huggingface.co/alibaba-pai/EasyAnimateV5.1-7b-zh-Control) | [😄Link](https://modelscope.cn/models/PAI/EasyAnimateV5.1-7b-zh-Control)| Official video control weights, supporting various control conditions such as Canny, Depth, Pose, and MLSD, as well as trajectory control. Supports video prediction at multiple resolutions (512, 768, 1024), trained with 49 frames at 8 frames per second, with multilingual prediction support. |
| EasyAnimateV5.1-7b-zh-Control-Camera | EasyAnimateV5.1 | 30 GB | [🤗Link](https://huggingface.co/alibaba-pai/EasyAnimateV5.1-7b-zh-Control-Camera) | [😄Link](https://modelscope.cn/models/PAI/EasyAnimateV5.1-7b-zh-Control-Camera)| Official video camera-control weights, supporting control of the generation direction via input camera motion trajectories. Supports video prediction at multiple resolutions (512, 768, 1024), trained with 49 frames at 8 frames per second, with multilingual prediction support. |
| EasyAnimateV5.1-7b-zh | EasyAnimateV5.1 | 30 GB | [🤗Link](https://huggingface.co/alibaba-pai/EasyAnimateV5.1-7b-zh) | [😄Link](https://modelscope.cn/models/PAI/EasyAnimateV5.1-7b-zh)| Official text-to-video weights. Supports video prediction at multiple resolutions (512, 768, 1024), trained with 49 frames at 8 frames per second, with multilingual prediction support. |
12B:
| Name | Type | Storage | Hugging Face | Model Scope | Description |
|--|--|--|--|--|--|
| EasyAnimateV5.1-12b-zh-InP | EasyAnimateV5.1 | 39 GB | [🤗Link](https://huggingface.co/alibaba-pai/EasyAnimateV5.1-12b-zh-InP) | [😄Link](https://modelscope.cn/models/PAI/EasyAnimateV5.1-12b-zh-InP)| Official image-to-video weights. Supports video prediction at multiple resolutions (512, 768, 1024), trained with 49 frames at 8 frames per second, with multilingual prediction support. |
| EasyAnimateV5.1-12b-zh-Control | EasyAnimateV5.1 | 39 GB | [🤗Link](https://huggingface.co/alibaba-pai/EasyAnimateV5.1-12b-zh-Control) | [😄Link](https://modelscope.cn/models/PAI/EasyAnimateV5.1-12b-zh-Control)| Official video control weights, supporting various control conditions such as Canny, Depth, Pose, and MLSD, as well as trajectory control. Supports video prediction at multiple resolutions (512, 768, 1024), trained with 49 frames at 8 frames per second, with multilingual prediction support. |
| EasyAnimateV5.1-12b-zh-Control-Camera | EasyAnimateV5.1 | 39 GB | [🤗Link](https://huggingface.co/alibaba-pai/EasyAnimateV5.1-12b-zh-Control-Camera) | [😄Link](https://modelscope.cn/models/PAI/EasyAnimateV5.1-12b-zh-Control-Camera)| Official video camera-control weights, supporting control of the generation direction via input camera motion trajectories. Supports video prediction at multiple resolutions (512, 768, 1024), trained with 49 frames at 8 frames per second, with multilingual prediction support. |
| EasyAnimateV5.1-12b-zh | EasyAnimateV5.1 | 39 GB | [🤗Link](https://huggingface.co/alibaba-pai/EasyAnimateV5.1-12b-zh) | [😄Link](https://modelscope.cn/models/PAI/EasyAnimateV5.1-12b-zh)| Official text-to-video weights. Supports video prediction at multiple resolutions (512, 768, 1024), trained with 49 frames at 8 frames per second, with multilingual prediction support. |
# Video Gallery
### Image-to-Video EasyAnimateV5.1-12b-zh-InP
<table border="0" style="width: 100%; text-align: left; margin-top: 20px;">
<tr>
<td>
<video src="https://github.com/user-attachments/assets/74a23109-f555-4026-a3d8-1ac27bb3884c" width="100%" controls autoplay loop></video>
</td>
<td>
<video src="https://github.com/user-attachments/assets/ab5aab27-fbd7-4f55-add9-29644125bde7" width="100%" controls autoplay loop></video>
</td>
<td>
<video src="https://github.com/user-attachments/assets/238043c2-cdbd-4288-9857-a273d96f021f" width="100%" controls autoplay loop></video>
</td>
<td>
<video src="https://github.com/user-attachments/assets/48881a0e-5513-4482-ae49-13a0ad7a2557" width="100%" controls autoplay loop></video>
</td>
</tr>
</table>
<table border="0" style="width: 100%; text-align: left; margin-top: 20px;">
<tr>
<td>
<video src="https://github.com/user-attachments/assets/3e7aba7f-6232-4f39-80a8-6cfae968f38c" width="100%" controls autoplay loop></video>
</td>
<td>
<video src="https://github.com/user-attachments/assets/986d9f77-8dc3-45fa-bc9d-8b26023fffbc" width="100%" controls autoplay loop></video>
</td>
<td>
<video src="https://github.com/user-attachments/assets/7f62795a-2b3b-4c14-aeb1-1230cb818067" width="100%" controls autoplay loop></video>
</td>
<td>
<video src="https://github.com/user-attachments/assets/b581df84-ade1-4605-a7a8-fd735ce3e222" width="100%" controls autoplay loop></video>
</td>
</tr>
</table>
<table border="0" style="width: 100%; text-align: left; margin-top: 20px;">
<tr>
<td>
<video src="https://github.com/user-attachments/assets/eab1db91-1082-4de2-bb0a-d97fd25ceea1" width="100%" controls autoplay loop></video>
</td>
<td>
<video src="https://github.com/user-attachments/assets/3fda0e96-c1a8-4186-9c4c-043e11420f05" width="100%" controls autoplay loop></video>
</td>
<td>
<video src="https://github.com/user-attachments/assets/4b53145d-7e98-493a-83c9-4ea4f5b58289" width="100%" controls autoplay loop></video>
</td>
<td>
<video src="https://github.com/user-attachments/assets/75f7935f-17a8-4e20-b24c-b61479cf07fc" width="100%" controls autoplay loop></video>
</td>
</tr>
</table>
### Text-to-Video EasyAnimateV5.1-12b-zh
<table border="0" style="width: 100%; text-align: left; margin-top: 20px;">
<tr>
<td>
<video src="https://github.com/user-attachments/assets/8818dae8-e329-4b08-94fa-00d923f38fd2" width="100%" controls autoplay loop></video>
</td>
<td>
<video src="https://github.com/user-attachments/assets/d3e483c3-c710-47d2-9fac-89f732f2260a" width="100%" controls autoplay loop></video>
</td>
<td>
<video src="https://github.com/user-attachments/assets/4dfa2067-d5d4-4741-a52c-97483de1050d" width="100%" controls autoplay loop></video>
</td>
<td>
<video src="https://github.com/user-attachments/assets/fb44c2db-82c6-427e-9297-97dcce9a4948" width="100%" controls autoplay loop></video>
</td>
</tr>
</table>
<table border="0" style="width: 100%; text-align: left; margin-top: 20px;">
<tr>
<td>
<video src="https://github.com/user-attachments/assets/dc6b8eaf-f21b-4576-a139-0e10438f20e4" width="100%" controls autoplay loop></video>
</td>
<td>
<video src="https://github.com/user-attachments/assets/b3f8fd5b-c5c8-44ee-9b27-49105a08fbff" width="100%" controls autoplay loop></video>
</td>
<td>
<video src="https://github.com/user-attachments/assets/a68ed61b-eed3-41d2-b208-5f039bf2788e" width="100%" controls autoplay loop></video>
</td>
<td>
<video src="https://github.com/user-attachments/assets/4e33f512-0126-4412-9ae8-236ff08bcd21" width="100%" controls autoplay loop></video>
</td>
</tr>
</table>
### Controlled Generation EasyAnimateV5.1-12b-zh-Control
Trajectory control
<table border="0" style="width: 100%; text-align: left; margin-top: 20px;">
<tr>
<td>
<video src="https://github.com/user-attachments/assets/bf3b8970-ca7b-447f-8301-72dfe028055b" width="100%" controls autoplay loop></video>
</td>
<td>
<video src="https://github.com/user-attachments/assets/63a7057b-573e-4f73-9d7b-8f8001245af4" width="100%" controls autoplay loop></video>
</td>
<td>
<video src="https://github.com/user-attachments/assets/090ac2f3-1a76-45cf-abe5-4e326113389b" width="100%" controls autoplay loop></video>
</td>
  </tr>
</table>
Generic control-to-video (Canny, Pose, Depth, etc.)
<table border="0" style="width: 100%; text-align: left; margin-top: 20px;">
<tr>
<td>
<video src="https://github.com/user-attachments/assets/53002ce2-dd18-4d4f-8135-b6f68364cabd" width="100%" controls autoplay loop></video>
</td>
<td>
<video src="https://github.com/user-attachments/assets/fce43c0b-81fa-4ab2-9ca7-78d786f520e6" width="100%" controls autoplay loop></video>
</td>
<td>
<video src="https://github.com/user-attachments/assets/b208b92c-5add-4ece-a200-3dbbe47b93c3" width="100%" controls autoplay loop></video>
</td>
  </tr>
  <tr>
<td>
<video src="https://github.com/user-attachments/assets/3aec95d5-d240-49fb-a9e9-914446c7a4cf" width="100%" controls autoplay loop></video>
</td>
<td>
<video src="https://github.com/user-attachments/assets/60fa063b-5c1f-485f-b663-09bd6669de3f" width="100%" controls autoplay loop></video>
</td>
<td>
<video src="https://github.com/user-attachments/assets/4adde728-8397-42f3-8a2a-23f7b39e9a1e" width="100%" controls autoplay loop></video>
</td>
</tr>
</table>
### Camera Control EasyAnimateV5.1-12b-zh-Control-Camera
<table border="0" style="width: 100%; text-align: left; margin-top: 20px;">
<tr>
<td>
Pan Up
</td>
<td>
Pan Left
</td>
<td>
Pan Right
</td>
  </tr>
  <tr>
<td>
<video src="https://github.com/user-attachments/assets/a88f81da-e263-4038-a5b3-77b26f79719e" width="100%" controls autoplay loop></video>
</td>
<td>
<video src="https://github.com/user-attachments/assets/e346c59d-7bca-4253-97fb-8cbabc484afb" width="100%" controls autoplay loop></video>
</td>
<td>
<video src="https://github.com/user-attachments/assets/4de470d4-47b7-46e3-82d3-b714a2f6aef6" width="100%" controls autoplay loop></video>
</td>
  </tr>
  <tr>
<td>
Pan Down
</td>
<td>
Pan Up + Pan Left
</td>
<td>
Pan Up + Pan Right
</td>
  </tr>
  <tr>
<td>
<video src="https://github.com/user-attachments/assets/7a3fecc2-d41a-4de3-86cd-5e19aea34a0d" width="100%" controls autoplay loop></video>
</td>
<td>
<video src="https://github.com/user-attachments/assets/cb281259-28b6-448e-a76f-643c3465672e" width="100%" controls autoplay loop></video>
</td>
<td>
<video src="https://github.com/user-attachments/assets/44faf5b6-d83c-4646-9436-971b2b9c7216" width="100%" controls autoplay loop></video>
</td>
</tr>
</table>
# How to Use
#### a. VRAM-Saving Options
Because EasyAnimateV5 and V5.1 have very large parameter counts, VRAM-saving options are needed so the models fit on consumer GPUs. Every predict script provides a GPU_memory_mode, which can be set to model_cpu_offload, model_cpu_offload_and_qfloat8, or sequential_cpu_offload.
- model_cpu_offload: the whole model is moved to the CPU after use, saving some VRAM.
- model_cpu_offload_and_qfloat8: the whole model is moved to the CPU after use and the transformer is quantized to float8, saving more VRAM.
- sequential_cpu_offload: each layer of the model is moved to the CPU after use; this is slower but saves a large amount of VRAM.
qfloat8 reduces model quality but saves more VRAM. If VRAM is sufficient, model_cpu_offload is recommended.
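For illustration, a minimal sketch of how this setting appears near the top of a predict script (the mode strings mirror the options above; the surrounding code varies by EasyAnimate version):
```python
# Pick a VRAM-saving mode near the top of a predict script (e.g. predict_t2v.py).
# From fastest / most VRAM to slowest / least VRAM:
#   "model_cpu_offload"             - offload the whole model to CPU after use
#   "model_cpu_offload_and_qfloat8" - offload plus a float8-quantized transformer
#   "sequential_cpu_offload"        - per-layer offload, slowest, lowest VRAM
GPU_memory_mode = "model_cpu_offload_and_qfloat8"  # e.g. for a 24 GB card
```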
#### b. Via ComfyUI
See the [ComfyUI README](https://github.com/aigc-apps/EasyAnimate/blob/main/comfyui/README.md) for details.
#### c. Running the Python Files
- Step 1: Download the corresponding [weights](#model-zoo) and place them in the models folder.
- Step 2: Use the prediction script that matches your weights and target.
  - Text-to-video:
    - Modify prompt, neg_prompt, guidance_scale, and seed in predict_t2v.py (see the sketch after this list).
    - Then run predict_t2v.py and wait for the result, which is saved in the samples/easyanimate-videos folder.
  - Image-to-video:
    - Modify validation_image_start, validation_image_end, prompt, neg_prompt, guidance_scale, and seed in predict_i2v.py.
    - validation_image_start is the start image of the video; validation_image_end is the end image.
    - Then run predict_i2v.py and wait for the result, which is saved in the samples/easyanimate-videos_i2v folder.
  - Video-to-video:
    - Modify validation_video, validation_image_end, prompt, neg_prompt, guidance_scale, and seed in predict_v2v.py.
    - validation_video is the reference video for video-to-video. You can run the demo with the following video: [demo video](https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/cogvideox_fun/asset/v1/play_guitar.mp4)
    - Then run predict_v2v.py and wait for the result, which is saved in the samples/easyanimate-videos_v2v folder.
  - Generic control-to-video (Canny, Pose, Depth, etc.):
    - Modify control_video, validation_image_end, prompt, neg_prompt, guidance_scale, and seed in predict_v2v_control.py.
    - control_video is the control video, extracted with operators such as Canny, Pose, or Depth. You can run the demo with the following video: [demo video](https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/cogvideox_fun/asset/v1.1/pose.mp4)
    - Then run predict_v2v_control.py and wait for the result, which is saved in the samples/easyanimate-videos_v2v_control folder.
  - Trajectory-controlled video:
    - Modify control_video, ref_image, validation_image_end, prompt, neg_prompt, guidance_scale, and seed in predict_v2v_control.py.
    - control_video is the trajectory control video; ref_image is the reference first-frame image. You can run the demo with the following image and control video: [demo image](https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/easyanimate/asset/v5.1/dog.png), [demo video](https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/easyanimate/asset/v5.1/trajectory_demo.mp4)
    - Then run predict_v2v_control.py and wait for the result, which is saved in the samples/easyanimate-videos_v2v_control folder.
    - Interacting through ComfyUI is recommended.
  - Camera-controlled video:
    - Modify control_camera_txt, ref_image, validation_image_end, prompt, neg_prompt, guidance_scale, and seed in predict_v2v_control.py.
    - control_camera_txt is the camera-control file; ref_image is the reference first-frame image. You can run the demo with the following image and control file: [demo image](https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/cogvideox_fun/asset/v1/firework.png), [demo file (from CameraCtrl)](https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/easyanimate/asset/v5.1/0a3b5fb184936a83.txt)
    - Then run predict_v2v_control.py and wait for the result, which is saved in the samples/easyanimate-videos_v2v_control folder.
    - Interacting through ComfyUI is recommended.
- Step 3: To use your own trained backbone or LoRA, modify the model path and lora_path in predict_t2v.py as needed.
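A hypothetical illustration of the variables edited at the top of predict_t2v.py, as referenced in the text-to-video step above (the names follow the parameters listed; the actual file layout may differ between versions):
```python
# Hypothetical values only; adjust to your own use case.
prompt = "A panda eating bamboo in a sunny forest."
neg_prompt = "low quality, blurry, watermark"
guidance_scale = 6.0  # classifier-free guidance strength
seed = 43  # fix the seed for reproducible results
```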
#### d. Via the Web UI
The web UI supports text-to-video, image-to-video, video-to-video, and generic control-to-video (Canny, Pose, Depth, etc.).
- Step 1: Download the corresponding [weights](#model-zoo) and place them in the models folder.
- Step 2: Run app.py to open the Gradio page.
- Step 3: Choose the generation model on the page, fill in prompt, neg_prompt, guidance_scale, seed, and so on, click Generate, and wait for the result, which is saved in the sample folder.
# Quick Start
### 1. Cloud: AliyunDSW/Docker
#### a. Via Alibaba Cloud DSW
DSW offers free GPU hours that users can apply for once; they remain valid for 3 months after approval.
Alibaba Cloud offers free GPU time through [Freetier](https://free.aliyun.com/?product=9602825&crowd=enterprise&spm=5176.28055625.J_5831864660.1.e939154aRgha4e&scm=20140722.M_9974135.P_110.MO_1806-ID_9974135-MID_9974135-CID_30683-ST_8512-V_1). Claim it and use it in Alibaba Cloud PAI-DSW to launch EasyAnimate within 5 minutes.
[](https://gallery.pai-ml.com/#/preview/deepLearning/cv/easyanimate_v5)
#### b. Via ComfyUI
Our ComfyUI interface is shown below; see the [ComfyUI README](https://github.com/aigc-apps/EasyAnimate/blob/main/comfyui/README.md) for details.

#### c. Via Docker
If you use Docker, make sure the GPU driver and CUDA environment are correctly installed on your machine, then execute the following commands in order:
```
# pull image
docker pull mybigpai-public-registry.cn-beijing.cr.aliyuncs.com/easycv/torch_cuda:easyanimate
# enter image
docker run -it -p 7860:7860 --network host --gpus all --security-opt seccomp:unconfined --shm-size 200g mybigpai-public-registry.cn-beijing.cr.aliyuncs.com/easycv/torch_cuda:easyanimate
# clone code
git clone https://github.com/aigc-apps/EasyAnimate.git
# enter EasyAnimate's dir
cd EasyAnimate
# download weights
mkdir models/Diffusion_Transformer
mkdir models/Motion_Module
mkdir models/Personalized_Model
# Please use the huggingface link or modelscope link to download the EasyAnimateV5.1 model.
# https://huggingface.co/alibaba-pai/EasyAnimateV5.1-12b-zh-InP
# https://modelscope.cn/models/PAI/EasyAnimateV5.1-12b-zh-InP
```
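For example, one possible way to fetch the InP weights into the expected folder, assuming the `huggingface_hub` CLI is installed (this command is an illustration, not part of the official instructions):
```
# pip install "huggingface_hub[cli]"
huggingface-cli download alibaba-pai/EasyAnimateV5.1-12b-zh-InP \
  --local-dir models/Diffusion_Transformer/EasyAnimateV5.1-12b-zh-InP
```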
### 2. Local Installation: Environment Check/Download/Installation
#### a. Environment Check
We have verified that EasyAnimate runs in the following environments:
Details for Windows:
- OS: Windows 10
- python: python3.10 & python3.11
- pytorch: torch2.2.0
- CUDA: 11.8 & 12.1
- CUDNN: 8+
- GPU: Nvidia-3060 12G
Details for Linux:
- OS: Ubuntu 20.04, CentOS
- python: python3.10 & python3.11
- pytorch: torch2.2.0
- CUDA: 11.8 & 12.1
- CUDNN: 8+
- GPU: Nvidia-V100 16G & Nvidia-A10 24G & Nvidia-A100 40G & Nvidia-A100 80G
We need roughly 60 GB of free disk space; please check!
The video sizes EasyAnimateV5.1-12B can generate depend on available GPU memory:
| GPU memory |384x672x25|384x672x49|576x1008x25|576x1008x49|768x1344x25|768x1344x49|
|----------|----------|----------|----------|----------|----------|----------|
| 16GB | 🧡 | ⭕️ | ⭕️ | ⭕️ | ❌ | ❌ |
| 24GB | 🧡 | 🧡 | 🧡 | 🧡 | 🧡 | ❌ |
| 40GB | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| 80GB | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
The video sizes EasyAnimateV5.1-7B can generate depend on available GPU memory:
| GPU memory |384x672x25|384x672x49|576x1008x25|576x1008x49|768x1344x25|768x1344x49|
|----------|----------|----------|----------|----------|----------|----------|
| 16GB | 🧡 | 🧡 | ⭕️ | ⭕️ | ❌ | ❌ |
| 24GB | ✅ | ✅ | ✅ | 🧡 | 🧡 | ❌ |
| 40GB | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| 80GB | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
✅ means it can run with "model_cpu_offload", 🧡 means it can run with "model_cpu_offload_and_qfloat8", ⭕️ means it can run with "sequential_cpu_offload", and ❌ means it cannot run. Note that running with sequential_cpu_offload is slower.
Some GPUs, such as the 2080 Ti and V100, do not support torch.bfloat16; on those cards, change weight_dtype in app.py and the predict files to torch.float16, as in the snippet below.
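A minimal sketch of that change (the variable name follows the note above):
```python
import torch

# For GPUs without bfloat16 support (e.g. 2080 Ti, V100):
weight_dtype = torch.float16  # instead of torch.bfloat16
```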
Generation times for EasyAnimateV5.1-12B over 25 steps on different GPUs are as follows:
| GPU |384x672x25|384x672x49|576x1008x25|576x1008x49|768x1344x25|768x1344x49|
|----------|----------|----------|----------|----------|----------|----------|
| A10 24GB |~120s (4.8s/it)|~240s (9.6s/it)|~320s (12.7s/it)| ~750s (29.8s/it)| ❌ | ❌ |
| A100 80GB |~45s (1.75s/it)|~90s (3.7s/it)|~120s (4.7s/it)|~300s (11.4s/it)|~265s (10.6s/it)| ~710s (28.3s/it)|
#### b. Weight Placement
It is best to place the [weights](#model-zoo) along the following paths:
EasyAnimateV5.1:
```
📦 models/
├── 📂 Diffusion_Transformer/
│ ├── 📂 EasyAnimateV5.1-12b-zh-InP/
│ ├── 📂 EasyAnimateV5.1-12b-zh-Control/
│ ├── 📂 EasyAnimateV5.1-12b-zh-Control-Camera/
│ └── 📂 EasyAnimateV5.1-12b-zh/
├── 📂 Personalized_Model/
│   └── your trained transformer model / your trained lora model (for UI load)
```
# Contact Us
1. Scan the QR code below or search group number 77450006752 to join our DingTalk group.
2. Scan the QR code below to join our WeChat group (if the QR code has expired, scan the WeChat QR code of the colleague on the far right to be invited into the group).
<img src="https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/easyanimate/asset/group/dd.png" alt="ding group" width="30%"/>
<img src="https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/easyanimate/asset/group/wechat.jpg" alt="Wechat group" width="30%"/>
<img src="https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/easyanimate/asset/group/person.jpg" alt="Person" width="30%"/>
# References
- CogVideo: https://github.com/THUDM/CogVideo/
- Flux: https://github.com/black-forest-labs/flux
- magvit: https://github.com/google-research/magvit
- PixArt: https://github.com/PixArt-alpha/PixArt-alpha
- Open-Sora-Plan: https://github.com/PKU-YuanGroup/Open-Sora-Plan
- Open-Sora: https://github.com/hpcaitech/Open-Sora
- Animatediff: https://github.com/guoyww/AnimateDiff
- HunYuan DiT: https://github.com/tencent/HunyuanDiT
- ComfyUI-KJNodes: https://github.com/kijai/ComfyUI-KJNodes
- ComfyUI-EasyAnimateWrapper: https://github.com/kijai/ComfyUI-EasyAnimateWrapper
- ComfyUI-CameraCtrl-Wrapper: https://github.com/chaojie/ComfyUI-CameraCtrl-Wrapper
- CameraCtrl: https://github.com/hehao13/CameraCtrl
- DragAnything: https://github.com/showlab/DragAnything
# License
This project is licensed under the [Apache License (Version 2.0)](https://github.com/modelscope/modelscope/blob/master/LICENSE).
|
lightsout19/t5-base-mrpc | lightsout19 | "2024-11-07T10:11:24Z" | 50 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text-classification",
"generated_from_trainer",
"base_model:google-t5/t5-base",
"base_model:finetune:google-t5/t5-base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-11-07T10:02:48Z" | ---
library_name: transformers
license: apache-2.0
base_model: t5-base
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: t5-base-mrpc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-mrpc
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4484
- Accuracy: 0.8064
- F1: 0.8724
## Model description
More information needed
## Intended uses & limitations
More information needed
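A minimal usage sketch, assuming the checkpoint exposes a standard sequence-classification head and, per the model name, targets the MRPC paraphrase task (the label order is an assumption):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("lightsout19/t5-base-mrpc")
model = AutoModelForSequenceClassification.from_pretrained("lightsout19/t5-base-mrpc")

# MRPC is a sentence-pair task: do the two sentences paraphrase each other?
inputs = tokenizer("The company posted record profits.",
                   "Record profits were reported by the company.",
                   return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.softmax(dim=-1))  # assumed label order: [not_paraphrase, paraphrase]
```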
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 115 | 0.5770 | 0.7083 | 0.8232 |
| No log | 2.0 | 230 | 0.5076 | 0.7279 | 0.8295 |
| No log | 3.0 | 345 | 0.4870 | 0.7672 | 0.8523 |
| No log | 4.0 | 460 | 0.4481 | 0.8039 | 0.8706 |
| 0.5245 | 5.0 | 575 | 0.4484 | 0.8064 | 0.8724 |
### Framework versions
- Transformers 4.45.1
- Pytorch 2.4.0
- Datasets 3.0.1
- Tokenizers 0.20.0
|
mingxilei/gpt2-imdb-eps-0.1 | mingxilei | "2024-12-18T11:29:13Z" | 148 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-12-18T10:55:39Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
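A minimal sketch, assuming a standard GPT-2 causal language model (the repo name suggests an IMDB-tuned variant):
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("mingxilei/gpt2-imdb-eps-0.1")
model = AutoModelForCausalLM.from_pretrained("mingxilei/gpt2-imdb-eps-0.1")

# Prompt with a review-style opening and sample a short continuation.
inputs = tokenizer("This movie was", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```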
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Kakashi/rl_training | Kakashi | "2023-11-25T20:37:34Z" | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2023-11-25T20:35:29Z" | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 263.19 +/- 21.66
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal sketch (the checkpoint filename below is an assumption; adjust it to the actual file in this repo):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub; the filename is assumed, not verified.
checkpoint = load_from_hub(repo_id="Kakashi/rl_training", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
noesis-ai/Fimbulvetr-11B-v2-GGUF | noesis-ai | "2025-02-04T18:44:54Z" | 55 | 0 | null | [
"gguf",
"en",
"base_model:Sao10K/Fimbulvetr-11B-v2",
"base_model:quantized:Sao10K/Fimbulvetr-11B-v2",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null | "2025-02-04T18:44:53Z" | ---
language:
- en
license: cc-by-nc-4.0
base_model: Sao10K/Fimbulvetr-11B-v2
model_name: Fimbulvetr-11B-v2-GGUF
quantized_by: brooketh
---
<img src="BackyardAI_Banner.png" alt="Backyard.ai" style="height: 90px; min-width: 32px; display: block; margin: auto;">
**<p style="text-align: center;">The official library of GGUF format models for use in the local AI chat app, Backyard AI.</p>**
<p style="text-align: center;"><a href="https://backyard.ai/">Download Backyard AI here to get started.</a></p>
<p style="text-align: center;"><a href="https://www.reddit.com/r/LLM_Quants/">Request Additional models at r/LLM_Quants.</a></p>
***
# Fimbulvetr 11B v2
- **Creator:** [Sao10K](https://huggingface.co/Sao10K/)
- **Original:** [Fimbulvetr 11B v2](https://huggingface.co/Sao10K/Fimbulvetr-11B-v2)
- **Date Created:** 2024-02-06
- **Trained Context:** 4096 tokens
- **Description:** Updated version of Fimbulvetr, a roleplaying model that is good at following context, realistically portraying characters, and responding creatively. Performs especially well for its size.
***
## What is a GGUF?
GGUF is a large language model (LLM) format that can be split between CPU and GPU. GGUFs are compatible with applications based on llama.cpp, such as Backyard AI. Where other model formats require higher-end GPUs with ample VRAM, GGUFs can be run efficiently on a wider variety of hardware.
GGUF models are quantized to reduce resource usage, with a tradeoff of reduced coherence at lower quantizations. Quantization reduces the precision of the model weights by changing the number of bits used for each weight.
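As a rough illustration of the size/precision trade-off (the bits-per-weight figure below is an approximation, not a spec):
```python
# Back-of-the-envelope GGUF size estimate for an ~11B-parameter model.
params = 11e9          # parameter count
bits_per_weight = 4.5  # roughly what a Q4_K_M-style scheme averages
print(f"~{params * bits_per_weight / 8 / 1e9:.1f} GB")  # ~6.2 GB
```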
***
<img src="BackyardAI_Logo.png" alt="Backyard.ai" style="height: 75px; min-width: 32px; display: block; horizontal align: left;">
## Backyard AI
- Free, local AI chat application.
- One-click installation on Mac and PC.
- Automatically use GPU for maximum speed.
- Built-in model manager.
- High-quality character hub.
- Zero-config desktop-to-mobile tethering.
Backyard AI makes it easy to start chatting with AI using your own characters or one of the many found in the built-in character hub. The model manager helps you find the latest and greatest models without worrying about whether it's the correct format. Backyard AI supports advanced features such as lorebooks, author's note, text formatting, custom context size, sampler settings, grammars, local TTS, cloud inference, and tethering, all implemented in a way that is straightforward and reliable.
**Join us on [Discord](https://discord.gg/SyNN2vC9tQ)**
*** |
Sengil/ModernBERT-NewsClassifier-EN-small | Sengil | "2025-01-19T17:18:52Z" | 22 | 1 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"modernbert",
"text-classification",
"generated_from_trainer",
"news-classification",
"english",
"base_model:answerdotai/ModernBERT-base",
"base_model:finetune:answerdotai/ModernBERT-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2025-01-19T12:07:13Z" | ---
library_name: transformers
license: apache-2.0
base_model: answerdotai/ModernBERT-base
tags:
- generated_from_trainer
- text-classification
- news-classification
- english
- modernbert
metrics:
- f1
model-index:
- name: ModernBERT-NewsClassifier-EN-small
results: []
---
# ModernBERT-NewsClassifier-EN-small
This model is a fine-tuned version of [answerdotai/ModernBERT-base](https://huggingface.co/answerdotai/ModernBERT-base) on an English **News Category** dataset covering 15 distinct topics (e.g., **Politics**, **Sports**, **Business**, etc.). It achieves the following results on the evaluation set:
- **Validation Loss**: `3.1201`
- **Weighted F1 Score**: `0.5475`
---
## Model Description
**Architecture**: This model is based on [ModernBERT-base](https://huggingface.co/answerdotai/ModernBERT-base), an advanced Transformer architecture featuring Rotary Position Embeddings (RoPE), Flash Attention, and a native long context window (up to 8,192 tokens). For the classification task, a linear classification head is added on top of the BERT encoder outputs.
**Task**: **Multi-class News Classification**
- The model classifies English news headlines or short texts into one of 15 categories.
**Use Cases**:
- Automatically tagging news headlines with appropriate categories in editorial pipelines.
- Classifying short text blurbs for social media or aggregator systems.
- Building a quick filter for content-based recommendation engines.
---
## Intended Uses & Limitations
- **Intended for**: Users who need to categorize short English news texts into broad topics.
- **Language**: Trained primarily on **English** texts. Performance on non-English text is not guaranteed.
- **Limitations**:
- Certain categories (e.g., `BLACK VOICES`, `QUEER VOICES`) may contain nuanced language that could lead to misclassification if context is limited or if the text is ambiguous.
---
## Training and Evaluation Data
- **Dataset**: Curated from an English news-category dataset with 15 labels (e.g., `POLITICS`, `ENTERTAINMENT`, `SPORTS`, `BUSINESS`, etc.).
- **Data Size**: ~30,000 samples in total, balanced at 2,000 samples per category.
- **Split**: 90% training (27,000 samples) and 10% testing (3,000 samples).
### Categories
1. POLITICS
2. WELLNESS
3. ENTERTAINMENT
4. TRAVEL
5. STYLE & BEAUTY
6. PARENTING
7. HEALTHY LIVING
8. QUEER VOICES
9. FOOD & DRINK
10. BUSINESS
11. COMEDY
12. SPORTS
13. BLACK VOICES
14. HOME & LIVING
15. PARENTS
---
## Training Procedure
### Hyperparameters
| Hyperparameter | Value |
|------------------------------:|:-----------------------|
| **learning_rate** | 5e-05 |
| **train_batch_size** | 8 |
| **eval_batch_size** | 4 |
| **seed** | 42 |
| **gradient_accumulation_steps** | 2 |
| **total_train_batch_size** | 16 (8 x 2) |
| **optimizer** | `adamw_torch_fused` (betas=(0.9,0.999), epsilon=1e-08) |
| **lr_scheduler_type** | linear |
| **lr_scheduler_warmup_steps**| 100 |
| **num_epochs** | 5 |
**Optimizer**: Used `AdamW` with fused kernels (`adamw_torch_fused`) for efficiency.
**Loss Function**: Cross-entropy (with weighted F1 as metric).
---
## Training Results
| Training Loss | Epoch | Step | Validation Loss | F1 (Weighted) |
|:-------------:|:------:|:----:|:---------------:|:-------------:|
| 2.6251 | 1.0 | 1688 | 1.3810 | 0.5543 |
| 1.9267 | 2.0 | 3376 | 1.4378 | 0.5588 |
| 0.6349 | 3.0 | 5064 | 2.1705 | 0.5415 |
| 0.1273 | 4.0 | 6752 | 2.9007 | 0.5402 |
| 0.0288 | 4.9973 | 8435 | 3.1201 | 0.5475 |
- **Best Weighted F1** observed near the final epochs is **~0.55** on the validation set.
---
## Inference Example
Below are two ways to use this model: via a **pipeline** and by using the **model & tokenizer** directly.
### 1) Quick Start with `pipeline`
```python
from transformers import pipeline
# Instantiate the pipeline
classifier = pipeline(
"text-classification",
model="Sengil/ModernBERT-NewsClassifier-EN-small"
)
# Sample text
text = "The President pledges new infrastructure initiatives amid economic concerns."
outputs = classifier(text)
# Output: [{'label': 'POLITICS', 'score': 0.95}, ...]
print(outputs)
```
### 2) Direct Model Usage
```python
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModelForSequenceClassification
model_name = "Sengil/ModernBERT-NewsClassifier-EN-small"
# Load model & tokenizer
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
sample_text = "Local authorities call for better healthcare policies."
inputs = tokenizer(sample_text, return_tensors="pt", truncation=True, max_length=512)
with torch.no_grad():
logits = model(**inputs).logits
# Convert logits to probabilities
probs = F.softmax(logits, dim=1)[0]
predicted_label_id = torch.argmax(probs).item()
# Get the label string
id2label = model.config.id2label
predicted_label = id2label[predicted_label_id]
confidence_score = probs[predicted_label_id].item()
print(f"Predicted Label: {predicted_label} | Score: {confidence_score:.4f}")
```
---
## Additional Information
- **Framework Versions**:
- **Transformers**: 4.49.0.dev0
- **PyTorch**: 2.5.1+cu121
- **Datasets**: 3.2.0
- **Tokenizers**: 0.21.0
- **License**: [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0)
- **Intellectual Property**: The original ModernBERT base model is provided by [answerdotai](https://huggingface.co/answerdotai). This fine-tuned checkpoint inherits the same license.
---
**Citation** (If you use or extend this model in your research or applications, please consider citing it):
```
@misc{ModernBERTNewsClassifierENsmall,
title={ModernBERT-NewsClassifier-EN-small},
author={Mert Sengil},
year={2025},
howpublished={\url{https://huggingface.co/Sengil/ModernBERT-NewsClassifier-EN-small}},
}
``` |
rayonlabs/Llama-3_2-3B-d1808b9d666dadb7_dataset_json_X-Amz-Algorithm_AWS4-HMAC-SHA256_X-Amz-Credential_AKI | rayonlabs | "2025-03-26T03:08:59Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Llama-3.2-3B",
"base_model:adapter:unsloth/Llama-3.2-3B",
"license:llama3.2",
"region:us"
] | null | "2025-03-26T03:08:58Z" | ---
library_name: peft
license: llama3.2
base_model: unsloth/Llama-3.2-3B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 0419afd4-d740-497b-8afb-e4c2d4a21d7c
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Llama-3.2-3B
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 9b4cab2992cdb07f_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/9b4cab2992cdb07f_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
device_map:
? ''
: 0,1,2,3,4,5,6,7
early_stopping_patience: 2
eval_max_new_tokens: 128
eval_steps: 100
eval_table_size: null
flash_attention: true
gradient_accumulation_steps: 8
gradient_checkpointing: true
group_by_length: false
hub_model_id: Alphatao/0419afd4-d740-497b-8afb-e4c2d4a21d7c
hub_repo: null
hub_strategy: null
hub_token: null
learning_rate: 0.0002
load_best_model_at_end: true
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.1
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lora_target_modules:
- q_proj
- k_proj
- v_proj
- o_proj
lr_scheduler: cosine
max_grad_norm: 1.0
max_steps: 1779
micro_batch_size: 4
mlflow_experiment_name: /tmp/9b4cab2992cdb07f_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 2
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 100
sequence_len: 2048
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.033916240452578315
wandb_entity: null
wandb_mode: online
wandb_name: 30612698-6e4a-41b9-a416-94b6b03904c8
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 30612698-6e4a-41b9-a416-94b6b03904c8
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 0419afd4-d740-497b-8afb-e4c2d4a21d7c
This model is a fine-tuned version of [unsloth/Llama-3.2-3B](https://huggingface.co/unsloth/Llama-3.2-3B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7884
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 1779
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.4843 | 0.0002 | 1 | 1.7253 |
| 0.9523 | 0.0225 | 100 | 0.8943 |
| 0.9478 | 0.0449 | 200 | 0.8860 |
| 0.9653 | 0.0674 | 300 | 0.8736 |
| 0.7676 | 0.0899 | 400 | 0.8678 |
| 0.9193 | 0.1123 | 500 | 0.8550 |
| 0.9976 | 0.1348 | 600 | 0.8524 |
| 1.0777 | 0.1573 | 700 | 0.8414 |
| 0.892 | 0.1797 | 800 | 0.8353 |
| 1.0606 | 0.2022 | 900 | 0.8260 |
| 0.9507 | 0.2247 | 1000 | 0.8176 |
| 0.8807 | 0.2471 | 1100 | 0.8107 |
| 1.0414 | 0.2696 | 1200 | 0.8034 |
| 0.9991 | 0.2921 | 1300 | 0.7983 |
| 0.8961 | 0.3146 | 1400 | 0.7936 |
| 0.9268 | 0.3370 | 1500 | 0.7911 |
| 0.7766 | 0.3595 | 1600 | 0.7893 |
| 0.9304 | 0.3820 | 1700 | 0.7884 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
nold/Phi-3-mini-4k-instruct-function-calling-GGUF | nold | "2024-05-22T12:44:59Z" | 77 | 6 | null | [
"gguf",
"dataset:mzbac/function-calling-phi-3-format-v1.1",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-05-21T18:16:48Z" | ---
datasets:
- mzbac/function-calling-phi-3-format-v1.1
---
# Model
This model fine-tunes the Phi-3 instruct model for function calling via MLX-LM, using https://huggingface.co/datasets/mzbac/function-calling-phi-3-format-v1.1
# Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
model_id = "mzbac/Phi-3-mini-4k-instruct-function-calling"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype=torch.bfloat16,
device_map="auto",
)
tool = {
"name": "search_web",
"description": "Perform a web search for a given search terms.",
"parameter": {
"type": "object",
"properties": {
"search_terms": {
"type": "array",
"items": {"type": "string"},
"description": "The search queries for which the search is performed.",
"required": True,
}
},
},
}
messages = [
{
"role": "user",
"content": f"You are a helpful assistant with access to the following functions. Use them if required - {str(tool)}",
},
{"role": "user", "content": "Any news in Melbourne today, May 7, 2024?"},
]
input_ids = tokenizer.apply_chat_template(
messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
terminators = [tokenizer.eos_token_id, tokenizer.convert_tokens_to_ids("<|end|>")]
outputs = model.generate(
input_ids,
max_new_tokens=256,
eos_token_id=terminators,
do_sample=True,
temperature=0.1,
)
response = outputs[0]
print(tokenizer.decode(response))
# <s><|user|> You are a helpful assistant with access to the following functions. Use them if required - {'name': 'search_web', 'description': 'Perform a web search for a given search terms.', 'parameter': {'type': 'object', 'properties': {'search_terms': {'type': 'array', 'items': {'type': 'string'}, 'description': 'The search queries for which the search is performed.', 'required': True}}}}<|end|><|assistant|>
# <|user|> Any news in Melbourne today, May 7, 2024?<|end|>
# <|assistant|> <functioncall> {"name": "search_web", "arguments": {"search_terms": ["news", "Melbourne", "May 7, 2024"]}}<|end|>
```
# Training hyperparameters
lora_config.yaml
```yaml
# The path to the local model directory or Hugging Face repo.
model: "microsoft/Phi-3-mini-4k-instruct"
# Whether or not to train (boolean)
train: true
# Directory with {train, valid, test}.jsonl files
data: "data"
# The PRNG seed
seed: 0
# Number of layers to fine-tune
lora_layers: 32
# Minibatch size.
batch_size: 1
# Iterations to train for.
iters: 111000
# Number of validation batches, -1 uses the entire validation set.
val_batches: -1
# Adam learning rate.
learning_rate: 1e-6
# Number of training steps between loss reporting.
steps_per_report: 10
# Number of training steps between validations.
steps_per_eval: 200
# Load path to resume training with the given adapter weights.
# resume_adapter_file: "adapters/adapters.safetensors"
# Save/load path for the trained adapter weights.
adapter_path: "adapters"
# Save the model every N iterations.
save_every: 1000
# Evaluate on the test set after training
test: false
# Number of test set batches, -1 uses the entire test set.
test_batches: 100
# Maximum sequence length.
max_seq_length: 4096
# Use gradient checkpointing to reduce memory use.
grad_checkpoint: false
# LoRA parameters can only be specified in a config file
lora_parameters:
# The layer keys to apply LoRA to.
# These will be applied for the last lora_layers
keys: ['mlp.down_proj','mlp.gate_up_proj','self_attn.qkv_proj','self_attn.o_proj']
rank: 128
alpha: 256
scale: 10.0
dropout: 0.05
```
***
Quantization of Model [mzbac/Phi-3-mini-4k-instruct-function-calling](https://huggingface.co/mzbac/Phi-3-mini-4k-instruct-function-calling).
Created using [llm-quantizer](https://github.com/Nold360/llm-quantizer) Pipeline
|
Chuanming/Alpaca-Gemma-7b-lora | Chuanming | "2024-02-27T13:20:51Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma",
"trl",
"en",
"base_model:unsloth/gemma-7b-bnb-4bit",
"base_model:finetune:unsloth/gemma-7b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-02-27T13:20:38Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- gemma
- trl
base_model: unsloth/gemma-7b-bnb-4bit
---
# Uploaded model
- **Developed by:** Chuanming
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-7b-bnb-4bit
This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
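A minimal loading sketch, assuming this repo stores LoRA adapters on top of the 4-bit base model (requires `peft` and `bitsandbytes`; if the repo contains merged weights, load it with `AutoModelForCausalLM` instead):
```python
from transformers import AutoTokenizer
from peft import AutoPeftModelForCausalLM

# Loads the base model named in the adapter config and applies the LoRA weights.
model = AutoPeftModelForCausalLM.from_pretrained("Chuanming/Alpaca-Gemma-7b-lora")
tokenizer = AutoTokenizer.from_pretrained("Chuanming/Alpaca-Gemma-7b-lora")
```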
|
nicolinesorensen/EmotionsFinetunedAssignment1 | nicolinesorensen | "2024-10-03T10:32:07Z" | 9 | 0 | null | [
"pytorch",
"tensorboard",
"distilbert",
"generated_from_trainer",
"license:apache-2.0",
"region:us"
] | null | "2024-10-02T12:25:59Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: EmotionsFinetunedAssignment1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# EmotionsFinetunedAssignment1
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1436
- Accuracy: 0.933
- F1: 0.9334
## Model description
More information needed
## Intended uses & limitations
More information needed
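A minimal usage sketch, assuming the standard six-class emotion dataset suggested by the metrics (the exact label names are not confirmed by this card):
```python
from transformers import pipeline

classifier = pipeline("text-classification",
                      model="nicolinesorensen/EmotionsFinetunedAssignment1")
print(classifier("I can't stop smiling today!"))
```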
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.7879 | 1.0 | 250 | 0.2582 | 0.9165 | 0.9147 |
| 0.2039 | 2.0 | 500 | 0.1785 | 0.9295 | 0.9296 |
| 0.1366 | 3.0 | 750 | 0.1537 | 0.933 | 0.9335 |
| 0.1073 | 4.0 | 1000 | 0.1481 | 0.932 | 0.9314 |
| 0.0888 | 5.0 | 1250 | 0.1436 | 0.933 | 0.9334 |
### Framework versions
- Transformers 4.16.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
|
torreygooch/PPO-Lunar_Lander | torreygooch | "2023-04-09T12:57:34Z" | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2023-04-09T12:57:11Z" | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 266.77 +/- 22.64
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal sketch (the checkpoint filename below is an assumption; adjust it to the actual file in this repo):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub; the filename is assumed, not verified.
checkpoint = load_from_hub(repo_id="torreygooch/PPO-Lunar_Lander", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
BobMcDear/vit_large_patch32_224_in22k | BobMcDear | "2022-12-23T13:55:58Z" | 0 | 0 | null | [
"region:us"
] | null | "2022-12-13T23:11:13Z" | Please refer to [flaim](https://github.com/bobmcdear/flaim) for sample usage and more information.
|
erfan226/MTD | erfan226 | "2023-12-16T16:21:31Z" | 5 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2023-12-16T16:08:39Z" | ---
widget:
- text: "The largest mountain on Earth, when measured from base to summit, is Mauna Kea in Hawaii. While Mount Everest is the highest mountain above sea level, Mauna Kea extends about 13,796 feet (4,205 meters) above sea level, but its base is deep beneath the ocean surface. When measured from its base on the ocean floor to its summit, Mauna Kea is over 33,500 feet (10,210 meters) tall, making it taller than Mount Everest."
--- |
RichardErkhov/Edentns_-_DataVortexS-10.7B-dpo-v1.5-gguf | RichardErkhov | "2024-07-30T10:49:26Z" | 13 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-07-30T06:55:33Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
DataVortexS-10.7B-dpo-v1.5 - GGUF
- Model creator: https://huggingface.co/Edentns/
- Original model: https://huggingface.co/Edentns/DataVortexS-10.7B-dpo-v1.5/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [DataVortexS-10.7B-dpo-v1.5.Q2_K.gguf](https://huggingface.co/RichardErkhov/Edentns_-_DataVortexS-10.7B-dpo-v1.5-gguf/blob/main/DataVortexS-10.7B-dpo-v1.5.Q2_K.gguf) | Q2_K | 3.73GB |
| [DataVortexS-10.7B-dpo-v1.5.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Edentns_-_DataVortexS-10.7B-dpo-v1.5-gguf/blob/main/DataVortexS-10.7B-dpo-v1.5.IQ3_XS.gguf) | IQ3_XS | 4.14GB |
| [DataVortexS-10.7B-dpo-v1.5.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Edentns_-_DataVortexS-10.7B-dpo-v1.5-gguf/blob/main/DataVortexS-10.7B-dpo-v1.5.IQ3_S.gguf) | IQ3_S | 4.37GB |
| [DataVortexS-10.7B-dpo-v1.5.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Edentns_-_DataVortexS-10.7B-dpo-v1.5-gguf/blob/main/DataVortexS-10.7B-dpo-v1.5.Q3_K_S.gguf) | Q3_K_S | 4.34GB |
| [DataVortexS-10.7B-dpo-v1.5.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Edentns_-_DataVortexS-10.7B-dpo-v1.5-gguf/blob/main/DataVortexS-10.7B-dpo-v1.5.IQ3_M.gguf) | IQ3_M | 4.51GB |
| [DataVortexS-10.7B-dpo-v1.5.Q3_K.gguf](https://huggingface.co/RichardErkhov/Edentns_-_DataVortexS-10.7B-dpo-v1.5-gguf/blob/main/DataVortexS-10.7B-dpo-v1.5.Q3_K.gguf) | Q3_K | 4.84GB |
| [DataVortexS-10.7B-dpo-v1.5.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Edentns_-_DataVortexS-10.7B-dpo-v1.5-gguf/blob/main/DataVortexS-10.7B-dpo-v1.5.Q3_K_M.gguf) | Q3_K_M | 4.84GB |
| [DataVortexS-10.7B-dpo-v1.5.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Edentns_-_DataVortexS-10.7B-dpo-v1.5-gguf/blob/main/DataVortexS-10.7B-dpo-v1.5.Q3_K_L.gguf) | Q3_K_L | 5.26GB |
| [DataVortexS-10.7B-dpo-v1.5.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Edentns_-_DataVortexS-10.7B-dpo-v1.5-gguf/blob/main/DataVortexS-10.7B-dpo-v1.5.IQ4_XS.gguf) | IQ4_XS | 5.43GB |
| [DataVortexS-10.7B-dpo-v1.5.Q4_0.gguf](https://huggingface.co/RichardErkhov/Edentns_-_DataVortexS-10.7B-dpo-v1.5-gguf/blob/main/DataVortexS-10.7B-dpo-v1.5.Q4_0.gguf) | Q4_0 | 5.66GB |
| [DataVortexS-10.7B-dpo-v1.5.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Edentns_-_DataVortexS-10.7B-dpo-v1.5-gguf/blob/main/DataVortexS-10.7B-dpo-v1.5.IQ4_NL.gguf) | IQ4_NL | 5.72GB |
| [DataVortexS-10.7B-dpo-v1.5.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Edentns_-_DataVortexS-10.7B-dpo-v1.5-gguf/blob/main/DataVortexS-10.7B-dpo-v1.5.Q4_K_S.gguf) | Q4_K_S | 5.7GB |
| [DataVortexS-10.7B-dpo-v1.5.Q4_K.gguf](https://huggingface.co/RichardErkhov/Edentns_-_DataVortexS-10.7B-dpo-v1.5-gguf/blob/main/DataVortexS-10.7B-dpo-v1.5.Q4_K.gguf) | Q4_K | 6.02GB |
| [DataVortexS-10.7B-dpo-v1.5.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Edentns_-_DataVortexS-10.7B-dpo-v1.5-gguf/blob/main/DataVortexS-10.7B-dpo-v1.5.Q4_K_M.gguf) | Q4_K_M | 6.02GB |
| [DataVortexS-10.7B-dpo-v1.5.Q4_1.gguf](https://huggingface.co/RichardErkhov/Edentns_-_DataVortexS-10.7B-dpo-v1.5-gguf/blob/main/DataVortexS-10.7B-dpo-v1.5.Q4_1.gguf) | Q4_1 | 6.27GB |
| [DataVortexS-10.7B-dpo-v1.5.Q5_0.gguf](https://huggingface.co/RichardErkhov/Edentns_-_DataVortexS-10.7B-dpo-v1.5-gguf/blob/main/DataVortexS-10.7B-dpo-v1.5.Q5_0.gguf) | Q5_0 | 6.89GB |
| [DataVortexS-10.7B-dpo-v1.5.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Edentns_-_DataVortexS-10.7B-dpo-v1.5-gguf/blob/main/DataVortexS-10.7B-dpo-v1.5.Q5_K_S.gguf) | Q5_K_S | 6.89GB |
| [DataVortexS-10.7B-dpo-v1.5.Q5_K.gguf](https://huggingface.co/RichardErkhov/Edentns_-_DataVortexS-10.7B-dpo-v1.5-gguf/blob/main/DataVortexS-10.7B-dpo-v1.5.Q5_K.gguf) | Q5_K | 7.08GB |
| [DataVortexS-10.7B-dpo-v1.5.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Edentns_-_DataVortexS-10.7B-dpo-v1.5-gguf/blob/main/DataVortexS-10.7B-dpo-v1.5.Q5_K_M.gguf) | Q5_K_M | 7.08GB |
| [DataVortexS-10.7B-dpo-v1.5.Q5_1.gguf](https://huggingface.co/RichardErkhov/Edentns_-_DataVortexS-10.7B-dpo-v1.5-gguf/blob/main/DataVortexS-10.7B-dpo-v1.5.Q5_1.gguf) | Q5_1 | 7.51GB |
| [DataVortexS-10.7B-dpo-v1.5.Q6_K.gguf](https://huggingface.co/RichardErkhov/Edentns_-_DataVortexS-10.7B-dpo-v1.5-gguf/blob/main/DataVortexS-10.7B-dpo-v1.5.Q6_K.gguf) | Q6_K | 8.2GB |
| [DataVortexS-10.7B-dpo-v1.5.Q8_0.gguf](https://huggingface.co/RichardErkhov/Edentns_-_DataVortexS-10.7B-dpo-v1.5-gguf/blob/main/DataVortexS-10.7B-dpo-v1.5.Q8_0.gguf) | Q8_0 | 10.62GB |
Original model description:
---
tags:
- text-generation
license: cc-by-nc-sa-4.0
language:
- ko
base_model: megastudy/M-SOLAR-10.7B-v1.3
pipeline_tag: text-generation
---
# **DataVortexS-10.7B-dpo-v1.5**
<img src="./DataVortex.png" alt="DataVortex" style="height: 8em;">
## Our Team
| Research & Engineering | Product Management |
| :--------------------: | :----------------: |
| Kwangseok Yang | Seunghyun Choi |
| Jeongwon Choi | Hyoseok Choi |
## **Model Details**
### **Base Model**
[megastudy/M-SOLAR-10.7B-v1.3](https://huggingface.co/megastudy/M-SOLAR-10.7B-v1.3)
### **Trained On**
- **OS**: Ubuntu 22.04
- **GPU**: H100 80GB 4ea
- **transformers**: v4.36.2
### **Instruction format**
It follows **ChatML** format.
E.g.
```python
text = """\
<|im_start|>system
당신은 사람들이 정보를 찾을 수 있도록 도와주는 인공지능 비서입니다.<|im_end|>
<|im_start|>user
대한민국의 수도는 어디야?<|im_end|>
<|im_start|>assistant
대한민국의 수도는 서울입니다.<|im_end|>
<|im_start|>user
서울 인구는 총 몇 명이야?<|im_end|>
<|im_start|>assistant
"""
```
## **Model Benchmark**
### **[Ko LM Eval Harness](https://github.com/Beomi/ko-lm-evaluation-harness)**
| Task | 0-shot | 5-shot | 10-shot | 50-shot |
| :--------------- | -----------: | -----------: | -----------: | -----------: |
| kobest_boolq | 0.34687 | 0.930158 | 0.943013 | 0.938029 |
| kobest_copa | 0.693351 | 0.751805 | 0.75772 | 0.771704 |
| kobest_hellaswag | 0.480736 | 0.470852 | 0.474766 | 0.478576 |
| kobest_sentineg | 0.789423 | 0.962208 | 0.967241 | 0.964717 |
| **Average** | **0.577595** | **0.778756** | **0.785685** | **0.788257** |
### **[Ko-LLM-Leaderboard](https://huggingface.co/spaces/upstage/open-ko-llm-leaderboard)**
| Average | Ko-ARC | Ko-HellaSwag | Ko-MMLU | Ko-TruthfulQA | Ko-CommonGen V2 |
| ------: | -----: | -----------: | ------: | ------------: | --------------: |
| 55.32 | 52.13 | 61.27 | 53.99 | 49.71 | 59.5 |
## **Implementation Code**
This model ships with a chat_template for its instruction format.
You can use the code below.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" # the device to load the model onto
model = AutoModelForCausalLM.from_pretrained("Edentns/DataVortexS-10.7B-dpo-v1.5")
tokenizer = AutoTokenizer.from_pretrained("Edentns/DataVortexS-10.7B-dpo-v1.5")
messages = [
{"role": "system", "content": "당신은 사람들이 정보를 찾을 수 있도록 도와주는 인공지능 비서입니다."},
{"role": "user", "content": "대한민국의 수도는 어디야?"},
{"role": "assistant", "content": "대한민국의 수도는 서울입니다."},
{"role": "user", "content": "서울 인구는 총 몇 명이야?"}
]
encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt")
model_inputs = encodeds.to(device)
model.to(device)
generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```
## **License**
The model is licensed under the [cc-by-nc-sa-4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/) license, which allows others to copy, modify, and share the work non-commercially, as long as they give appropriate credit and distribute any derivative works under the same license.
<div align="center">
<a href="https://edentns.com/">
<img src="./Logo.png" alt="Logo" style="height: 3em;">
</a>
</div>
|
PrunaAI/nayohan-llama3-instrucTrans-enko-8b-HQQ-1bit-smashed | PrunaAI | "2024-07-19T14:25:52Z" | 4 | 0 | transformers | [
"transformers",
"llama",
"text-generation",
"pruna-ai",
"conversational",
"base_model:nayohan/llama3-instrucTrans-enko-8b",
"base_model:finetune:nayohan/llama3-instrucTrans-enko-8b",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-07-19T14:24:20Z" | ---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
base_model: nayohan/llama3-instrucTrans-enko-8b
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/rskEr4BZJx)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with hqq.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on HARDWARE_NAME with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is compared directly to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the model directly under your use-case conditions to find out whether the smashed model benefits you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check that the requirements of the original repo nayohan/llama3-instrucTrans-enko-8b are installed. In particular, check the python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install hqq
```
2. Load & run the model.
```python
from transformers import AutoTokenizer
from hqq.engine.hf import HQQModelForCausalLM
from hqq.models.hf.base import AutoHQQHFModel

# Try the HQQ engine loader first; fall back to the generic HQQ HF loader.
try:
    model = HQQModelForCausalLM.from_quantized("PrunaAI/nayohan-llama3-instrucTrans-enko-8b-HQQ-1bit-smashed", device_map='auto')
except Exception:
    model = AutoHQQHFModel.from_quantized("PrunaAI/nayohan-llama3-instrucTrans-enko-8b-HQQ-1bit-smashed")

tokenizer = AutoTokenizer.from_pretrained("nayohan/llama3-instrucTrans-enko-8b")

# Tokenize a test prompt and generate up to 216 new tokens.
input_ids = tokenizer("What is the color of prunes?", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
print(tokenizer.decode(outputs[0]))
```
## Configurations
The configuration info is in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model, nayohan/llama3-instrucTrans-enko-8b, which provided the base model, before using this model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). |
RichardErkhov/mlabonne_-_NeuralHermes-2.5-Mistral-7B-gguf | RichardErkhov | "2024-06-30T14:40:53Z" | 7 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-06-30T11:47:49Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
NeuralHermes-2.5-Mistral-7B - GGUF
- Model creator: https://huggingface.co/mlabonne/
- Original model: https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [NeuralHermes-2.5-Mistral-7B.Q2_K.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_NeuralHermes-2.5-Mistral-7B-gguf/blob/main/NeuralHermes-2.5-Mistral-7B.Q2_K.gguf) | Q2_K | 2.53GB |
| [NeuralHermes-2.5-Mistral-7B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_NeuralHermes-2.5-Mistral-7B-gguf/blob/main/NeuralHermes-2.5-Mistral-7B.IQ3_XS.gguf) | IQ3_XS | 2.81GB |
| [NeuralHermes-2.5-Mistral-7B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_NeuralHermes-2.5-Mistral-7B-gguf/blob/main/NeuralHermes-2.5-Mistral-7B.IQ3_S.gguf) | IQ3_S | 2.96GB |
| [NeuralHermes-2.5-Mistral-7B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_NeuralHermes-2.5-Mistral-7B-gguf/blob/main/NeuralHermes-2.5-Mistral-7B.Q3_K_S.gguf) | Q3_K_S | 2.95GB |
| [NeuralHermes-2.5-Mistral-7B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_NeuralHermes-2.5-Mistral-7B-gguf/blob/main/NeuralHermes-2.5-Mistral-7B.IQ3_M.gguf) | IQ3_M | 3.06GB |
| [NeuralHermes-2.5-Mistral-7B.Q3_K.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_NeuralHermes-2.5-Mistral-7B-gguf/blob/main/NeuralHermes-2.5-Mistral-7B.Q3_K.gguf) | Q3_K | 3.28GB |
| [NeuralHermes-2.5-Mistral-7B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_NeuralHermes-2.5-Mistral-7B-gguf/blob/main/NeuralHermes-2.5-Mistral-7B.Q3_K_M.gguf) | Q3_K_M | 3.28GB |
| [NeuralHermes-2.5-Mistral-7B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_NeuralHermes-2.5-Mistral-7B-gguf/blob/main/NeuralHermes-2.5-Mistral-7B.Q3_K_L.gguf) | Q3_K_L | 3.56GB |
| [NeuralHermes-2.5-Mistral-7B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_NeuralHermes-2.5-Mistral-7B-gguf/blob/main/NeuralHermes-2.5-Mistral-7B.IQ4_XS.gguf) | IQ4_XS | 3.67GB |
| [NeuralHermes-2.5-Mistral-7B.Q4_0.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_NeuralHermes-2.5-Mistral-7B-gguf/blob/main/NeuralHermes-2.5-Mistral-7B.Q4_0.gguf) | Q4_0 | 3.83GB |
| [NeuralHermes-2.5-Mistral-7B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_NeuralHermes-2.5-Mistral-7B-gguf/blob/main/NeuralHermes-2.5-Mistral-7B.IQ4_NL.gguf) | IQ4_NL | 3.87GB |
| [NeuralHermes-2.5-Mistral-7B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_NeuralHermes-2.5-Mistral-7B-gguf/blob/main/NeuralHermes-2.5-Mistral-7B.Q4_K_S.gguf) | Q4_K_S | 3.86GB |
| [NeuralHermes-2.5-Mistral-7B.Q4_K.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_NeuralHermes-2.5-Mistral-7B-gguf/blob/main/NeuralHermes-2.5-Mistral-7B.Q4_K.gguf) | Q4_K | 4.07GB |
| [NeuralHermes-2.5-Mistral-7B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_NeuralHermes-2.5-Mistral-7B-gguf/blob/main/NeuralHermes-2.5-Mistral-7B.Q4_K_M.gguf) | Q4_K_M | 4.07GB |
| [NeuralHermes-2.5-Mistral-7B.Q4_1.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_NeuralHermes-2.5-Mistral-7B-gguf/blob/main/NeuralHermes-2.5-Mistral-7B.Q4_1.gguf) | Q4_1 | 4.24GB |
| [NeuralHermes-2.5-Mistral-7B.Q5_0.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_NeuralHermes-2.5-Mistral-7B-gguf/blob/main/NeuralHermes-2.5-Mistral-7B.Q5_0.gguf) | Q5_0 | 4.65GB |
| [NeuralHermes-2.5-Mistral-7B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_NeuralHermes-2.5-Mistral-7B-gguf/blob/main/NeuralHermes-2.5-Mistral-7B.Q5_K_S.gguf) | Q5_K_S | 4.65GB |
| [NeuralHermes-2.5-Mistral-7B.Q5_K.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_NeuralHermes-2.5-Mistral-7B-gguf/blob/main/NeuralHermes-2.5-Mistral-7B.Q5_K.gguf) | Q5_K | 4.78GB |
| [NeuralHermes-2.5-Mistral-7B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_NeuralHermes-2.5-Mistral-7B-gguf/blob/main/NeuralHermes-2.5-Mistral-7B.Q5_K_M.gguf) | Q5_K_M | 4.78GB |
| [NeuralHermes-2.5-Mistral-7B.Q5_1.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_NeuralHermes-2.5-Mistral-7B-gguf/blob/main/NeuralHermes-2.5-Mistral-7B.Q5_1.gguf) | Q5_1 | 5.07GB |
| [NeuralHermes-2.5-Mistral-7B.Q6_K.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_NeuralHermes-2.5-Mistral-7B-gguf/blob/main/NeuralHermes-2.5-Mistral-7B.Q6_K.gguf) | Q6_K | 5.53GB |
| [NeuralHermes-2.5-Mistral-7B.Q8_0.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_NeuralHermes-2.5-Mistral-7B-gguf/blob/main/NeuralHermes-2.5-Mistral-7B.Q8_0.gguf) | Q8_0 | 7.17GB |
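To try one of these files locally, a minimal sketch using `llama-cpp-python`; the quant choice, local path, context size, and sampling settings below are illustrative assumptions:
```python
from llama_cpp import Llama

# Load whichever quant you downloaded; Q4_K_M is a common quality/size trade-off.
llm = Llama(model_path="NeuralHermes-2.5-Mistral-7B.Q4_K_M.gguf", n_ctx=2048)

# NeuralHermes was trained with the ChatML template.
prompt = (
    "<|im_start|>system\nYou are a helpful assistant chatbot.<|im_end|>\n"
    "<|im_start|>user\nWhat is a Large Language Model?<|im_end|>\n"
    "<|im_start|>assistant\n"
)
out = llm(prompt, max_tokens=256, stop=["<|im_end|>"])
print(out["choices"][0]["text"])
```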
Original model description:
---
language:
- en
license: apache-2.0
tags:
- mistral
- instruct
- finetune
- chatml
- gpt4
- synthetic data
- distillation
- dpo
- rlhf
datasets:
- mlabonne/chatml_dpo_pairs
base_model: teknium/OpenHermes-2.5-Mistral-7B
model-index:
- name: NeuralHermes-2.5-Mistral-7B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 66.55
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/NeuralHermes-2.5-Mistral-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 84.9
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/NeuralHermes-2.5-Mistral-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 63.32
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/NeuralHermes-2.5-Mistral-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 54.93
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/NeuralHermes-2.5-Mistral-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 78.3
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/NeuralHermes-2.5-Mistral-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 61.33
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/NeuralHermes-2.5-Mistral-7B
name: Open LLM Leaderboard
---
<center><img src="https://i.imgur.com/qIhaFNM.png"></center>
# NeuralHermes 2.5 - Mistral 7B
NeuralHermes is based on the [teknium/OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B) model that has been further fine-tuned with Direct Preference Optimization (DPO) using the [mlabonne/chatml_dpo_pairs](https://huggingface.co/datasets/mlabonne/chatml_dpo_pairs) dataset. It surpasses the original model on most benchmarks (see results).
It is directly inspired by the RLHF process described by [Intel/neural-chat-7b-v3-1](https://huggingface.co/Intel/neural-chat-7b-v3-1)'s authors to improve performance. I used the same dataset and reformatted it to apply the ChatML template.
The code to train this model is available on [Google Colab](https://colab.research.google.com/drive/15iFBr1xWgztXvhrj5I9fBv20c7CFOPBE?usp=sharing) and [GitHub](https://github.com/mlabonne/llm-course/tree/main). It required an A100 GPU for about an hour.
## Quantized models
* **GGUF**: https://huggingface.co/TheBloke/NeuralHermes-2.5-Mistral-7B-GGUF
* **AWQ**: https://huggingface.co/TheBloke/NeuralHermes-2.5-Mistral-7B-AWQ
* **GPTQ**: https://huggingface.co/TheBloke/NeuralHermes-2.5-Mistral-7B-GPTQ
* **EXL2**:
* 3.0bpw: https://huggingface.co/LoneStriker/NeuralHermes-2.5-Mistral-7B-3.0bpw-h6-exl2
* 4.0bpw: https://huggingface.co/LoneStriker/NeuralHermes-2.5-Mistral-7B-4.0bpw-h6-exl2
* 5.0bpw: https://huggingface.co/LoneStriker/NeuralHermes-2.5-Mistral-7B-5.0bpw-h6-exl2
* 6.0bpw: https://huggingface.co/LoneStriker/NeuralHermes-2.5-Mistral-7B-6.0bpw-h6-exl2
* 8.0bpw: https://huggingface.co/LoneStriker/NeuralHermes-2.5-Mistral-7B-8.0bpw-h8-exl2
## Results
**Update:** NeuralHermes-2.5 became the best Hermes-based model on the Open LLM leaderboard and one of the very best 7b models. 🎉

Teknium (author of OpenHermes-2.5-Mistral-7B) benchmarked the model ([see his tweet](https://twitter.com/Teknium1/status/1729955709377503660)).
Results are improved on every benchmark: **AGIEval** (from 43.07% to 43.62%), **GPT4All** (from 73.12% to 73.25%), and **TruthfulQA**.
### AGIEval

### GPT4All

### TruthfulQA

You can check the Weights & Biases project [here](https://wandb.ai/mlabonne/DPO/runs/axe71gr0?nw=nwusermlabonne).
## Usage
You can run this model using [LM Studio](https://lmstudio.ai/) or any other frontend.
You can also run this model using the following code:
```python
import transformers
from transformers import AutoTokenizer

# The training notebook defines new_model; here it is this model's hub id.
new_model = "mlabonne/NeuralHermes-2.5-Mistral-7B"
# Format prompt
message = [
{"role": "system", "content": "You are a helpful assistant chatbot."},
{"role": "user", "content": "What is a Large Language Model?"}
]
tokenizer = AutoTokenizer.from_pretrained(new_model)
prompt = tokenizer.apply_chat_template(message, add_generation_prompt=True, tokenize=False)
# Create pipeline
pipeline = transformers.pipeline(
"text-generation",
model=new_model,
tokenizer=tokenizer
)
# Generate text
sequences = pipeline(
prompt,
do_sample=True,
temperature=0.7,
top_p=0.9,
num_return_sequences=1,
max_length=200,
)
print(sequences[0]['generated_text'])
```
## Training hyperparameters
**LoRA**:
* r=16
* lora_alpha=16
* lora_dropout=0.05
* bias="none"
* task_type="CAUSAL_LM"
* target_modules=['k_proj', 'gate_proj', 'v_proj', 'up_proj', 'q_proj', 'o_proj', 'down_proj']
**Training arguments**:
* per_device_train_batch_size=4
* gradient_accumulation_steps=4
* gradient_checkpointing=True
* learning_rate=5e-5
* lr_scheduler_type="cosine"
* max_steps=200
* optim="paged_adamw_32bit"
* warmup_steps=100
**DPOTrainer**:
* beta=0.1
* max_prompt_length=1024
* max_length=1536
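For orientation, a minimal sketch of how the settings above might be wired together with `peft` and an older `trl` `DPOTrainer` API (newer trl versions move these into `DPOConfig`). `model`, `ref_model`, `tokenizer`, and `dataset` are assumed to be prepared beforehand as in the linked Colab, and the output directory is an assumption:
```python
from peft import LoraConfig
from transformers import TrainingArguments
from trl import DPOTrainer

peft_config = LoraConfig(
    r=16, lora_alpha=16, lora_dropout=0.05, bias="none",
    task_type="CAUSAL_LM",
    target_modules=['k_proj', 'gate_proj', 'v_proj', 'up_proj',
                    'q_proj', 'o_proj', 'down_proj'],
)

training_args = TrainingArguments(
    per_device_train_batch_size=4,
    gradient_accumulation_steps=4,
    gradient_checkpointing=True,
    learning_rate=5e-5,
    lr_scheduler_type="cosine",
    max_steps=200,
    optim="paged_adamw_32bit",
    warmup_steps=100,
    output_dir="./neuralhermes-dpo",  # assumption
)

trainer = DPOTrainer(
    model,                    # SFT model to optimize (prepared beforehand)
    ref_model,                # frozen reference copy (prepared beforehand)
    args=training_args,
    beta=0.1,
    max_prompt_length=1024,
    max_length=1536,
    train_dataset=dataset,    # mlabonne/chatml_dpo_pairs, ChatML-formatted
    tokenizer=tokenizer,
    peft_config=peft_config,
)
trainer.train()
```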
|
LHRuig/dundermik | LHRuig | "2025-01-16T05:44:22Z" | 5 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] | text-to-image | "2025-01-16T05:44:00Z" | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: suit
output:
url: images/suit.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: dundermik
---
# dundermik
<Gallery />
## Model description
dundermik lora
## Trigger words
You should use `dundermik` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/LHRuig/dundermik/tree/main) them in the Files & versions tab.
|
sb3/sac-Pendulum-v1 | sb3 | "2022-10-11T15:09:07Z" | 2,831 | 0 | stable-baselines3 | [
"stable-baselines3",
"Pendulum-v1",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2022-05-22T19:55:26Z" | ---
library_name: stable-baselines3
tags:
- Pendulum-v1
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: SAC
results:
- metrics:
- type: mean_reward
value: -176.33 +/- 101.55
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pendulum-v1
type: Pendulum-v1
---
# **SAC** Agent playing **Pendulum-v1**
This is a trained model of a **SAC** agent playing **Pendulum-v1**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo sac --env Pendulum-v1 -orga sb3 -f logs/
python enjoy.py --algo sac --env Pendulum-v1 -f logs/
```
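Alternatively, to load the agent directly in Python without the Zoo scripts — a minimal sketch where the `.zip` filename follows the Zoo's usual `<algo>-<env>.zip` convention and is an assumption:
```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import SAC
from stable_baselines3.common.evaluation import evaluate_policy

# Download the trained checkpoint from the Hub and load it.
checkpoint = load_from_hub(repo_id="sb3/sac-Pendulum-v1", filename="sac-Pendulum-v1.zip")
model = SAC.load(checkpoint)

env = gym.make("Pendulum-v1")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```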
## Training (with the RL Zoo)
```
python train.py --algo sac --env Pendulum-v1 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo sac --env Pendulum-v1 -f logs/ -orga sb3
```
## Hyperparameters
```python
OrderedDict([('learning_rate', 0.001),
('n_timesteps', 20000),
('policy', 'MlpPolicy'),
('normalize', False)])
```
|
YuTingHu/results-mt5-finetuned-squad-accelerate_M3 | YuTingHu | "2023-06-05T13:58:14Z" | 105 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2023-06-05T13:17:17Z" | ```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# M3 model: based on fabiochiu/t5-small-medium-title-generation
model = AutoModelForSeq2SeqLM.from_pretrained("fabiochiu/t5-small-medium-title-generation")
tokenizer = AutoTokenizer.from_pretrained("fabiochiu/t5-small-medium-title-generation")
``` |
Lekhansh/Llama-3.1-8B-Instruct-mixed-instructions-merged-16bit | Lekhansh | "2024-11-04T08:42:42Z" | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-11-04T08:34:02Z" | ---
base_model: unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Lekhansh
- **License:** apache-2.0
- **Finetuned from model :** unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
CatBarks/GPT2ES_ClassWeighted100_tokenizer | CatBarks | "2024-02-16T13:22:34Z" | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-02-16T13:22:32Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ipipan/nlpre_spacy_nkjp_fasttext_nkjp-by-type | ipipan | "2023-10-03T22:39:03Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-09-29T17:04:09Z" | # Model description
- Morphosyntactic analyzer: Spacy
- Tagset: NKJP
- Embedding vectors: Fasttext
- Dataset: NLPrePL-NKJP-fair-by-type (https://huggingface.co/datasets/ipipan/nlprepl)
# How to use
## Clone
```
git clone [email protected]:ipipan/nlpre_spacy_nkjp_fasttext_nkjp-by-type
```
## Load model
```
import spacy
model_name = './nlpre_spacy_nkjp_fasttext_nkjp-by-type'
model_upos = spacy.load(model_name + '/upos')
model_xpos = spacy.load(model_name + '/xpos+lemma')
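
# A minimal usage sketch (the sentence below is illustrative): run text
# through a loaded pipeline and read off spaCy's standard token attributes.
doc = model_xpos('Wczoraj kupiłem nową książkę.')
for token in doc:
    print(token.text, token.tag_, token.lemma_)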
``` |
FounderOfHuggingface/gpt2_lora_r16_dbpedia_14_t75_e20_member_shadow37 | FounderOfHuggingface | "2023-12-06T11:45:30Z" | 0 | 0 | peft | [
"peft",
"arxiv:1910.09700",
"base_model:openai-community/gpt2",
"base_model:adapter:openai-community/gpt2",
"region:us"
] | null | "2023-12-06T11:45:28Z" | ---
library_name: peft
base_model: gpt2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.6.2
|
huggingtweets/furkelpu | huggingtweets | "2023-04-14T02:56:13Z" | 105 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-04-14T02:56:05Z" | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1641889688129622025/kIk_-vbd_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Hau</div>
<div style="text-align: center; font-size: 14px;">@furkelpu</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Hau.
| Data | Hau |
| --- | --- |
| Tweets downloaded | 3189 |
| Retweets | 641 |
| Short tweets | 176 |
| Tweets kept | 2372 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/hp3h3wad/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @furkelpu's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/96k5ibgh) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/96k5ibgh/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/furkelpu')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
globalyako/swallowv2-8b-ft-jp-r64_alpha64-sw_te_sy_randomshuffle_s1 | globalyako | "2025-02-17T10:55:06Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:tokyotech-llm/Llama-3.1-Swallow-8B-v0.2",
"base_model:finetune:tokyotech-llm/Llama-3.1-Swallow-8B-v0.2",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2025-02-17T10:54:34Z" | ---
base_model: tokyotech-llm/Llama-3.1-Swallow-8B-v0.2
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** globalyako
- **License:** apache-2.0
- **Finetuned from model :** tokyotech-llm/Llama-3.1-Swallow-8B-v0.2
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
sail-rvc/Liquid | sail-rvc | "2023-07-14T07:26:30Z" | 2 | 0 | transformers | [
"transformers",
"rvc",
"sail-rvc",
"audio-to-audio",
"endpoints_compatible",
"region:us"
] | audio-to-audio | "2023-07-14T07:26:19Z" |
---
pipeline_tag: audio-to-audio
tags:
- rvc
- sail-rvc
---
# Liquid
## RVC Model

This model repo was automatically generated.
Date: 2023-07-14 07:26:30
Bot Name: juuxnscrap
Model Type: RVC
Source: https://huggingface.co/juuxn/RVCModels/
Reason: Converting into loadable format for https://github.com/chavinlo/rvc-runpod
|
YakovElm/IntelDAOS20Classic_64 | YakovElm | "2023-05-27T06:15:50Z" | 61 | 0 | transformers | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-05-27T06:15:16Z" | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: IntelDAOS20Classic_64
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# IntelDAOS20Classic_64
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1354
- Train Accuracy: 0.9610
- Validation Loss: 0.3272
- Validation Accuracy: 0.9099
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.2413 | 0.9400 | 0.3377 | 0.9099 | 0 |
| 0.1555 | 0.9610 | 0.3160 | 0.9099 | 1 |
| 0.1354 | 0.9610 | 0.3272 | 0.9099 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
gaebalsite/Korean-llama-3.1-8B | gaebalsite | "2024-11-20T18:05:43Z" | 37 | 0 | null | [
"safetensors",
"gguf",
"llama",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-11-20T16:29:11Z" | ---
license: apache-2.0
---
|
sai2002/cricket | sai2002 | "2023-03-29T09:52:51Z" | 0 | 0 | null | [
"arxiv:1910.09700",
"license:apache-2.0",
"region:us"
] | null | "2023-03-29T09:46:48Z" | ---
license: apache-2.0
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://www.espncricinfo.com/?ex_cid=google_cpc:display:pmax:ipl2023&gclid=Cj0KCQjww4-hBhCtARIsAC9gR3ZbaUgqfZbFr1p-UkRP0_pCjo0JUQBIg)
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
anas-awadalla/bert-base-uncased-few-shot-k-1024-finetuned-squad-seed-0 | anas-awadalla | "2022-02-25T05:30:55Z" | 4 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | "2022-03-02T23:29:05Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-base-uncased-few-shot-k-1024-finetuned-squad-seed-0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-few-shot-k-1024-finetuned-squad-seed-0
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
McCheng/Taxi-v3 | McCheng | "2023-03-13T16:06:30Z" | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2023-03-13T16:06:20Z" | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
import gym

# load_from_hub is the helper defined in the Hugging Face Deep RL course utilities.
model = load_from_hub(repo_id="McCheng/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
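Continuing from the snippet above, a short sketch of acting greedily with the downloaded Q-table. The `"qtable"` key and the version-dependent reset/step handling are assumptions about the course's pickle format and your gym version:
```python
import numpy as np

state = env.reset()
if isinstance(state, tuple):          # newer gym returns (obs, info)
    state = state[0]

done = False
while not done:
    action = int(np.argmax(model["qtable"][state]))   # greedy action
    step_out = env.step(action)
    if len(step_out) == 5:            # newer gym: obs, reward, terminated, truncated, info
        state, reward, terminated, truncated, _ = step_out
        done = terminated or truncated
    else:                             # older gym: obs, reward, done, info
        state, reward, done, _ = step_out
```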
|