modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card
---|---|---|---|---|---|---|---|---|---|
mikegarts/chukotkadb | mikegarts | "2023-01-29T11:38:38Z" | 5 | 0 | diffusers | [
"diffusers",
"tensorboard",
"text-to-image",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2023-01-29T11:37:06Z" | ---
license: creativeml-openrail-m
tags:
- text-to-image
widget:
- text: chukotkadb
---
### chukotkadb Dreambooth model trained by mikegarts with [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) with the v1-5 base model
You can run your new concept with `diffusers` using the [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts!
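A minimal `diffusers` sketch (standard `StableDiffusionPipeline` loading is an assumption; the card itself points to the Colab notebook):
```python
import torch
from diffusers import StableDiffusionPipeline

# Load the Dreambooth model from the Hub (fp16 and CUDA are illustrative choices).
pipe = StableDiffusionPipeline.from_pretrained(
    "mikegarts/chukotkadb", torch_dtype=torch.float16
).to("cuda")

# The concept token must appear in the prompt.
image = pipe("a photo of chukotkadb").images[0]
image.save("chukotkadb.png")
```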
Sample pictures of:
`chukotkadb` (use that in your prompt)

|
dada22231/d7a7439d-01a8-4e9d-834e-1bb33ae81bb6 | dada22231 | "2024-12-13T00:02:19Z" | 5 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:NousResearch/Nous-Capybara-7B-V1",
"base_model:adapter:NousResearch/Nous-Capybara-7B-V1",
"license:mit",
"region:us"
] | null | "2024-12-12T23:55:05Z" | ---
library_name: peft
license: mit
base_model: NousResearch/Nous-Capybara-7B-V1
tags:
- axolotl
- generated_from_trainer
model-index:
- name: d7a7439d-01a8-4e9d-834e-1bb33ae81bb6
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Nous-Capybara-7B-V1
bf16: auto
chat_template: llama3
cosine_min_lr_ratio: 0.1
data_processes: 4
dataset_prepared_path: null
datasets:
- data_files:
  - fe45f137661b4726_train_data.json
  ds_type: json
  format: custom
  num_proc: 4
  path: /workspace/input_data/fe45f137661b4726_train_data.json
  streaming: true
  type:
    field_input: inputs
    field_instruction: instruction
    field_output: outputs
    format: '{instruction} {input}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
debug: null
deepspeed: null
device_map: balanced
do_eval: true
early_stopping_patience: 1
eval_batch_size: 1
eval_sample_packing: false
eval_steps: 25
evaluation_strategy: steps
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 16
gradient_checkpointing: true
group_by_length: true
hub_model_id: dada22231/d7a7439d-01a8-4e9d-834e-1bb33ae81bb6
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lora_target_modules:
- q_proj
- v_proj
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
  0: 75GB
  1: 75GB
  2: 75GB
  3: 75GB
max_steps: 50
micro_batch_size: 2
mixed_precision: bf16
mlflow_experiment_name: /tmp/fe45f137661b4726_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
  adam_beta1: 0.9
  adam_beta2: 0.95
  adam_epsilon: 1e-5
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 25
save_strategy: steps
sequence_len: 2048
strict: false
tf32: false
tokenizer_type: AutoTokenizer
torch_compile: false
train_on_inputs: false
trust_remote_code: true
val_set_size: 50
wandb_entity: null
wandb_mode: online
wandb_name: d7a7439d-01a8-4e9d-834e-1bb33ae81bb6
wandb_project: Public_TuningSN
wandb_runid: d7a7439d-01a8-4e9d-834e-1bb33ae81bb6
warmup_ratio: 0.04
weight_decay: 0.01
xformers_attention: null
```
</details><br>
# d7a7439d-01a8-4e9d-834e-1bb33ae81bb6
This model is a fine-tuned version of [NousResearch/Nous-Capybara-7B-V1](https://huggingface.co/NousResearch/Nous-Capybara-7B-V1) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: nan
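Since this repository contains only a LoRA adapter, inference requires attaching it to the base model. A minimal sketch with `peft` and `transformers` (dtype and device handling are omitted for brevity):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "NousResearch/Nous-Capybara-7B-V1"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id)

# Attach the LoRA adapter from this repository to the base model.
model = PeftModel.from_pretrained(base_model, "dada22231/d7a7439d-01a8-4e9d-834e-1bb33ae81bb6")
model.eval()
```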
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- total_eval_batch_size: 4
- optimizer: AdamW (torch) with betas=(0.9, 0.999), epsilon=1e-08, and optimizer_args: adam_beta1=0.9, adam_beta2=0.95, adam_epsilon=1e-5
- lr_scheduler_type: cosine
- training_steps: 13
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.2319 | 1 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
rootacess/q-FrozenLake-v1-4x4-noSlippery | rootacess | "2023-01-18T06:47:31Z" | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2023-01-18T06:47:27Z" | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: FrozenLake-v1-4x4-no_slippery
      type: FrozenLake-v1-4x4-no_slippery
    metrics:
    - type: mean_reward
      value: 1.00 +/- 0.00
      name: mean_reward
      verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gymnasium as gym

# `load_from_hub` is the course helper (sketch below) that unpickles the Q-table.
model = load_from_hub(repo_id="rootacess/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc.)
env = gym.make(model["env_id"])
```
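The card does not define `load_from_hub`; a minimal sketch consistent with the Deep RL Course helper (pickle-based storage is an assumption from the `q-learning.pkl` filename):
```python
import pickle

from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str):
    # Download the pickled file from the Hub and unpickle it.
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)
```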
|
Pearson/dqn-SpaceInvadersNoFrameskip-v4 | Pearson | "2023-02-07T03:23:43Z" | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2023-02-07T03:22:57Z" | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: SpaceInvadersNoFrameskip-v4
      type: SpaceInvadersNoFrameskip-v4
    metrics:
    - type: mean_reward
      value: 619.50 +/- 87.25
      name: mean_reward
      verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```bash
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Pearson -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```bash
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Pearson -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```bash
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Pearson
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
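Outside the RL Zoo CLI, the checkpoint can also be loaded from Python with `huggingface_sb3`. A sketch (the exact `.zip` filename inside the repo is an assumption; check the Files tab):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN

# Download the checkpoint from the Hub (filename is an assumption).
checkpoint = load_from_hub(
    repo_id="Pearson/dqn-SpaceInvadersNoFrameskip-v4",
    filename="dqn-SpaceInvadersNoFrameskip-v4.zip",
)
model = DQN.load(checkpoint)
```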
|
VERSIL91/9bcd5e55-7bae-4ebc-8521-e6234aa9c82a | VERSIL91 | "2025-01-06T05:05:52Z" | 5 | 0 | peft | [
"peft",
"safetensors",
"falcon",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:fxmarty/really-tiny-falcon-testing",
"base_model:adapter:fxmarty/really-tiny-falcon-testing",
"license:mit",
"region:us"
] | null | "2025-01-06T05:04:56Z" | ---
library_name: peft
license: mit
base_model: fxmarty/really-tiny-falcon-testing
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 9bcd5e55-7bae-4ebc-8521-e6234aa9c82a
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
accelerate_config:
  dynamo_backend: inductor
  mixed_precision: bf16
  num_machines: 1
  num_processes: auto
  use_cpu: false
adapter: lora
base_model: fxmarty/really-tiny-falcon-testing
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
  - 636ea2d6eb2f714b_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/636ea2d6eb2f714b_train_data.json
  type:
    field_instruction: instruction
    field_output: output
    format: '{instruction}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
debug: null
deepspeed: null
device_map: auto
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 16
gradient_checkpointing: true
group_by_length: false
hub_model_id: VERSIL91/9bcd5e55-7bae-4ebc-8521-e6234aa9c82a
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lora_target_modules:
- q_proj
- v_proj
lr_scheduler: cosine
max_memory:
  0: 70GiB
max_steps: 5
micro_batch_size: 2
mlflow_experiment_name: /tmp/636ea2d6eb2f714b_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
quantization_config:
  llm_int8_enable_fp32_cpu_offload: true
  load_in_8bit: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
torch_compile: true
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 9bcd5e55-7bae-4ebc-8521-e6234aa9c82a
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 9bcd5e55-7bae-4ebc-8521-e6234aa9c82a
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 9bcd5e55-7bae-4ebc-8521-e6234aa9c82a
This model is a fine-tuned version of [fxmarty/really-tiny-falcon-testing](https://huggingface.co/fxmarty/really-tiny-falcon-testing) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 11.0647
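A sketch of loading this adapter and optionally merging the LoRA weights into the base model (`trust_remote_code=True` mirrors the config above; everything else is standard `peft` usage):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained(
    "fxmarty/really-tiny-falcon-testing", trust_remote_code=True
)
model = PeftModel.from_pretrained(base, "VERSIL91/9bcd5e55-7bae-4ebc-8521-e6234aa9c82a")

# Optionally fold the LoRA weights into the base model for adapter-free inference.
model = model.merge_and_unload()
```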
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 32
- optimizer: AdamW (bitsandbytes 8-bit) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 177.0 | 0.0011 | 1 | 11.0648 |
| 177.0 | 0.0023 | 2 | 11.0648 |
| 177.125 | 0.0045 | 4 | 11.0647 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
lct-rug-2022/edos-2023-baseline-albert-base-v2-label_vector | lct-rug-2022 | "2022-11-29T22:57:00Z" | 113 | 0 | transformers | [
"transformers",
"pytorch",
"albert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2022-11-28T21:58:16Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: edos-2023-baseline-albert-base-v2-label_vector
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# edos-2023-baseline-albert-base-v2-label_vector
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8762
- F1: 0.1946
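A minimal inference sketch with the `transformers` pipeline (the input sentence is illustrative):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="lct-rug-2022/edos-2023-baseline-albert-base-v2-label_vector",
)
print(classifier("An example sentence to classify."))
```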
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 5
- num_epochs: 12
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 2.1002 | 1.18 | 100 | 1.9982 | 0.1023 |
| 1.7832 | 2.35 | 200 | 1.8435 | 0.1310 |
| 1.57 | 3.53 | 300 | 1.8097 | 0.1552 |
| 1.3719 | 4.71 | 400 | 1.8216 | 0.1631 |
| 1.2072 | 5.88 | 500 | 1.8138 | 0.1811 |
| 1.0186 | 7.06 | 600 | 1.8762 | 0.1946 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
huggingtweets/sciencebits | huggingtweets | "2021-10-14T08:42:39Z" | 4 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2022-03-02T23:29:05Z" | ---
language: en
thumbnail: https://www.huggingtweets.com/sciencebits/1634200955730/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1340996475472494593/yqCQjZ06_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Science Bits</div>
<div style="text-align: center; font-size: 14px;">@sciencebits</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Science Bits.
| Data | Science Bits |
| --- | --- |
| Tweets downloaded | 2741 |
| Retweets | 759 |
| Short tweets | 47 |
| Tweets kept | 1935 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/22jxh8wi/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @sciencebits's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/h0qt4tsw) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/h0qt4tsw/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation', model='huggingtweets/sciencebits')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
jlse/Reinforce-CartPole-v1 | jlse | "2025-02-20T21:28:20Z" | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | "2025-02-20T21:28:11Z" | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: CartPole-v1
      type: CartPole-v1
    metrics:
    - type: mean_reward
      value: 500.00 +/- 0.00
      name: mean_reward
      verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
chnaaam/mikolm | chnaaam | "2025-01-07T06:22:10Z" | 62 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-01-04T06:23:16Z" | ---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
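The card leaves this section as a placeholder. A minimal sketch under the assumption that standard `transformers` causal-LM loading applies (the tags indicate a llama-architecture, conversational model):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "chnaaam/mikolm"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Hello", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0]))
```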
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
amirhosseinbarari/roberta-clef | amirhosseinbarari | "2024-11-27T14:15:10Z" | 106 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-11-18T16:10:16Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
This is a roberta-base model fine-tuned on the CLEF dataset.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
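The card leaves this section as a placeholder. A minimal sketch assuming standard sequence-classification loading (the number and meaning of labels are not documented):
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "amirhosseinbarari/roberta-clef"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("An example claim to classify.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1))
```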
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
daniel40/82662f4c-eda8-4700-865c-296eae131178 | daniel40 | "2025-02-13T14:59:01Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-Math-7B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-Math-7B-Instruct",
"license:apache-2.0",
"region:us"
] | null | "2025-02-13T11:25:29Z" | ---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2.5-Math-7B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 82662f4c-eda8-4700-865c-296eae131178
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 82662f4c-eda8-4700-865c-296eae131178
This model is a fine-tuned version of [Qwen/Qwen2.5-Math-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Math-7B-Instruct) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6961
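A loading sketch using `peft`'s `AutoPeftModelForCausalLM`, which resolves the base model from the adapter config (dtype and device handling are omitted):
```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_id = "daniel40/82662f4c-eda8-4700-865c-296eae131178"
model = AutoPeftModelForCausalLM.from_pretrained(adapter_id)

# The tokenizer comes from the base model recorded in the adapter config.
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-Math-7B-Instruct")
```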
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
timm/convformer_s36.sail_in1k_384 | timm | "2025-01-21T19:12:59Z" | 63 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"transformers",
"dataset:imagenet-1k",
"arxiv:2210.13452",
"license:apache-2.0",
"region:us"
] | image-classification | "2023-05-05T06:11:18Z" | ---
tags:
- image-classification
- timm
- transformers
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
---
# Model card for convformer_s36.sail_in1k_384
A ConvFormer (a MetaFormer) image classification model. Trained on ImageNet-1k by paper authors.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 40.0
- GMACs: 22.5
- Activations (M): 89.6
- Image size: 384 x 384
- **Papers:**
- Metaformer baselines for vision: https://arxiv.org/abs/2210.13452
- **Original:** https://github.com/sail-sg/metaformer
- **Dataset:** ImageNet-1k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen

import timm
import torch  # needed for torch.topk below
from PIL import Image

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('convformer_s36.sail_in1k_384', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'convformer_s36.sail_in1k_384',
    pretrained=True,
    features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
    # print shape of each feature map in output
    # e.g.:
    # torch.Size([1, 64, 96, 96])
    # torch.Size([1, 128, 48, 48])
    # torch.Size([1, 320, 24, 24])
    # torch.Size([1, 512, 12, 12])
    print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'convformer_s36.sail_in1k_384',
    pretrained=True,
    num_classes=0,  # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 512, 12, 12) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@article{yu2022metaformer_baselines,
title={Metaformer baselines for vision},
author={Yu, Weihao and Si, Chenyang and Zhou, Pan and Luo, Mi and Zhou, Yichen and Feng, Jiashi and Yan, Shuicheng and Wang, Xinchao},
journal={arXiv preprint arXiv:2210.13452},
year={2022}
}
```
|
tensorblock/Chupacabra-7B-v2-GGUF | tensorblock | "2025-01-05T06:04:28Z" | 14 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"base_model:perlthoughts/Chupacabra-7B-v2",
"base_model:quantized:perlthoughts/Chupacabra-7B-v2",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-01-05T05:28:56Z" | ---
license: apache-2.0
base_model: perlthoughts/Chupacabra-7B-v2
tags:
- TensorBlock
- GGUF
model-index:
- name: Chupacabra-7B-v2
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 65.19
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=perlthoughts/Chupacabra-7B-v2
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 83.39
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=perlthoughts/Chupacabra-7B-v2
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 63.6
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=perlthoughts/Chupacabra-7B-v2
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 57.17
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=perlthoughts/Chupacabra-7B-v2
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 78.14
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=perlthoughts/Chupacabra-7B-v2
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 54.74
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=perlthoughts/Chupacabra-7B-v2
      name: Open LLM Leaderboard
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## perlthoughts/Chupacabra-7B-v2 - GGUF
This repo contains GGUF format model files for [perlthoughts/Chupacabra-7B-v2](https://huggingface.co/perlthoughts/Chupacabra-7B-v2).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
<div style="text-align: left; margin: 20px 0;">
<a href="https://tensorblock.co/waitlist/client" style="display: inline-block; padding: 10px 20px; background-color: #007bff; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;">
Run them on the TensorBlock client using your local machine ↗
</a>
</div>
## Prompt template
```
<|system|>
{system_prompt}</s>
<|user|>
{prompt}</s>
<|assistant|>
```
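If you prefer Python over the CLI, one option is the `llama-cpp-python` bindings (an assumption; the card itself only documents `huggingface-cli` downloads). A sketch applying the template above:
```python
from llama_cpp import Llama

# Path to a downloaded quant, e.g. the recommended Q4_K_M file (path is illustrative).
llm = Llama(model_path="MY_LOCAL_DIR/Chupacabra-7B-v2-Q4_K_M.gguf")

prompt = (
    "<|system|>\nYou are a helpful assistant.</s>\n"
    "<|user|>\nWhat is a chupacabra?</s>\n"
    "<|assistant|>\n"
)
print(llm(prompt, max_tokens=128)["choices"][0]["text"])
```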
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Chupacabra-7B-v2-Q2_K.gguf](https://huggingface.co/tensorblock/Chupacabra-7B-v2-GGUF/blob/main/Chupacabra-7B-v2-Q2_K.gguf) | Q2_K | 2.719 GB | smallest, significant quality loss - not recommended for most purposes |
| [Chupacabra-7B-v2-Q3_K_S.gguf](https://huggingface.co/tensorblock/Chupacabra-7B-v2-GGUF/blob/main/Chupacabra-7B-v2-Q3_K_S.gguf) | Q3_K_S | 3.165 GB | very small, high quality loss |
| [Chupacabra-7B-v2-Q3_K_M.gguf](https://huggingface.co/tensorblock/Chupacabra-7B-v2-GGUF/blob/main/Chupacabra-7B-v2-Q3_K_M.gguf) | Q3_K_M | 3.519 GB | very small, high quality loss |
| [Chupacabra-7B-v2-Q3_K_L.gguf](https://huggingface.co/tensorblock/Chupacabra-7B-v2-GGUF/blob/main/Chupacabra-7B-v2-Q3_K_L.gguf) | Q3_K_L | 3.822 GB | small, substantial quality loss |
| [Chupacabra-7B-v2-Q4_0.gguf](https://huggingface.co/tensorblock/Chupacabra-7B-v2-GGUF/blob/main/Chupacabra-7B-v2-Q4_0.gguf) | Q4_0 | 4.109 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Chupacabra-7B-v2-Q4_K_S.gguf](https://huggingface.co/tensorblock/Chupacabra-7B-v2-GGUF/blob/main/Chupacabra-7B-v2-Q4_K_S.gguf) | Q4_K_S | 4.140 GB | small, greater quality loss |
| [Chupacabra-7B-v2-Q4_K_M.gguf](https://huggingface.co/tensorblock/Chupacabra-7B-v2-GGUF/blob/main/Chupacabra-7B-v2-Q4_K_M.gguf) | Q4_K_M | 4.368 GB | medium, balanced quality - recommended |
| [Chupacabra-7B-v2-Q5_0.gguf](https://huggingface.co/tensorblock/Chupacabra-7B-v2-GGUF/blob/main/Chupacabra-7B-v2-Q5_0.gguf) | Q5_0 | 4.998 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Chupacabra-7B-v2-Q5_K_S.gguf](https://huggingface.co/tensorblock/Chupacabra-7B-v2-GGUF/blob/main/Chupacabra-7B-v2-Q5_K_S.gguf) | Q5_K_S | 4.998 GB | large, low quality loss - recommended |
| [Chupacabra-7B-v2-Q5_K_M.gguf](https://huggingface.co/tensorblock/Chupacabra-7B-v2-GGUF/blob/main/Chupacabra-7B-v2-Q5_K_M.gguf) | Q5_K_M | 5.131 GB | large, very low quality loss - recommended |
| [Chupacabra-7B-v2-Q6_K.gguf](https://huggingface.co/tensorblock/Chupacabra-7B-v2-GGUF/blob/main/Chupacabra-7B-v2-Q6_K.gguf) | Q6_K | 5.942 GB | very large, extremely low quality loss |
| [Chupacabra-7B-v2-Q8_0.gguf](https://huggingface.co/tensorblock/Chupacabra-7B-v2-GGUF/blob/main/Chupacabra-7B-v2-Q8_0.gguf) | Q8_0 | 7.696 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/Chupacabra-7B-v2-GGUF --include "Chupacabra-7B-v2-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/Chupacabra-7B-v2-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
yaneq/jan_twxe6S5VjvdOourW56P5_SDXL_LoRA_5_9d94_ | yaneq | "2024-02-06T23:44:19Z" | 1 | 0 | diffusers | [
"diffusers",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | "2024-02-06T23:44:16Z" |
---
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
- template:sd-lora
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of MDDL man
license: openrail++
---
# SDXL LoRA DreamBooth - yaneq/jan_twxe6S5VjvdOourW56P5_SDXL_LoRA_5_9d94_
<Gallery />
## Model description
These are yaneq/jan_twxe6S5VjvdOourW56P5_SDXL_LoRA_5_9d94_ LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use `a photo of MDDL man` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](https://huggingface.co/yaneq/jan_twxe6S5VjvdOourW56P5_SDXL_LoRA_5_9d94_/tree/main) them in the Files & versions tab.
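A `diffusers` loading sketch (standard SDXL LoRA usage is an assumption; fp16 and CUDA are illustrative choices):
```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
# Attach the LoRA weights from this repository.
pipe.load_lora_weights("yaneq/jan_twxe6S5VjvdOourW56P5_SDXL_LoRA_5_9d94_")

image = pipe("a photo of MDDL man").images[0]
```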
## Training properties
- max_train_steps: 5
- learning_rate: 0.01
- base_model_name: stabilityai/stable-diffusion-xl-base-1.0
- class_name: man
- training_images_urls:
  - https://firebasestorage.googleapis.com/v0/b/axonic-looks.appspot.com/o/models%2FSBGA9KzaKdSZWWzsvHMP%2FSBGA9KzaKdSZWWzsvHMP%2FY7nFiafx8co1nK6cnjWJ.jpg?alt=media&token=a1fe8c9a-4d5e-4043-9a82-9304fd430569
  - https://firebasestorage.googleapis.com/v0/b/axonic-looks.appspot.com/o/models%2FSBGA9KzaKdSZWWzsvHMP%2FSBGA9KzaKdSZWWzsvHMP%2F82McawlxnTeA2vBc4bZg.jpg?alt=media&token=f7cfacb2-2186-4005-9211-b7ef762dafad
  - https://firebasestorage.googleapis.com/v0/b/axonic-looks.appspot.com/o/models%2FSBGA9KzaKdSZWWzsvHMP%2FSBGA9KzaKdSZWWzsvHMP%2FDAk5k1hGzP9q9y0jpGoO.jpg?alt=media&token=01ed67d1-938a-4f60-bc1a-e1b91412b97e
  - https://firebasestorage.googleapis.com/v0/b/axonic-looks.appspot.com/o/models%2FSBGA9KzaKdSZWWzsvHMP%2FSBGA9KzaKdSZWWzsvHMP%2F6JW19SVZPczh5B2DEqKD.jpg?alt=media&token=0e0dc94f-957d-4b51-8979-0216c0849cf6
  - https://firebasestorage.googleapis.com/v0/b/axonic-looks.appspot.com/o/models%2FSBGA9KzaKdSZWWzsvHMP%2FSBGA9KzaKdSZWWzsvHMP%2FVYOVRhojKt30NzjWRXL0.jpg?alt=media&token=5a3a2afb-4b83-4488-92e5-6651f5173cc0
  - https://firebasestorage.googleapis.com/v0/b/axonic-looks.appspot.com/o/models%2FSBGA9KzaKdSZWWzsvHMP%2FSBGA9KzaKdSZWWzsvHMP%2Fcn54hvM4ahi3MzpCQN5D.jpg?alt=media&token=e096f4dc-e7c5-4e14-88fc-a5562d103127
  - https://firebasestorage.googleapis.com/v0/b/axonic-looks.appspot.com/o/models%2FSBGA9KzaKdSZWWzsvHMP%2FSBGA9KzaKdSZWWzsvHMP%2FWF2NGBPUFgu9eyaCYAwB.jpg?alt=media&token=97c1e215-0a96-4fdf-b292-9ee0e497ba72
  - https://firebasestorage.googleapis.com/v0/b/axonic-looks.appspot.com/o/models%2FSBGA9KzaKdSZWWzsvHMP%2FSBGA9KzaKdSZWWzsvHMP%2Fz8D9WdMIx4mXcsDGAZm4.jpg?alt=media&token=fded9422-eb7c-4757-8c1f-cb436a348579
- gradient_accumulation_steps: 3
- GPU: T4
- duration:
|
AI-Sweden-Models/gpt-sw3-356m | AI-Sweden-Models | "2024-01-29T13:20:22Z" | 3,118 | 1 | transformers | [
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"da",
"sv",
"no",
"en",
"is",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2022-12-14T12:31:57Z" | ---
license: other
language:
- da
- sv
- 'no'
- en
- is
---
# Model description
[AI Sweden](https://huggingface.co/AI-Sweden-Models/)
**Base models**
[GPT-Sw3 126M](https://huggingface.co/AI-Sweden-Models/gpt-sw3-126m/) | [GPT-Sw3 356M](https://huggingface.co/AI-Sweden-Models/gpt-sw3-356m/) | [GPT-Sw3 1.3B](https://huggingface.co/AI-Sweden-Models/gpt-sw3-1.3b/)
[GPT-Sw3 6.7B](https://huggingface.co/AI-Sweden-Models/gpt-sw3-6.7b/) | [GPT-Sw3 6.7B v2](https://huggingface.co/AI-Sweden-Models/gpt-sw3-6.7b-v2/) | [GPT-Sw3 20B](https://huggingface.co/AI-Sweden-Models/gpt-sw3-20b/)
[GPT-Sw3 40B](https://huggingface.co/AI-Sweden-Models/gpt-sw3-40b/)
**Instruct models**
[GPT-Sw3 126M Instruct](https://huggingface.co/AI-Sweden-Models/gpt-sw3-126m-instruct/) | [GPT-Sw3 356M Instruct](https://huggingface.co/AI-Sweden-Models/gpt-sw3-356m-instruct/) | [GPT-Sw3 1.3B Instruct](https://huggingface.co/AI-Sweden-Models/gpt-sw3-1.3b-instruct/)
[GPT-Sw3 6.7B v2 Instruct](https://huggingface.co/AI-Sweden-Models/gpt-sw3-6.7b-v2-instruct/) | [GPT-Sw3 20B Instruct](https://huggingface.co/AI-Sweden-Models/gpt-sw3-20b-instruct/)
**Quantized models**
[GPT-Sw3 6.7B v2 Instruct 4-bit gptq](https://huggingface.co/AI-Sweden-Models/gpt-sw3-6.7b-v2-instruct-4bit-gptq) | [GPT-Sw3 20B Instruct 4-bit gptq](https://huggingface.co/AI-Sweden-Models/gpt-sw3-20b-instruct-4bit-gptq)
GPT-SW3 is a collection of large decoder-only pretrained transformer language models that were developed by AI Sweden in collaboration with RISE and the WASP WARA for Media and Language. GPT-SW3 has been trained on a dataset containing 320B tokens in Swedish, Norwegian, Danish, Icelandic, English, and programming code. The model was pretrained using a causal language modeling (CLM) objective utilizing the NeMo Megatron GPT implementation.
# Intended use
GPT-SW3 is an autoregressive large language model that is capable of generating coherent text in 5 different languages, and 4 programming languages. GPT-SW3 can also be instructed to perform text tasks that it has not been explicitly trained for, by casting them as text generation tasks. AI Sweden shares GPT-SW3 in a controlled pre-release with organizations and individuals in the Nordic NLP ecosystem who can contribute to the validation and testing of the models and provide feedback to the community. This is an important step in the process of validating the model and collecting feedback on both what works well and what does not.
# Limitations
Like other large language models for which the diversity (or lack thereof) of training data induces downstream impact on the quality of our model, GPT-SW3 has limitations in terms of for example bias and safety. GPT-SW3 can also have quality issues in terms of generation diversity and hallucination. By releasing with the modified RAIL license, we also hope to increase communication, transparency, and the study of large language models. The model may: overrepresent some viewpoints and underrepresent others, contain stereotypes, generate hateful, abusive, violent, discriminatory or prejudicial language. The model may make errors, including producing incorrect information as if it were factual, it may generate irrelevant or repetitive outputs, and content that may not be appropriate for all settings, including sexual content.
# How to use
To be able to access the model from Python, since this is a private repository, you have to log in with your access token. This can be done with `huggingface-cli login`, see [HuggingFace Quick Start Guide](https://huggingface.co/docs/huggingface_hub/quick-start#login) for more information.
The following code snippet loads our tokenizer & model, and uses the GPU if available.
```python
import torch
from transformers import pipeline, AutoTokenizer, AutoModelForCausalLM
# Initialize Variables
model_name = "AI-Sweden-Models/gpt-sw3-356m"
device = "cuda:0" if torch.cuda.is_available() else "cpu"
prompt = "Träd är fina för att"
# Initialize Tokenizer & Model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()
model.to(device)
```
Generating text using the `generate` method is done as follows:
```python
input_ids = tokenizer(prompt, return_tensors="pt")["input_ids"].to(device)
generated_token_ids = model.generate(
    inputs=input_ids,
    max_new_tokens=100,
    do_sample=True,
    temperature=0.6,
    top_p=1,
)[0]
generated_text = tokenizer.decode(generated_token_ids)
```
A convenient alternative to the `generate` method is the HuggingFace pipeline, which handles most of the work for you:
```python
generator = pipeline('text-generation', tokenizer=tokenizer, model=model, device=device)
generated = generator(prompt, max_new_tokens=100, do_sample=True, temperature=0.6, top_p=1)[0]["generated_text"]
```
# Compliance
The release of GPT-SW3 consists of model weights, a configuration file, a tokenizer file and a vocabulary file. None of these files contain any personally identifiable information (PII) or any copyrighted material.
# GPT-SW3 Model Card
Following Mitchell et al. (2018), we provide a model card for GPT-SW3.
# Model Details
- Person or organization developing model: GPT-SW3 was developed by AI Sweden in collaboration with RISE and the WASP WARA for Media and Language.
- Model date: GPT-SW3 date of release 2022-12-20
- Model version: This is the second generation of GPT-SW3.
- Model type: GPT-SW3 is a large decoder-only transformer language model.
- Information about training algorithms, parameters, fairness constraints or other applied approaches, and features: GPT-SW3 was trained with the NeMo Megatron GPT implementation.
- Paper or other resource for more information: N/A.
- License: [LICENSE](https://huggingface.co/AI-Sweden-Models/gpt-sw3-356m/blob/main/LICENSE).
- Where to send questions or comments about the model: [email protected]
# Intended Use
- Primary intended uses: We pre-release GPT-SW3 for research and evaluation of the capabilities of Large Language Models for the Nordic languages. This is an important step in the process of knowledge building for LLMs, validating the model and collecting feedback on both what works well and what does not.
- Primary intended users: Organizations and individuals in the Nordic NLP ecosystem who can contribute to the validation and testing of the models and provide feedback to the community.
- Out-of-scope use cases: See the modified RAIL license.
# Data, Limitations, and Recommendations
- Data selection for training: Training data for GPT-SW3 was selected based on a combination of breadth and availability. See our Datasheet for more detailed information on the data used to train our model.
- Data selection for evaluation: N/A
- Limitations: Like other large language models for which the diversity (or lack thereof) of training data induces downstream impact on the quality of our model, GPT-SW3 has limitations in terms of bias and safety. GPT-SW3 can also have quality issues in terms of generation diversity and hallucination. In general, GPT-SW3 is not immune from the plethora of issues that plague modern large language models. By releasing with the modified RAIL license, we also hope to increase communication, transparency, and the study of large language models. The model may: Overrepresent some viewpoints and underrepresent others. Contain stereotypes. Generate: Hateful, abusive, or violent language. Discriminatory or prejudicial language. Content that may not be appropriate for all settings, including sexual content. Make errors, including producing incorrect information as if it were factual. Generate irrelevant or repetitive outputs.
- Recommendations for future work: Indirect users should be made aware when the content they're working with is created by the LLM. Users should be aware of Risks and Limitations, and include an appropriate age disclaimer or blocking interface as necessary. Models pretrained with the LLM should include an updated Model Card. Users of the model should provide mechanisms for those affected to provide feedback, such as an email address for comments.
- We hope that the release of GPT-SW3, as well as information around our model training process, will increase open science around both large language models in specific and natural language processing and deep learning in general.
# GPT-SW3 Datasheet
- We follow the recommendations of Gebru et al. (2021) and provide a datasheet for the dataset used to train GPT-SW3.
# Motivation
- For what purpose was the dataset created? Was there a specific task in mind? Was there a specific gap that needed to be filled? Please provide a description. Pre-training of Large Language Models (LLM), such as GPT-3 (T. B. Brown et al., 2020), Gopher (J. W. Rae et al., 2022), BLOOM (T. L. Scao et al., 2022), etc. require 100s or even 1000s GBs of text data, with recent studies (Chinchilla: J. Hoffmann et al., 2022) suggesting that the scale of the training data is even more important than previously imagined. Therefore, in order to train Swedish LLMs, we needed a large scale Swedish dataset of high quality. Since no such datasets existed before this initiative, we collected data in the Nordic and English languages.
- Who created the dataset (e.g., which team, research group) and on behalf of which entity (e.g., company, institution, organization)? The Strategic Initiative Natural Language Understanding at AI Sweden has established a new research environment in which collaboration is key. The core team working on the creation of the dataset is the NLU research group at AI Sweden. This group consists of researchers and developers from AI Sweden (Lindholmen Science Park AB) and RISE.
- Who funded the creation of the dataset? If there is an associated grant, please provide the name of the grantor and the grant name and number. The Swedish Innovation Agency (Vinnova) has funded this work across several different grants, including 2019-02996 and 2022-00949.
- Any other comments? No.
# Composition
- What do the instances that comprise the dataset represent (e.g., documents, photos, people, countries)? Are there multiple types of instances (e.g., movies, users, and ratings; people and interactions between them; nodes and edges)? Please provide a description. The instances are textual documents categorized by language and document type. The dataset is a filtered and deduplicated collection that includes the following sources:
- Books
- Litteraturbanken (https://litteraturbanken.se/)
- The Pile
- Articles
- Diva (https://www.diva-portal.org/)
- The Pile: PubMed
- The Pile: ArXiv
- Code
- Code Parrot: Github code (https://huggingface.co/datasets/codeparrot/github-code)
- Conversational
- Familjeliv (https://www.familjeliv.se/)
- Flashback (https://flashback.se/)
- Datasets collected through Parlai (see Appendix in data paper for complete list) (https://github.com/facebookresearch/ParlAI)
- Pushshift.io Reddit dataset, developed in Baumgartner et al. (2020) and processed in Roller et al. (2021)
- Math
- English Math dataset generated with code from DeepMind (D. Saxton et al., 2019)
- Swedish Math dataset, generated as above with manually translated templates
- Miscellaneous
- Summarization data (https://www.ida.liu.se/~arnjo82/papers/clarin-21-julius.pdf)
- OPUS, the open parallel corpus (https://opus.nlpl.eu/)
- Movie scripts (https://github.com/Aveek-Saha/Movie-Script-Database)
- Natural Instructions (https://github.com/allenai/natural-instructions)
- P3 (Public Pool of Prompts), (https://huggingface.co/datasets/bigscience/P3)
- The Norwegian Colossal Corpus (https://huggingface.co/datasets/NbAiLab/NCC)
- Danish Gigaword (https://gigaword.dk/)
- Icelandic Gigaword (https://clarin.is/en/resources/gigaword/)
- The Pile: Stack Exchange
- Web Common Crawl
- Web data from the project LES (Linguistic Explorations of Societies, https://les.gu.se).
- Multilingual C4 (MC4), prepared by AllenAI from C4 (C. Raffel et al., 2019)
- Open Super-large Crawled Aggregated coRpus (OSCAR) (P. O. Suarez, 2019)
- The Pile: Open Web Text
- Web Sources
- Various public Swedish website scrapes (see Appendix in data paper)
- Familjeliv Articles
- Public Swedish Job Ads from JobTech/Arbetsförmedlingen
- Wikipedia
- Official Wikipedia dumps
- How many instances are there in total (of each type, if appropriate)? The training data consists of 1.1TB UTF-8 encoded text, containing 660M documents with a total of 320B tokens.
- Does the dataset contain all possible instances or is it a sample (not necessarily random) of instances from a larger set? If the dataset is a sample, then what is the larger set? Is the sample representative of the larger set (e.g., geographic coverage)? If so, please describe how this representativeness was validated/verified. If it is not representative of the larger set, please describe why not (e.g., to cover a more diverse range of instances, because instances were withheld or unavailable). The subset of our dataset that comes from multilingual Common Crawl datasets (MC4, OSCAR) is filtered by language to only include Swedish, Norwegian, Danish, and Icelandic. From The Pile, we included only the parts that typically are of the highest textual quality or that complemented the rest of our dataset with sources we otherwise lacked (e.g. books). The remainder of the dataset was collected from the above sources.
- What data does each instance consist of? “Raw” data (e.g., unprocessed text or images) or features? In either case, please provide a description. Each instance consists of raw text data.
- Is there a label or target associated with each instance? If so, please provide a description. No.
- Is any information missing from individual instances? If so, please provide a description, explaining why this information is missing (e.g., because it was unavailable). This does not include intentionally removed information, but might include, e.g., redacted text. No.
- Are relationships between individual instances made explicit (e.g., users’ movie ratings, social network links)? If so, please describe how these relationships are made explicit. There are no explicit relationships between individual instances.
- Are there recommended data splits (e.g., training, development/validation, testing)? If so, please provide a description of these splits, explaining the rationale behind them. There are no explicit splits recommended for this dataset. When pre-training the model, a random train/dev/test split of 99.99%, 0.08%, and 0.02%, respectively, is used, sampled proportionally to each subset’s weight and size (see the sketch after this list). The weight of each subset was manually decided beforehand. These decisions were made considering the data’s value, source, and language, to form a representative and balanced pre-training corpus.
- Are there any errors, sources of noise, or redundancies in the dataset? If so, please provide a description. The dataset is a collection of many sources, some of which naturally contain some overlap. Although we have performed deduplication, some overlap may still remain. Furthermore, there may be some noise remaining from artifacts originating in Common Crawl datasets that were missed by our data filtering process. Beyond these, we are not aware of any errors, sources of noise, or redundancies.
- Is the dataset self-contained, or does it link to or otherwise rely on external resources (e.g., websites, tweets, other datasets)? The dataset is self-contained.
- Does the dataset contain data that, if viewed directly, might be offensive, insulting, threatening, or might otherwise cause anxiety? If so, please describe why. The dataset contains subsets of public Common Crawl, Reddit, Familjeliv and Flashback. These could contain sentences that, if viewed directly, might be offensive, insulting, threatening, or might otherwise cause anxiety.
- Does the dataset relate to people? If not, you may skip the remaining questions in this section. Some documents of this data relate to people, such as news articles, Wikipedia descriptions, etc.
- Does the dataset identify any subpopulations (e.g., by age, gender)? If so, please describe how these subpopulations are identified and provide a description of their respective distributions within the dataset. No, the dataset does not explicitly include subpopulation identification.
- Any other comments? No.
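To make the proportional-sampling scheme described under "recommended data splits" concrete, here is a minimal sketch. The subset names, weights, and sizes are hypothetical placeholders; the actual GPT-SW3 weights were decided manually and are not reproduced here.

```python
import random

# Hypothetical subsets: (name, manually chosen weight, size in documents).
# The real GPT-SW3 weights and sizes are NOT reproduced here.
subsets = [
    ("books", 2.0, 1_000_000),
    ("web_common_crawl", 1.0, 500_000_000),
    ("conversational", 1.5, 50_000_000),
]

# Sampling probability proportional to weight * size, as described above.
masses = [weight * size for _, weight, size in subsets]
total = sum(masses)
probs = [m / total for m in masses]

def sample_subset(rng: random.Random) -> str:
    """Pick the subset from which the next training document is drawn."""
    names = [name for name, _, _ in subsets]
    return rng.choices(names, weights=probs, k=1)[0]

rng = random.Random(42)
print([sample_subset(rng) for _ in range(5)])
```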
# Collection Process
- How was the data associated with each instance acquired? Was the data directly observable (e.g., raw text, movie ratings), reported by subjects (e.g., survey responses), or indirectly inferred/derived from other data (e.g., part-of-speech tags, model-based guesses for age or language)? If data was reported by subjects or indirectly inferred/derived from other data, was the data validated/verified? If so, please describe how. N/A. The dataset is a union of publicly available datasets and sources.
- What mechanisms or procedures were used to collect the data (e.g., hardware apparatus or sensor, manual human curation, software program, software API)? How were these mechanisms or procedures validated? The data was downloaded from the internet.
- If the dataset is a sample from a larger set, what was the sampling strategy (e.g., deterministic, probabilistic with specific sampling probabilities)? Please see previous answers for how parts of the dataset were selected.
- Who was involved in the data collection process (e.g., students, crowdworkers, contractors) and how were they compensated (e.g., how much were crowdworkers paid)? This data is mined, filtered and sampled by machines.
- Over what timeframe was the data collected? Does this timeframe match the creation timeframe of the data associated with the instances (e.g., recent crawl of old news articles)? If not, please describe the timeframe in which the data associated with the instances was created. The dataset was collected during the period June 2021 to June 2022. The creation of the collected sources varies, with e.g. Common Crawl data that have been continuously collected over 12 years.
- Does the dataset relate to people? If not, you may skip the remainder of the questions in this section. Yes. The texts have been produced by people. Any personal information potentially present in publicly available data sources and thus in the created dataset is of no interest to the collection and use of the dataset.
- Has an analysis of the potential impact of the dataset and its use on data subjects (e.g., a data protection impact analysis) been conducted? If so, please provide a description of this analysis, including the outcomes, as well as a link or other access point to any supporting documentation. Yes.
- Any other comments? No.
# Preprocessing/cleaning/labeling
- Was any preprocessing/cleaning/labeling of the data done (e.g., discretization or bucketing, tokenization, part-of-speech tagging, SIFT feature extraction, removal of instances, processing of missing values)? If so, please provide a description. If not, you may skip the remainder of the questions in this section. The dataset was filtered and re-formatted at the document level using standard procedures, inspired by the work on The BigScience ROOTS Corpus (H. Laurençon et al., 2022) and Gopher (J. W. Rae et al., 2022). This was done with the goal of achieving a consistent text format throughout the dataset, and of removing documents that did not meet our textual quality requirements (e.g. repetitiveness). Furthermore, the dataset was deduplicated to remedy the overlap between collected subsets using the MinHash algorithm, similar to the method used in GPT-3 and The Pile, and described in greater detail in “Deduplicating Training Data Makes Language Models Better” (K. Lee et al., 2021); a sketch of this technique follows this list.
- Was the “raw” data saved in addition to the preprocessed/cleaned/labeled data (e.g., to support unanticipated future uses)? If so, please provide a link or other access point to the “raw” data. The “raw” component datasets are publicly available in their respective locations.
- Any other comments? No.
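As a rough illustration of MinHash-based near-duplicate removal in the spirit of K. Lee et al. (2021), here is a minimal sketch using the third-party `datasketch` package. The whitespace-token shingling and the Jaccard threshold are illustrative assumptions, not the parameters used for GPT-SW3.

```python
from datasketch import MinHash, MinHashLSH  # pip install datasketch

def signature(text: str, num_perm: int = 128) -> MinHash:
    """MinHash signature over whitespace tokens (illustrative shingling)."""
    m = MinHash(num_perm=num_perm)
    for token in text.split():
        m.update(token.encode("utf8"))
    return m

docs = {
    "doc1": "the quick brown fox jumps over the lazy dog",
    "doc2": "the quick brown fox jumps over the lazy dog today",
    "doc3": "an entirely different document about language models",
}

# LSH index with an illustrative Jaccard threshold; a document whose
# signature matches an already-kept document is treated as a duplicate.
lsh = MinHashLSH(threshold=0.8, num_perm=128)
kept = []
for key, text in docs.items():
    sig = signature(text)
    if lsh.query(sig):  # near-duplicate of something already kept
        continue
    lsh.insert(key, sig)
    kept.append(key)

print(kept)  # doc2 is typically dropped as a near-duplicate of doc1
```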
# Uses
- Has the dataset been used for any tasks already? If so, please provide a description. The dataset was used to pre-train the GPT-SW3 models.
- Is there a repository that links to any or all papers or systems that use the dataset? If so, please provide a link or other access point. N/A.
- What (other) tasks could the dataset be used for? The data can be used to pre-train language models, which are foundations for many current and future language tasks.
- Is there anything about the composition of the dataset or the way it was collected and preprocessed/cleaned/labeled that might impact future uses? For example, is there anything that a future user might need to know to avoid uses that could result in unfair treatment of individuals or groups (e.g., stereotyping, quality of service issues) or other undesirable harms (e.g., financial harms, legal risks)? If so, please provide a description. Is there anything a future user could do to mitigate these undesirable harms? The dataset is probably quite representative of Swedish internet discourse in general, and of the Swedish public sector, but we know that this data does not necessarily reflect the entire Swedish population.
- Are there tasks for which the dataset should not be used? If so, please provide a description. None that we are currently aware of.
- Any other comments? No.
# Distribution
- Will the dataset be distributed to third parties outside of the entity (e.g., company, institution, organization) on behalf of which the dataset was created? If so, please provide a description. No.
- How will the dataset be distributed (e.g., tarball on website, API, GitHub)? Does the dataset have a digital object identifier (DOI)? N/A.
- When will the dataset be distributed? N/A.
- Will the dataset be distributed under a copyright or other intellectual property (IP) license, and/or under applicable terms of use (ToU)? If so, please describe this license and/or ToU, and provide a link or other access point to, or otherwise reproduce, any relevant licensing terms or ToU, as well as any fees associated with these restrictions. N/A.
- Do any export controls or other regulatory restrictions apply to the dataset or to individual instances? If so, please describe these restrictions, and provide a link or other access point to, or otherwise reproduce, any supporting documentation. N/A.
- Any other comments? No.
# Maintenance
- Who is supporting/hosting/maintaining the dataset? AI Sweden at Lindholmen Science Park AB.
- How can the owner/curator/manager of the dataset be contacted (e.g., email address)? [email protected]
- Is there an erratum? If so, please provide a link or other access point. N/A.
- Will the dataset be updated (e.g., to correct labeling errors, add new instances, delete instances)? If so, please describe how often, by whom, and how updates will be communicated to users (e.g., mailing list, GitHub)? Currently, there are no plans for updating the dataset.
- If the dataset relates to people, are there applicable limits on the retention of the data associated with the instances (e.g., were individuals in question told that their data would be retained for a fixed period of time and then deleted)? If so, please describe these limits and explain how they will be enforced. Read the privacy policy for the NLU initiative at AI Sweden [here](https://www.ai.se/en/privacy-policy-nlu).
- Will older versions of the dataset continue to be supported/hosted/maintained? If so, please describe how. If not, please describe how its obsolescence will be communicated to users. N/A.
- If others want to extend/augment/build on/contribute to the dataset, is there a mechanism for them to do so? If so, please provide a description. Will these contributions be validated/verified? If so, please describe how. If not, why not? Is there a process for communicating/distributing these contributions to other users? If so, please provide a description. Not at this time.
- Any other comments? No. |
blossominkyung/dqn-SpaceInvadersNoFrameskip-v4 | blossominkyung | "2023-11-09T14:41:06Z" | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2023-11-09T14:40:24Z" | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 648.00 +/- 342.25
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga blossominkyung -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga blossominkyung -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga blossominkyung
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
ijik-loker/RVC_Darwin_Watterson_Japanese | ijik-loker | "2023-10-16T14:10:39Z" | 0 | 0 | null | [
"rvc",
"voice cloning",
"The Amazing World of Gumball",
"おかしなガムボール",
"Darwin Watterson",
"ダーウィン",
"Yumiko Kobayashi",
"小林由美子",
"en",
"ja",
"region:us"
] | null | "2023-10-16T11:39:08Z" | ---
language:
- en
- ja
tags:
- rvc
- voice cloning
- The Amazing World of Gumball
- おかしなガムボール
- Darwin Watterson
- ダーウィン
- Yumiko Kobayashi
- 小林由美子
---
## Model Details
Voice of Yumiko Kobayashi 小林由美子 as Darwin Watterson ダーウィン in the Japanese dub of the cartoon The Amazing World of Gumball おかしなガムボール.
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [ijik-loker](https://huggingface.co/ijik-loker)
- **Model type:** [Retrieval-based Voice Conversion (RVC)](https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI)
- **Language(s):** Japanese
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
Used in the popular Retrieval-based Voice Conversion WebUI for inference, or in real time via [Voice Changer](https://github.com/w-okada/voice-changer).
The index file should be used alongside the model.
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
#### Voice clips total time
v1 model: 02min 54s
v2 model: 13min 22s
Trained using [episode clips](https://www.youtube.com/playlist?list=PLWSC6OHatntSWjsbQWCSQn-MuwIscHL_8) uploaded by CartoonNetworkJP カートゥーン ネットワーク:
1. [The Watch](https://youtu.be/qn8M_KHPYno?si=6y02GSGaaY6zYTIa)
2. [The Void](https://youtu.be/Cxtg7LqmdkI?si=mGws0GXpV3K8yt6C)
3. [The Vegging](https://youtu.be/egUkeiy5Ujw?si=KoVwqrbduXTUdMfH)
4. [The Test](https://youtu.be/e3t0yldGmTw?si=DduDKGu1D1H39YCV)
5. [The Tape](https://youtu.be/3g6if7EhZNY?si=68lTnBIdYNH4n6fN)
6. [The Sucker](https://youtu.be/ePZSFibuZgk?si=hZc-2rtia7xL2pqt)
7. [The Stories](https://youtu.be/c4yw042zJXA?si=yXs5vjyhHvgRkAjv)
8. [The Slide](https://youtu.be/KF-gZK8859Q?si=ls8KaPAhlYB4tGGo)
9. [The Sidekick](https://youtu.be/3vfZauRDqG4?si=yDHBpTF-7pm0x3gt)
10. [The Safety](https://youtu.be/hZT9I0TVpJk?si=eF2Xs8PT0xTSw1Oe)
11. [The Puppets](https://youtu.be/TH_JMIkCWTc?si=QaW3rmEJgWC_Msdq)
12. [The Procrastinators](https://youtu.be/FGH3-NR22YI?si=t7Ux_7ccgmqbixE7)
13. [The Pest](https://youtu.be/V1De2RI2q_E?si=i-GbCoy_eUxtdJbL)
14. [The Nobody](https://youtu.be/qC7Z1QigFLA?si=sO_WwgOGcf-krHI0)
15. [The Misunderstandings](https://youtu.be/7GOmjuB0aLk?si=5cb0XSKL3V3GwFEQ)
16. [The Matchmaker](https://youtu.be/_x-Czj3G8rc?si=rwAJC58492pDUR9P)
17. [The Burden](https://youtu.be/GN5c9FUbZMk?si=zZYkWAR8Z4GT0Ev_)
18. [The Best](https://youtu.be/LN2AyPry0hI?si=tdgIUw22f2o2kTv9)
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
1. Remove noise using [Ultimate Vocal Remover 5](https://github.com/Anjok07/ultimatevocalremovergui) UVR-DeNoise.
2. Extract vocals using RVC Web UI [HP5-主旋律人声vocals+其他instrumentals.pth](https://huggingface.co/lj1995/VoiceConversionWebUI/blob/main/uvr5_weights/HP5-%E4%B8%BB%E6%97%8B%E5%BE%8B%E4%BA%BA%E5%A3%B0vocals%2B%E5%85%B6%E4%BB%96instrumentals.pth).
3. Remove echo and reverb using Ultimate Vocal Remover 5 UVR-DeEcho-DeReverb.
4. Manually diarise voices in [Audacity](https://www.audacityteam.org/) using labels.
5. Use Audacity's Export Multiple to export the labeled clips to .wav.
6. Train using RVC
* Target Sample Rate: 48k
* Version: v2
* Total training epochs: 200
* Base model G: f0G48k.pth
* Base model D: f0D48k.pth |
ys7yoo/sts_nli_roberta-large_lr1e-05_wd1e-03_ep3_lr1e-05_wd1e-03_ep9_ckpt | ys7yoo | "2023-09-18T06:40:01Z" | 107 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:klue",
"base_model:ys7yoo/nli_roberta-large_lr1e-05_wd1e-03_ep3",
"base_model:finetune:ys7yoo/nli_roberta-large_lr1e-05_wd1e-03_ep3",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-09-18T06:06:14Z" | ---
base_model: ys7yoo/nli_roberta-large_lr1e-05_wd1e-03_ep3
tags:
- generated_from_trainer
datasets:
- klue
model-index:
- name: sts_nli_roberta-large_lr1e-05_wd1e-03_ep3_lr1e-05_wd1e-03_ep9_ckpt
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sts_nli_roberta-large_lr1e-05_wd1e-03_ep3_lr1e-05_wd1e-03_ep9_ckpt
This model is a fine-tuned version of [ys7yoo/nli_roberta-large_lr1e-05_wd1e-03_ep3](https://huggingface.co/ys7yoo/nli_roberta-large_lr1e-05_wd1e-03_ep3) on the klue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3250
- Mse: 0.3250
- Mae: 0.4166
- R2: 0.8512
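Given the regression-style metrics above (MSE/MAE/R2), the checkpoint is presumably a single-output regression head scoring a Korean sentence pair on the KLUE-STS 0–5 similarity scale. A minimal inference sketch under that assumption (verify the head and scale against the actual config before relying on it):

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "ys7yoo/sts_nli_roberta-large_lr1e-05_wd1e-03_ep3_lr1e-05_wd1e-03_ep9_ckpt"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

# Score a Korean sentence pair; KLUE STS uses a 0-5 similarity scale.
# ("The weather is nice today." / "The weather is clear today.")
inputs = tokenizer("오늘 날씨가 좋다.", "오늘은 날씨가 맑다.", return_tensors="pt")
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()
print(f"predicted similarity: {score:.2f}")
```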
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 9
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mse | Mae | R2 |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|
| 1.2084 | 1.0 | 183 | 0.5071 | 0.5071 | 0.5306 | 0.7678 |
| 0.1515 | 2.0 | 366 | 0.3142 | 0.3142 | 0.4149 | 0.8561 |
| 0.103 | 3.0 | 549 | 0.3284 | 0.3284 | 0.4150 | 0.8496 |
| 0.0779 | 4.0 | 732 | 0.3306 | 0.3306 | 0.4184 | 0.8486 |
| 0.0597 | 5.0 | 915 | 0.3219 | 0.3219 | 0.4098 | 0.8526 |
| 0.0497 | 6.0 | 1098 | 0.3324 | 0.3324 | 0.4175 | 0.8478 |
| 0.0407 | 7.0 | 1281 | 0.3114 | 0.3114 | 0.4119 | 0.8574 |
| 0.0356 | 8.0 | 1464 | 0.3305 | 0.3305 | 0.4199 | 0.8486 |
| 0.0327 | 9.0 | 1647 | 0.3250 | 0.3250 | 0.4166 | 0.8512 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.0
- Tokenizers 0.13.3
|
mradermacher/DoublePotato-Mistral-Nemo-13B-i1-GGUF | mradermacher | "2025-02-23T05:25:17Z" | 96 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:nbeerbower/DoublePotato-Mistral-Nemo-13B",
"base_model:quantized:nbeerbower/DoublePotato-Mistral-Nemo-13B",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | "2025-02-14T06:36:39Z" | ---
base_model: nbeerbower/DoublePotato-Mistral-Nemo-13B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/nbeerbower/DoublePotato-Mistral-Nemo-13B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/DoublePotato-Mistral-Nemo-13B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
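If you prefer Python, here is a minimal sketch using the third-party `llama-cpp-python` bindings, assuming a recent version that provides `Llama.from_pretrained`; the filename is one of the quants from the table below.

```python
from llama_cpp import Llama  # pip install llama-cpp-python huggingface_hub

# Downloads the chosen quant from this repo and loads it; see the table
# below for the available files and their size/quality trade-offs.
llm = Llama.from_pretrained(
    repo_id="mradermacher/DoublePotato-Mistral-Nemo-13B-i1-GGUF",
    filename="DoublePotato-Mistral-Nemo-13B.i1-Q4_K_M.gguf",
    n_ctx=4096,
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Say hello in one sentence."}]
)
print(out["choices"][0]["message"]["content"])
```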
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/DoublePotato-Mistral-Nemo-13B-i1-GGUF/resolve/main/DoublePotato-Mistral-Nemo-13B.i1-IQ1_S.gguf) | i1-IQ1_S | 3.3 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/DoublePotato-Mistral-Nemo-13B-i1-GGUF/resolve/main/DoublePotato-Mistral-Nemo-13B.i1-IQ1_M.gguf) | i1-IQ1_M | 3.6 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/DoublePotato-Mistral-Nemo-13B-i1-GGUF/resolve/main/DoublePotato-Mistral-Nemo-13B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/DoublePotato-Mistral-Nemo-13B-i1-GGUF/resolve/main/DoublePotato-Mistral-Nemo-13B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.3 | |
| [GGUF](https://huggingface.co/mradermacher/DoublePotato-Mistral-Nemo-13B-i1-GGUF/resolve/main/DoublePotato-Mistral-Nemo-13B.i1-IQ2_S.gguf) | i1-IQ2_S | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/DoublePotato-Mistral-Nemo-13B-i1-GGUF/resolve/main/DoublePotato-Mistral-Nemo-13B.i1-IQ2_M.gguf) | i1-IQ2_M | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/DoublePotato-Mistral-Nemo-13B-i1-GGUF/resolve/main/DoublePotato-Mistral-Nemo-13B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 5.0 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/DoublePotato-Mistral-Nemo-13B-i1-GGUF/resolve/main/DoublePotato-Mistral-Nemo-13B.i1-Q2_K.gguf) | i1-Q2_K | 5.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/DoublePotato-Mistral-Nemo-13B-i1-GGUF/resolve/main/DoublePotato-Mistral-Nemo-13B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 5.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/DoublePotato-Mistral-Nemo-13B-i1-GGUF/resolve/main/DoublePotato-Mistral-Nemo-13B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/DoublePotato-Mistral-Nemo-13B-i1-GGUF/resolve/main/DoublePotato-Mistral-Nemo-13B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 6.1 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/DoublePotato-Mistral-Nemo-13B-i1-GGUF/resolve/main/DoublePotato-Mistral-Nemo-13B.i1-IQ3_S.gguf) | i1-IQ3_S | 6.1 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/DoublePotato-Mistral-Nemo-13B-i1-GGUF/resolve/main/DoublePotato-Mistral-Nemo-13B.i1-IQ3_M.gguf) | i1-IQ3_M | 6.3 | |
| [GGUF](https://huggingface.co/mradermacher/DoublePotato-Mistral-Nemo-13B-i1-GGUF/resolve/main/DoublePotato-Mistral-Nemo-13B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 6.7 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/DoublePotato-Mistral-Nemo-13B-i1-GGUF/resolve/main/DoublePotato-Mistral-Nemo-13B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 7.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/DoublePotato-Mistral-Nemo-13B-i1-GGUF/resolve/main/DoublePotato-Mistral-Nemo-13B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 7.4 | |
| [GGUF](https://huggingface.co/mradermacher/DoublePotato-Mistral-Nemo-13B-i1-GGUF/resolve/main/DoublePotato-Mistral-Nemo-13B.i1-Q4_0.gguf) | i1-Q4_0 | 7.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/DoublePotato-Mistral-Nemo-13B-i1-GGUF/resolve/main/DoublePotato-Mistral-Nemo-13B.i1-IQ4_NL.gguf) | i1-IQ4_NL | 7.8 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/DoublePotato-Mistral-Nemo-13B-i1-GGUF/resolve/main/DoublePotato-Mistral-Nemo-13B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 7.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/DoublePotato-Mistral-Nemo-13B-i1-GGUF/resolve/main/DoublePotato-Mistral-Nemo-13B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 8.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/DoublePotato-Mistral-Nemo-13B-i1-GGUF/resolve/main/DoublePotato-Mistral-Nemo-13B.i1-Q4_1.gguf) | i1-Q4_1 | 8.6 | |
| [GGUF](https://huggingface.co/mradermacher/DoublePotato-Mistral-Nemo-13B-i1-GGUF/resolve/main/DoublePotato-Mistral-Nemo-13B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 9.4 | |
| [GGUF](https://huggingface.co/mradermacher/DoublePotato-Mistral-Nemo-13B-i1-GGUF/resolve/main/DoublePotato-Mistral-Nemo-13B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 9.6 | |
| [GGUF](https://huggingface.co/mradermacher/DoublePotato-Mistral-Nemo-13B-i1-GGUF/resolve/main/DoublePotato-Mistral-Nemo-13B.i1-Q6_K.gguf) | i1-Q6_K | 11.1 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
MrRobotoAI/121-Q4_K_M-GGUF | MrRobotoAI | "2025-03-23T20:32:37Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:MrRobotoAI/121",
"base_model:quantized:MrRobotoAI/121",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-03-23T20:32:12Z" | ---
base_model: MrRobotoAI/121
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---
# MrRobotoAI/121-Q4_K_M-GGUF
This model was converted to GGUF format from [`MrRobotoAI/121`](https://huggingface.co/MrRobotoAI/121) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/MrRobotoAI/121) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo MrRobotoAI/121-Q4_K_M-GGUF --hf-file 121-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo MrRobotoAI/121-Q4_K_M-GGUF --hf-file 121-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo MrRobotoAI/121-Q4_K_M-GGUF --hf-file 121-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo MrRobotoAI/121-Q4_K_M-GGUF --hf-file 121-q4_k_m.gguf -c 2048
```
|
Yedson54/code-search-net-tokenizer | Yedson54 | "2024-06-19T09:43:06Z" | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-06-19T09:43:05Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
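The card leaves usage blank, but given the repository name, a plausible starting point is loading it as a `transformers` tokenizer (a hedged sketch, not an officially documented snippet):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Yedson54/code-search-net-tokenizer")
# Tokenize a small code snippet, the domain suggested by the repo name.
tokens = tokenizer.tokenize("def add(a, b):\n    return a + b")
print(tokens)
```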
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Heoni/Llama-3-KoEn-8B-Aguie_ep4_proto | Heoni | "2024-05-17T05:42:27Z" | 7 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"conversational",
"ko",
"en",
"license:cc-by-nc-nd-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-05-17T00:27:00Z" | ---
license: cc-by-nc-nd-4.0
language:
- ko
- en
---
Asura balbalta, asura balbalta!
The hand is quicker than the eye! Which card do you want to hold?
Asura balbalta, asura balbalta!
Do you want to make money?
Asura balbalta, asura balbalta!
Do you want to become rich?
Do you want to become rich?
When it comes to hwatu, there are exactly three names in all of Korea: Jjakgwi in Gyeongsang-do, Aguie in Jeolla-do, and, nationwide, me! Once, Jjakgwi and Aguie went head to head, and Aguie cut off Jjakgwi's ear. That's why he's called Jjakgwi.
# Aguie-chat_v0.1
<!-- Provide a quick summary of what the model is/does. -->
<!--This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).-->
### Model Description
<!-- Provide a longer summary of what this model is. -->
This model is a continual learning version of Aguie_v0.1
### Trained Data
- 3,000,000 inst data
### License
This model is licensed under the cc-by-nc-nd-4.0. |
RichardErkhov/vicgalle_-_Configurable-Janus-7B-gguf | RichardErkhov | "2024-09-20T13:22:25Z" | 724 | 0 | null | [
"gguf",
"arxiv:2203.05482",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-09-20T07:58:31Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Configurable-Janus-7B - GGUF
- Model creator: https://huggingface.co/vicgalle/
- Original model: https://huggingface.co/vicgalle/Configurable-Janus-7B/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Configurable-Janus-7B.Q2_K.gguf](https://huggingface.co/RichardErkhov/vicgalle_-_Configurable-Janus-7B-gguf/blob/main/Configurable-Janus-7B.Q2_K.gguf) | Q2_K | 2.53GB |
| [Configurable-Janus-7B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/vicgalle_-_Configurable-Janus-7B-gguf/blob/main/Configurable-Janus-7B.IQ3_XS.gguf) | IQ3_XS | 2.81GB |
| [Configurable-Janus-7B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/vicgalle_-_Configurable-Janus-7B-gguf/blob/main/Configurable-Janus-7B.IQ3_S.gguf) | IQ3_S | 2.96GB |
| [Configurable-Janus-7B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/vicgalle_-_Configurable-Janus-7B-gguf/blob/main/Configurable-Janus-7B.Q3_K_S.gguf) | Q3_K_S | 2.95GB |
| [Configurable-Janus-7B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/vicgalle_-_Configurable-Janus-7B-gguf/blob/main/Configurable-Janus-7B.IQ3_M.gguf) | IQ3_M | 3.06GB |
| [Configurable-Janus-7B.Q3_K.gguf](https://huggingface.co/RichardErkhov/vicgalle_-_Configurable-Janus-7B-gguf/blob/main/Configurable-Janus-7B.Q3_K.gguf) | Q3_K | 3.28GB |
| [Configurable-Janus-7B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/vicgalle_-_Configurable-Janus-7B-gguf/blob/main/Configurable-Janus-7B.Q3_K_M.gguf) | Q3_K_M | 3.28GB |
| [Configurable-Janus-7B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/vicgalle_-_Configurable-Janus-7B-gguf/blob/main/Configurable-Janus-7B.Q3_K_L.gguf) | Q3_K_L | 3.56GB |
| [Configurable-Janus-7B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/vicgalle_-_Configurable-Janus-7B-gguf/blob/main/Configurable-Janus-7B.IQ4_XS.gguf) | IQ4_XS | 3.67GB |
| [Configurable-Janus-7B.Q4_0.gguf](https://huggingface.co/RichardErkhov/vicgalle_-_Configurable-Janus-7B-gguf/blob/main/Configurable-Janus-7B.Q4_0.gguf) | Q4_0 | 3.83GB |
| [Configurable-Janus-7B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/vicgalle_-_Configurable-Janus-7B-gguf/blob/main/Configurable-Janus-7B.IQ4_NL.gguf) | IQ4_NL | 3.87GB |
| [Configurable-Janus-7B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/vicgalle_-_Configurable-Janus-7B-gguf/blob/main/Configurable-Janus-7B.Q4_K_S.gguf) | Q4_K_S | 3.86GB |
| [Configurable-Janus-7B.Q4_K.gguf](https://huggingface.co/RichardErkhov/vicgalle_-_Configurable-Janus-7B-gguf/blob/main/Configurable-Janus-7B.Q4_K.gguf) | Q4_K | 4.07GB |
| [Configurable-Janus-7B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/vicgalle_-_Configurable-Janus-7B-gguf/blob/main/Configurable-Janus-7B.Q4_K_M.gguf) | Q4_K_M | 4.07GB |
| [Configurable-Janus-7B.Q4_1.gguf](https://huggingface.co/RichardErkhov/vicgalle_-_Configurable-Janus-7B-gguf/blob/main/Configurable-Janus-7B.Q4_1.gguf) | Q4_1 | 4.24GB |
| [Configurable-Janus-7B.Q5_0.gguf](https://huggingface.co/RichardErkhov/vicgalle_-_Configurable-Janus-7B-gguf/blob/main/Configurable-Janus-7B.Q5_0.gguf) | Q5_0 | 4.65GB |
| [Configurable-Janus-7B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/vicgalle_-_Configurable-Janus-7B-gguf/blob/main/Configurable-Janus-7B.Q5_K_S.gguf) | Q5_K_S | 4.65GB |
| [Configurable-Janus-7B.Q5_K.gguf](https://huggingface.co/RichardErkhov/vicgalle_-_Configurable-Janus-7B-gguf/blob/main/Configurable-Janus-7B.Q5_K.gguf) | Q5_K | 4.78GB |
| [Configurable-Janus-7B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/vicgalle_-_Configurable-Janus-7B-gguf/blob/main/Configurable-Janus-7B.Q5_K_M.gguf) | Q5_K_M | 4.78GB |
| [Configurable-Janus-7B.Q5_1.gguf](https://huggingface.co/RichardErkhov/vicgalle_-_Configurable-Janus-7B-gguf/blob/main/Configurable-Janus-7B.Q5_1.gguf) | Q5_1 | 5.07GB |
| [Configurable-Janus-7B.Q6_K.gguf](https://huggingface.co/RichardErkhov/vicgalle_-_Configurable-Janus-7B-gguf/blob/main/Configurable-Janus-7B.Q6_K.gguf) | Q6_K | 5.53GB |
| [Configurable-Janus-7B.Q8_0.gguf](https://huggingface.co/RichardErkhov/vicgalle_-_Configurable-Janus-7B-gguf/blob/main/Configurable-Janus-7B.Q8_0.gguf) | Q8_0 | 7.17GB |
Original model description:
---
base_model:
- vicgalle/Configurable-Mistral-7B
- kaist-ai/janus-dpo-7b
library_name: transformers
tags:
- mergekit
- merge
license: mit
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [linear](https://arxiv.org/abs/2203.05482) merge method.
### Models Merged
The following models were included in the merge:
* [vicgalle/Configurable-Mistral-7B](https://huggingface.co/vicgalle/Configurable-Mistral-7B)
* [kaist-ai/janus-dpo-7b](https://huggingface.co/kaist-ai/janus-dpo-7b)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: vicgalle/Configurable-Mistral-7B
parameters:
weight: 1.0
- model: kaist-ai/janus-dpo-7b
parameters:
weight: 1.0
merge_method: linear
dtype: float16
```
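For reference, here is a sketch of reproducing this merge programmatically, assuming a recent mergekit with the Python API shown in its README (the `mergekit-yaml config.yaml ./output` CLI is the more common route):

```python
# pip install mergekit
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# config.yaml holds the YAML configuration shown above.
with open("config.yaml", "r", encoding="utf-8") as f:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(f))

run_merge(
    merge_config,
    out_path="./Configurable-Janus-7B-merged",
    options=MergeOptions(cuda=False, copy_tokenizer=True),
)
```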
|
bowilleatyou/8733610d-77ef-4499-8e73-4f9a07fba9a5 | bowilleatyou | "2025-03-26T12:16:17Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2025-03-26T11:57:30Z" | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
umutarpayy/mat10_bert | umutarpayy | "2025-03-19T08:46:32Z" | 4 | 0 | transformers | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"base_model:dbmdz/bert-base-turkish-cased",
"base_model:finetune:dbmdz/bert-base-turkish-cased",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2025-03-19T08:34:16Z" | ---
library_name: transformers
license: mit
base_model: dbmdz/bert-base-turkish-cased
tags:
- generated_from_keras_callback
model-index:
- name: umutarpayy/mat10_bert
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# umutarpayy/mat10_bert
This model is a fine-tuned version of [dbmdz/bert-base-turkish-cased](https://huggingface.co/dbmdz/bert-base-turkish-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0776
- Train Accuracy: 0.9751
- Validation Loss: 0.0492
- Validation Accuracy: 0.9813
- Epoch: 11
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'transformers.optimization_tf', 'class_name': 'WarmUp', 'config': {'initial_learning_rate': 3e-05, 'decay_schedule_fn': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 6502, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'warmup_steps': 722, 'power': 1.0, 'name': None}, 'registered_name': 'WarmUp'}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
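The serialized optimizer above is transformers' standard linear-warmup schedule (a WarmUp wrapping a PolynomialDecay with power 1.0). A minimal sketch of recreating it with `transformers.create_optimizer`, with step counts copied from the config above:

```python
from transformers import create_optimizer

# create_optimizer builds the decay over (num_train_steps - num_warmup_steps),
# so to reproduce decay_steps=6502 with 722 warmup steps we pass their sum.
optimizer, lr_schedule = create_optimizer(
    init_lr=3e-5,
    num_train_steps=6502 + 722,
    num_warmup_steps=722,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```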
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 2.2692 | 0.3666 | 1.1667 | 0.6328 | 0 |
| 0.9415 | 0.6985 | 0.6866 | 0.7718 | 1 |
| 0.6443 | 0.7888 | 0.4711 | 0.8506 | 2 |
| 0.4827 | 0.8463 | 0.3124 | 0.9015 | 3 |
| 0.3682 | 0.8837 | 0.2413 | 0.9232 | 4 |
| 0.2845 | 0.9115 | 0.1868 | 0.9450 | 5 |
| 0.2158 | 0.9326 | 0.1263 | 0.9658 | 6 |
| 0.1762 | 0.9440 | 0.0866 | 0.9772 | 7 |
| 0.1431 | 0.9529 | 0.0849 | 0.9689 | 8 |
| 0.1137 | 0.9629 | 0.0652 | 0.9793 | 9 |
| 0.0907 | 0.9709 | 0.0558 | 0.9803 | 10 |
| 0.0776 | 0.9751 | 0.0492 | 0.9813 | 11 |
### Framework versions
- Transformers 4.48.3
- TensorFlow 2.18.0
- Datasets 3.4.1
- Tokenizers 0.21.1
|
Owhslp/nous_researcher_tuning_2_61 | Owhslp | "2024-03-15T11:11:04Z" | 113 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-03-15T10:49:27Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
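The card itself is an unfilled template, but the repository's tags indicate a Gemma text-generation checkpoint, so a plausible (hedged, undocumented) starting point is:

```python
from transformers import pipeline

# Assumes the checkpoint loads as a standard causal-LM pipeline.
generator = pipeline("text-generation", model="Owhslp/nous_researcher_tuning_2_61")
prompt = "The key idea of reinforcement learning is"
print(generator(prompt, max_new_tokens=50)[0]["generated_text"])
```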
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
dimi1357/LunarLander-v2 | dimi1357 | "2023-05-28T15:36:18Z" | 0 | 0 | null | [
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] | reinforcement-learning | "2023-05-28T15:30:58Z" | ---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -232.16 +/- 167.18
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
|
rk68/phi-1_5-finetuned-aqua-rat-5k | rk68 | "2024-03-17T14:23:08Z" | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:microsoft/phi-1_5",
"base_model:adapter:microsoft/phi-1_5",
"license:mit",
"region:us"
] | null | "2024-03-17T14:11:16Z" | ---
license: mit
library_name: peft
tags:
- generated_from_trainer
base_model: microsoft/phi-1_5
model-index:
- name: phi-1_5-finetuned-aqua-rat-5k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# phi-1_5-finetuned-aqua-rat-5k
This model is a fine-tuned version of [microsoft/phi-1_5](https://huggingface.co/microsoft/phi-1_5) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
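In the absence of a documented workflow, a minimal, hypothetical loading sketch with `peft` might look like the following; it assumes this repo is a standard PEFT adapter whose config points at the `microsoft/phi-1_5` base model.

```python
# Hypothetical usage sketch; check the adapter config before relying on it.
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

model = AutoPeftModelForCausalLM.from_pretrained("rk68/phi-1_5-finetuned-aqua-rat-5k")
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-1_5")

prompt = "Question: A train travels 60 km in 45 minutes. What is its speed in km/h? Answer:"
inputs = tokenizer(prompt, return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=60)[0]))
```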
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 |
latincy/la_core_web_sm | latincy | "2025-02-17T00:49:18Z" | 114 | 1 | spacy | [
"spacy",
"token-classification",
"la",
"license:mit",
"model-index",
"region:us"
] | token-classification | "2023-04-29T21:38:39Z" | ---
tags:
- spacy
- token-classification
language:
- la
license: mit
model-index:
- name: la_core_web_sm
results:
- task:
name: NER
type: token-classification
metrics:
- name: NER Precision
type: precision
value: 0.8282608696
- name: NER Recall
type: recall
value: 0.8708571429
- name: NER F Score
type: f_score
value: 0.8490250696
- task:
name: TAG
type: token-classification
metrics:
- name: TAG (XPOS) Accuracy
type: accuracy
value: 0.9464849275
- task:
name: POS
type: token-classification
metrics:
- name: POS (UPOS) Accuracy
type: accuracy
value: 0.9655548726
- task:
name: MORPH
type: token-classification
metrics:
- name: Morph (UFeats) Accuracy
type: accuracy
value: 0.9133882583
- task:
name: LEMMA
type: token-classification
metrics:
- name: Lemma Accuracy
type: accuracy
value: 0.9402139711
- task:
name: UNLABELED_DEPENDENCIES
type: token-classification
metrics:
- name: Unlabeled Attachment Score (UAS)
type: f_score
value: 0.8317671849
- task:
name: LABELED_DEPENDENCIES
type: token-classification
metrics:
- name: Labeled Attachment Score (LAS)
type: f_score
value: 0.7775998406
- task:
name: SENTS
type: token-classification
metrics:
- name: Sentences F-Score
type: f_score
value: 0.9941943258
---
| Feature | Description |
| --- | --- |
| **Name** | `la_core_web_sm` |
| **Version** | `3.8.0` |
| **spaCy** | `>=3.8.4,<3.9.0` |
| **Default Pipeline** | `senter`, `normer`, `tok2vec`, `tagger`, `morphologizer`, `trainable_lemmatizer`, `parser`, `lookup_lemmatizer`, `ner`, `remorpher` |
| **Components** | `senter`, `normer`, `tok2vec`, `tagger`, `morphologizer`, `trainable_lemmatizer`, `parser`, `lookup_lemmatizer`, `ner`, `remorpher` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | UD_Latin-Perseus (via Gamba/Zeman 2023)<br>UD_Latin-PROIEL (via Gamba/Zeman 2023)<br>UD_Latin-ITTB (via Gamba/Zeman 2023)<br>UD_Latin-LLCT (via Gamba/Zeman 2023)<br>UD_Latin-UDante (via Gamba/Zeman 2023)<br>CIRCSE/LASLA: LASLA Corpus<br>UD_Latin-CIRCSE<br>LatinCy Assets |
| **License** | `MIT` |
| **Author** | [Patrick J. Burns; with Nora Bernhardt [ner], Tim Geelhaar [tagger, morphologizer, parser, ner], Vincent Koch [ner]](https://diyclassics.github.io/) |
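A minimal usage sketch, assuming the `la_core_web_sm` package has already been installed from this repo (see the LatinCy installation instructions for the exact `pip` command):

```python
# Basic pipeline usage; the model behaves like any other spaCy pipeline.
import spacy

nlp = spacy.load("la_core_web_sm")
doc = nlp("Gallia est omnis divisa in partes tres.")

for token in doc:
    print(token.text, token.lemma_, token.pos_, token.dep_)

# Named entities from the ner component (LOC, NORP, PERSON)
print([(ent.text, ent.label_) for ent in doc.ents])
```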
### Label Scheme
<details>
<summary>View label scheme (855 labels for 4 components)</summary>
| Component | Labels |
| --- | --- |
| **`tagger`** | `ADJ`, `ADP`, `ADV`, `CCONJ`, `DET`, `INTJ`, `NOUN`, `PART`, `SCONJ`, `X`, `_`, `adjective`, `adverb`, `conjunction`, `determiner`, `interjection`, `noun`, `number`, `particle`, `preposition`, `pronoun`, `proper_noun`, `punc`, `unknown`, `verb` |
| **`morphologizer`** | `POS=ADV`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=NOUN`, `POS=AUX\|Tense=Pres\|VerbForm=Inf`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=ADJ`, `POS=PUNCT`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=NOUN`, `POS=PART`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=NOUN`, `POS=ADP`, `Gender=Neut\|POS=NUM`, `Case=Abl\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Case=Abl\|Gender=Neut\|Number=Sing\|POS=DET`, `POS=SCONJ`, `POS=CCONJ`, `Mood=Sub\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Abl\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=PRON`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Pass`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=DET`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=DET`, `Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Inf\|Voice=Act`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Case=Gen\|Gender=Neut\|Number=Plur\|POS=PRON\|Person=3`, `Case=Gen\|Gender=Neut\|Number=Plur\|POS=PRON`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Pass`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=DET`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Perf\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=PRON`, `Case=Abl\|Gender=Fem\|Number=Sing\|POS=PRON`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Perf\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=DET`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Case=Abl\|Number=Plur\|POS=PRON\|Person=1`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Abl\|Gender=Neut\|Mood=Ger\|Number=Sing\|POS=VERB\|Tense=Fut\|VerbForm=Part\|Voice=Pass`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Abl\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Pass`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=DET`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Case=Abl\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Case=Abl\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Abl\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Perf\|VerbForm=Part\|Voice=Pass`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=PRON`, `Case=Gen\|Gender=Neut\|Number=Plur\|POS=ADJ`, 
`Case=Abl\|Gender=Fem\|Number=Sing\|POS=DET`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Case=Abl\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3`, `Case=Abl\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Perf\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Gender=Neut\|Mood=Gdv\|Number=Sing\|POS=VERB\|Tense=Fut\|VerbForm=Part\|Voice=Pass`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Neut\|Number=Plur\|POS=DET`, `Case=Acc\|Gender=Neut\|Number=Plur\|POS=PRON`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Inf\|Voice=Act`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Case=Gen\|Gender=Neut\|Number=Plur\|POS=DET`, `Case=Gen\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Perf\|VerbForm=Part\|Voice=Pass`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=PRON`, `Case=Abl\|Gender=Neut\|Number=Sing\|POS=PRON`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3`, `Case=Dat\|POS=PRON\|Person=3`, `Case=Abl\|Gender=Neut\|Number=Plur\|POS=DET`, `Case=Abl\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Acc\|POS=PRON\|Person=3`, `Case=Abl\|Gender=Neut\|Number=Plur\|POS=PRON\|Person=3`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=DET`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=PRON`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Inf\|Voice=Act`, `Case=Abl\|Gender=Masc\|Number=Sing\|POS=DET`, `Case=Acc\|POS=PRON`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=DET`, `POS=NUM`, `POS=PROPN`, `POS=ADJ`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Fut\|VerbForm=Fin\|Voice=Act`, `Case=Abl\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=DET`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=3`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=PRON`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=NUM`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Gender=Neut\|Number=Plur\|POS=NUM`, `Case=Nom\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Nom\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=NUM`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=3\|Tense=Imp\|VerbForm=Fin\|Voice=Act`, `Mood=Sub\|Number=Sing\|POS=AUX\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Fut\|VerbForm=Fin\|Voice=Pass`, `Case=Abl\|Gender=Neut\|Number=Plur\|POS=VERB\|Tense=Perf\|VerbForm=Part\|Voice=Pass`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Inf\|Voice=Pass`, `Case=Gen\|Gender=Fem\|Mood=Gdv\|Number=Sing\|POS=VERB\|Tense=Fut\|VerbForm=Part\|Voice=Pass`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3`, `Case=Abl\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=PRON`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=DET`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Abl\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=3`, `Case=Abl\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Case=Abl\|Gender=Fem\|Number=Plur\|POS=VERB\|Tense=Perf\|VerbForm=Part\|Voice=Pass`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=DET`, `Case=Nom\|POS=PRON`, `Case=Abl\|Gender=Fem\|Number=Sing\|POS=ADJ`, 
`Case=Abl\|Gender=Fem\|Number=Sing\|POS=NUM`, `Case=Nom\|Gender=Neut\|Number=Plur\|POS=PRON`, `Case=Acc\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Abl\|Gender=Neut\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=DET`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=3`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=3`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Dat\|Gender=Fem\|Number=Plur\|POS=PRON`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Inf\|Voice=Pass`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=PRON`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=DET`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=DET`, `Case=Gen\|POS=PRON\|Person=3`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=NUM`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Case=Abl\|POS=PRON\|Person=3`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=DET`, `Case=Acc\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Case=Gen\|Gender=Neut\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Acc\|Gender=Neut\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=DET`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Imp\|VerbForm=Fin\|Voice=Pass`, `POS=NOUN`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3`, `Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Inf\|Voice=Pass`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=NUM`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=VERB\|Tense=Perf\|VerbForm=Part\|Voice=Pass`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=3`, `Case=Abl\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Perf\|VerbForm=Part\|Voice=Pass`, `Case=Abl\|Gender=Fem\|Number=Plur\|POS=ADJ`, `POS=DET`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Perf\|VerbForm=Fin\|Voice=Act`, `Case=Acc\|Gender=Neut\|Number=Plur\|POS=VERB\|Tense=Perf\|VerbForm=Part\|Voice=Pass`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Gen\|Gender=Neut\|Number=Plur\|POS=VERB\|Tense=Perf\|VerbForm=Part\|Voice=Pass`, `Case=Acc\|Gender=Fem\|Mood=Gdv\|Number=Sing\|POS=VERB\|Tense=Fut\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Gender=Neut\|Number=Plur\|POS=PRON\|Person=3`, `Case=Nom\|Gender=Neut\|Number=Plur\|POS=VERB\|Tense=Perf\|VerbForm=Part\|Voice=Pass`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Perf\|VerbForm=Fin\|Voice=Act`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Perf\|VerbForm=Fin\|Voice=Act`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Perf\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Perf\|VerbForm=Part\|Voice=Pass`, `Case=Abl\|Number=Sing\|POS=PRON\|Person=1`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Fut\|VerbForm=Fin\|Voice=Act`, `POS=X`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Perf\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Gender=Neut\|Number=Plur\|POS=DET`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=VERB\|Tense=Perf\|VerbForm=Part\|Voice=Pass`, 
`Mood=Sub\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Acc\|Gender=Neut\|Number=Plur\|POS=PRON\|Person=3`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=2`, `Case=Voc\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Case=Abl\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=VERB\|Tense=Perf\|VerbForm=Part\|Voice=Pass`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=VERB\|Tense=Perf\|VerbForm=Part\|Voice=Pass`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=1`, `Case=Abl\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Fut\|VerbForm=Fin\|Voice=Pass`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Fut\|VerbForm=Fin\|Voice=Act`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=DET`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=PRON`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Perf\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=1`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Pass`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Fut\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Perf\|VerbForm=Fin\|Voice=Act`, `Case=Acc\|Gender=Neut\|Mood=Ger\|Number=Sing\|POS=VERB\|Tense=Fut\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=PRON`, `Case=Acc\|Gender=Masc\|Mood=Gdv\|Number=Plur\|POS=VERB\|Tense=Fut\|VerbForm=Part\|Voice=Pass`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=3`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Perf\|VerbForm=Fin`, `Case=Abl\|Gender=Masc\|Number=Plur\|POS=PRON`, `Case=Dat\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Case=Dat\|Gender=Fem\|Number=Plur\|POS=DET`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=3\|Tense=Imp\|VerbForm=Fin\|Voice=Pass`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=NUM`, `Case=Dat\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Abl\|Gender=Masc\|Number=Plur\|POS=DET`, `Case=Acc\|Gender=Neut\|Number=Plur\|POS=NUM`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=PRON`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Imp\|VerbForm=Fin\|Voice=Act`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=2`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Fut\|VerbForm=Fin\|Voice=Pass`, `Case=Abl\|Gender=Fem\|Number=Plur\|POS=PRON`, `Mood=Imp\|Number=Plur\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=DET`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Perf\|VerbForm=Part\|Voice=Pass`, `Case=Abl\|Gender=Masc\|Number=Sing\|POS=PRON`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Dat\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=3`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=PRON`, `POS=ADV\|VerbForm=Fin`, `Case=Abl\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=3`, `Case=Gen\|Gender=Neut\|Mood=Gdv\|Number=Plur\|POS=VERB\|Tense=Fut\|VerbForm=Part\|Voice=Pass`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Fut\|VerbForm=Fin`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=3`, 
`Case=Dat\|Gender=Neut\|Number=Plur\|POS=DET`, `Case=Gen\|Gender=Neut\|Mood=Ger\|Number=Sing\|POS=AUX\|Tense=Fut\|VerbForm=Part\|Voice=Pass`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Fut\|VerbForm=Fin\|Voice=Act`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Pass`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=PROPN`, `Case=Gen\|Gender=Neut\|Mood=Ger\|Number=Sing\|POS=VERB\|Tense=Fut\|VerbForm=Part\|Voice=Pass`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=PRON`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=DET`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=VERB\|VerbForm=Part`, `POS=PRON`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=AUX\|VerbForm=Part`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=NUM`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=NUM`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=VERB\|Tense=Perf\|VerbForm=Part\|Voice=Pass`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=3\|Tense=Imp\|VerbForm=Fin\|Voice=Act`, `Mood=Sub\|Number=Plur\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `POS=AUX\|VerbForm=Inf`, `Mood=Sub\|Number=Plur\|POS=AUX\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Case=Abl\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Dat\|Gender=Neut\|Number=Plur\|POS=PRON`, `Case=Dat\|Gender=Neut\|Number=Plur\|POS=NUM`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=NUM`, `Case=Abl\|Gender=Neut\|Number=Plur\|POS=NUM`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Perf\|VerbForm=Part\|Voice=Pass`, `Case=Dat\|Gender=Fem\|Number=Plur\|POS=NUM`, `Case=Abl\|Gender=Neut\|Number=Plur\|POS=PRON`, `Case=Abl\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=VERB\|Tense=Perf\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=PRON`, `Case=Abl\|POS=PRON`, `Case=Dat\|Gender=Neut\|Number=Plur\|POS=VERB\|Tense=Perf\|VerbForm=Part\|Voice=Pass`, `Case=Abl\|Gender=Fem\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Abl\|Gender=Neut\|Number=Sing\|POS=NUM`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=PRON`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pqp\|VerbForm=Fin\|Voice=Act`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=NUM`, `Case=Abl\|Gender=Masc\|Number=Sing\|POS=NUM`, `Case=Abl\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Perf\|VerbForm=Part\|Voice=Pass`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=VERB\|VerbForm=Part`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Perf\|VerbForm=Part\|Voice=Pass`, `Case=Dat\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Abl\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Mood=Sub\|Number=Sing\|POS=AUX\|Person=3\|Tense=Perf\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Perf\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Perf\|VerbForm=Inf\|Voice=Act`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=NUM`, 
`Case=Nom\|Gender=Neut\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Acc\|Gender=Neut\|Mood=Gdv\|Number=Sing\|POS=VERB\|Tense=Fut\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=1`, `Mood=Imp\|Number=Sing\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Mood=Imp\|Number=Sing\|POS=AUX\|Person=2\|Tense=Fut\|VerbForm=Fin`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=1`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=NUM`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=PRON`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pqp\|VerbForm=Fin`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=PRON`, `Case=Abl\|Gender=Neut\|Mood=Ger\|Number=Sing\|POS=AUX\|Tense=Fut\|VerbForm=Part\|Voice=Pass`, `Case=Abl\|Gender=Fem\|Number=Plur\|POS=DET`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Fut\|VerbForm=Part\|Voice=Act`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Fut\|VerbForm=Fin`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=3\|Tense=Perf\|VerbForm=Fin\|Voice=Act`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=1`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Case=Abl\|Gender=Masc\|Mood=Gdv\|Number=Plur\|POS=VERB\|Tense=Fut\|VerbForm=Part\|Voice=Pass`, `Case=Dat\|Gender=Fem\|Mood=Gdv\|Number=Sing\|POS=VERB\|Tense=Fut\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3`, `Case=Abl\|Gender=Fem\|Mood=Gdv\|Number=Sing\|POS=VERB\|Tense=Fut\|VerbForm=Part\|Voice=Pass`, `Case=Dat\|Gender=Fem\|Number=Plur\|POS=VERB\|Tense=Perf\|VerbForm=Part\|Voice=Pass`, `Mood=Sub\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pqp\|VerbForm=Fin`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=3\|Tense=Imp\|VerbForm=Fin\|Voice=Pass`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=NUM`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=FutPerf\|VerbForm=Fin`, `Case=Nom\|Gender=Fem\|Mood=Gdv\|Number=Sing\|POS=VERB\|Tense=Fut\|VerbForm=Part\|Voice=Pass`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Imp\|VerbForm=Fin\|Voice=Act`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Gender=Masc\|Mood=Gdv\|Number=Sing\|POS=VERB\|Tense=Fut\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Mood=Sub\|Number=Plur\|POS=AUX\|Person=3\|Tense=Perf\|VerbForm=Fin`, `Case=Acc\|Gender=Neut\|Mood=Ger\|Number=Sing\|POS=AUX\|Tense=Fut\|VerbForm=Part\|Voice=Pass`, `Case=Dat\|Gender=Neut\|Number=Plur\|POS=PRON\|Person=3`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pqp\|VerbForm=Fin\|Voice=Act`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=AUX\|VerbForm=Part`, `Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Perf\|VerbForm=Inf\|Voice=Act`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Imp\|VerbForm=Fin\|Voice=Pass`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Imp\|VerbForm=Fin\|Voice=Pass`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Pass`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Fut\|VerbForm=Part\|Voice=Act`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=PRON`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=VERB\|Tense=Perf\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Gender=Neut\|Mood=Gdv\|Number=Plur\|POS=VERB\|Tense=Fut\|VerbForm=Part\|Voice=Pass`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pqp\|VerbForm=Fin\|Voice=Act`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=NUM`, 
`Mood=Imp\|Number=Sing\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin\|Voice=Pass`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=1`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Perf\|VerbForm=Part\|Voice=Pass`, `Case=Abl\|Gender=Masc\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Fut\|VerbForm=Fin\|Voice=Act`, `Case=Dat\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Nom\|Number=Sing\|POS=ADJ`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pqp\|VerbForm=Fin\|Voice=Act`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Perf\|VerbForm=Part\|Voice=Pass`, `Case=Abl\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=VERB\|VerbForm=Part`, `Case=Nom\|Gender=Neut\|Number=Plur\|POS=VERB\|VerbForm=Part`, `Case=Abl\|Gender=Neut\|Mood=Gdv\|Number=Plur\|POS=VERB\|Tense=Fut\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=NUM`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Case=Dat\|Gender=Neut\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Dat\|Gender=Fem\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Abl\|Gender=Masc\|Number=Plur\|POS=VERB\|Tense=Perf\|VerbForm=Part\|Voice=Pass`, `Case=Abl\|Gender=Fem\|Number=Plur\|POS=NUM`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=NUM`, `POS=SCONJ\|VerbForm=Fin`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=1\|Tense=Imp\|VerbForm=Fin\|Voice=Act`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Acc\|Gender=Neut\|Mood=Gdv\|Number=Plur\|POS=VERB\|Tense=Fut\|VerbForm=Part\|Voice=Pass`, `POS=CCONJ\|VerbForm=Fin`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=PROPN`, `Case=Acc\|Gender=Fem\|Mood=Gdv\|Number=Plur\|POS=VERB\|Tense=Fut\|VerbForm=Part\|Voice=Pass`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=FutPerf\|VerbForm=Fin\|Voice=Act`, `Case=Gen\|Gender=Neut\|Number=Plur\|POS=NUM`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=1\|Tense=Perf\|VerbForm=Fin\|Voice=Act`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pqp\|VerbForm=Fin\|Voice=Act`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=1\|Tense=Imp\|VerbForm=Fin\|Voice=Pass`, `Case=Voc\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Perf\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=AUX\|VerbForm=Part`, `Gender=Fem\|POS=NUM`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Fut\|VerbForm=Fin`, `Mood=Sub\|Number=Plur\|POS=AUX\|Person=1\|Tense=Perf\|VerbForm=Fin`, `Case=Abl\|Gender=Neut\|Number=Plur\|POS=AUX\|VerbForm=Part`, `Mood=Sub\|Number=Plur\|POS=AUX\|Person=3\|Tense=Pqp\|VerbForm=Fin`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=NUM`, `Case=Abl\|Gender=Fem\|Mood=Gdv\|Number=Plur\|POS=VERB\|Tense=Fut\|VerbForm=Part\|Voice=Pass`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=AUX\|Tense=Pres\|VerbForm=Inf`, `Case=Voc\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=3\|Tense=Perf\|VerbForm=Fin\|Voice=Act`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Dat\|Gender=Neut\|Mood=Ger\|Number=Sing\|POS=VERB\|Tense=Fut\|VerbForm=Part\|Voice=Pass`, 
`Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=FutPerf\|VerbForm=Fin\|Voice=Act`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=NUM`, `POS=VERB`, `Gender=Masc\|Number=Sing\|POS=DET`, `Mood=Sub\|Number=Plur\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Case=Abl\|Gender=Masc\|Number=Sing\|POS=AUX\|VerbForm=Part`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=AUX\|VerbForm=Part`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=FutPerf\|VerbForm=Fin`, `Case=Abl\|Number=Sing\|POS=PRON\|Person=2`, `Case=Gen\|Gender=Neut\|Number=Plur\|POS=PROPN`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Fut\|VerbForm=Part\|Voice=Act`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Fut\|VerbForm=Fin\|Voice=Pass`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=VERB\|VerbForm=Part`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=2`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=1`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=2`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Fut\|VerbForm=Part\|Voice=Act`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=VERB\|Tense=Fut\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Pqp\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=NUM`, `Case=Abl\|Gender=Masc\|Number=Plur\|POS=NUM`, `Case=Voc\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Case=Voc\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3`, `Case=Nom\|Gender=Fem\|Mood=Gdv\|Number=Plur\|POS=VERB\|Tense=Fut\|VerbForm=Part\|Voice=Pass`, `Case=Abl\|Gender=Neut\|Number=Sing\|POS=PROPN`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=VERB\|VerbForm=Part`, `Gender=Masc\|POS=NOUN`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Case=Nom\|Gender=Neut\|Number=Plur\|POS=VERB\|Tense=Fut\|VerbForm=Part\|Voice=Act`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=AUX\|VerbForm=Part`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=AUX\|VerbForm=Part`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=FutPerf\|VerbForm=Fin\|Voice=Act`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Perf\|VerbForm=Fin\|Voice=Act`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Fut\|VerbForm=Fin\|Voice=Pass`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=1\|Tense=Imp\|VerbForm=Fin\|Voice=Act`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin\|Voice=Pass`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Pass`, `Case=Acc\|Gender=Neut\|Number=Plur\|POS=VERB\|VerbForm=Part`, `Case=Dat\|Gender=Masc\|Mood=Gdv\|Number=Plur\|POS=VERB\|Tense=Fut\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Perf\|VerbForm=Inf\|Voice=Act`, `Case=Abl\|Gender=Neut\|Mood=Gdv\|Number=Sing\|POS=VERB\|Tense=Fut\|VerbForm=Part\|Voice=Pass`, `Case=Acc\|Gender=Masc\|Mood=Gdv\|Number=Sing\|POS=VERB\|Tense=Fut\|VerbForm=Part\|Voice=Pass`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin\|Voice=Pass`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=FutPerf\|VerbForm=Fin\|Voice=Act`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin\|Voice=Pass`, `Case=Nom\|Gender=Masc\|Mood=Gdv\|Number=Plur\|POS=VERB\|Tense=Fut\|VerbForm=Part\|Voice=Pass`, 
`Case=Nom\|Gender=Masc\|Number=Plur\|POS=VERB\|Tense=Fut\|VerbForm=Part\|Voice=Act`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Pass`, `Case=Gen\|Number=Sing\|POS=PRON\|Person=2`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=1`, `Mood=Sub\|Number=Plur\|POS=AUX\|Person=1\|Tense=Imp\|VerbForm=Fin`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=2`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Imp\|VerbForm=Fin\|Voice=Act`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=PROPN`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=2`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin\|Voice=Pass`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=AUX\|VerbForm=Part`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=PROPN`, `Case=Abl\|Gender=Masc\|Number=Plur\|POS=AUX\|VerbForm=Part`, `Gender=Fem\|POS=PROPN`, `Case=Voc\|Gender=Masc\|Number=Plur\|POS=DET`, `Case=Voc\|Gender=Masc\|Number=Plur\|POS=VERB\|Tense=Perf\|VerbForm=Part\|Voice=Pass`, `Case=Abl\|Number=Plur\|POS=PRON\|Person=2`, `Mood=Imp\|Number=Plur\|POS=AUX\|Person=2\|Tense=Fut\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=2\|Tense=Imp\|VerbForm=Fin`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=VERB\|VerbForm=Part`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Case=Nom\|Gender=Neut\|Mood=Ger\|Number=Sing\|POS=VERB\|Tense=Fut\|VerbForm=Part\|Voice=Pass`, `Case=Acc\|Gender=Neut\|Number=Plur\|POS=AUX\|VerbForm=Part`, `Case=Abl\|Gender=Neut\|Number=Sing\|POS=VERB\|VerbForm=Part`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=FutPerf\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Fut\|VerbForm=Fin\|Voice=Act`, `Case=Dat\|POS=PRON`, `Gender=Masc\|POS=DET`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=2\|Tense=Imp\|VerbForm=Fin\|Voice=Act`, `Gender=Masc\|POS=PROPN`, `Gender=Masc\|POS=NUM`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=VERB\|VerbForm=Part`, `Case=Gen\|Gender=Fem\|Mood=Gdv\|Number=Plur\|POS=VERB\|Tense=Fut\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Number=Sing\|POS=DET`, `Gender=Neut\|POS=DET`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Fut\|VerbForm=Part\|Voice=Act`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=PROPN`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=PROPN`, `Case=Gen\|Gender=Neut\|Mood=Gdv\|Number=Sing\|POS=VERB\|Tense=Fut\|VerbForm=Part\|Voice=Pass`, `Case=Abl\|Number=Plur\|POS=PRON`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=NUM`, `Case=Nom\|Gender=Neut\|Number=Plur\|POS=AUX\|VerbForm=Part`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=VERB\|VerbForm=Part`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=FutPerf\|VerbForm=Fin\|Voice=Act`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Imp\|VerbForm=Fin`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=AUX\|VerbForm=Part`, `Case=Abl\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=2`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=2\|Tense=Imp\|VerbForm=Fin`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pqp\|VerbForm=Fin\|Voice=Act`, `Mood=Sub\|Number=Sing\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Imp\|VerbForm=Fin\|Voice=Act`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=VERB\|VerbForm=Part`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=FutPerf\|VerbForm=Fin\|Voice=Act`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=2\|Tense=Fut\|VerbForm=Fin`, `Case=Abl\|Gender=Neut\|Number=Plur\|POS=PROPN`, 
`Mood=Ind\|Number=Sing\|POS=AUX\|Person=2\|Tense=Fut\|VerbForm=Fin`, `Case=Abl\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=2`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Imp\|VerbForm=Fin`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=AUX\|VerbForm=Part`, `Case=Abl\|Gender=Neut\|Number=Sing\|POS=AUX\|Tense=Pres\|VerbForm=Inf`, `Case=Abl\|Gender=Masc\|Number=Plur\|POS=PROPN`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=PROPN`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=1\|Tense=Perf\|VerbForm=Fin\|Voice=Act`, `POS=VERB\|VerbForm=Part`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=2\|Tense=Perf\|VerbForm=Fin`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Fut\|VerbForm=Part\|Voice=Act`, `Case=Abl\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=2`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=AUX\|VerbForm=Part`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Fut\|VerbForm=Part\|Voice=Act`, `Case=Gen\|Gender=Masc\|Mood=Gdv\|Number=Plur\|POS=VERB\|Tense=Fut\|VerbForm=Part\|Voice=Pass`, `Case=Dat\|Gender=Masc\|Mood=Gdv\|Number=Sing\|POS=VERB\|Tense=Fut\|VerbForm=Part\|Voice=Pass`, `Case=Acc\|Gender=Neut\|Number=Plur\|POS=PROPN`, `POS=VERB\|Tense=Pres\|VerbForm=Inf\|Voice=Act`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=2`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=2`, `POS=VERB\|Tense=Pres\|VerbForm=Inf\|Voice=Pass`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=2`, `Case=Voc\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=FutPerf\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Perf\|VerbForm=Fin`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=2`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=2`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=2`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=PROPN`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Perf\|VerbForm=Fin`, `POS=VERB\|Tense=Perf\|VerbForm=Inf\|Voice=Act`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=3`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=3`, `Case=Abl\|Gender=Fem\|Number=Plur\|POS=PROPN`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=1`, `Case=Acc\|Gender=Neut\|Number=Plur\|POS=AUX\|Tense=Fut\|VerbForm=Part`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=3`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=3`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=2`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Perf\|VerbForm=Fin\|Voice=Act`, `Case=Abl\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=2`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Perf\|VerbForm=Part\|Voice=Act`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=2`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=2\|Tense=FutPerf\|VerbForm=Fin`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=2\|Tense=Pqp\|VerbForm=Fin\|Voice=Act`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Perf\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Gender=Neut\|Number=Plur\|POS=PROPN`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=2\|Tense=Perf\|VerbForm=Fin\|Voice=Act`, `Case=Dat\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=2`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=2`, `Mood=Sub\|Number=Plur\|POS=AUX\|Person=3\|Tense=Imp\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=VERB\|Tense=Perf\|VerbForm=Part\|Voice=Act`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=2\|Tense=Imp\|VerbForm=Fin\|Voice=Act`, 
`Case=Abl\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Fut\|VerbForm=Part\|Voice=Act`, `Case=Abl\|Gender=Masc\|Mood=Gdv\|Number=Sing\|POS=VERB\|Tense=Fut\|VerbForm=Part\|Voice=Pass`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=2\|Tense=FutPerf\|VerbForm=Fin\|Voice=Act`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=2`, `Case=Abl\|Gender=Masc\|Number=Plur\|POS=VERB\|Tense=Fut\|VerbForm=Part\|Voice=Act`, `POS=AUX\|Tense=Perf\|VerbForm=Inf`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Perf\|VerbForm=Part\|Voice=Act`, `Mood=Sub\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pqp\|VerbForm=Fin\|Voice=Act`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Fut\|VerbForm=Fin\|Voice=Act`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=PROPN`, `Case=Abl\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Fut\|VerbForm=Part\|Voice=Act`, `Case=Abl\|Gender=Fem\|Number=Sing\|POS=AUX\|Tense=Fut\|VerbForm=Part`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=2\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Loc\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Case=Gen\|Number=Plur\|POS=PRON\|Person=1`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=2`, `Case=Abl\|Number=Sing\|POS=PRON\|Person=3`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=FutPerf\|VerbForm=Fin\|Voice=Act`, `Number=Plur\|POS=PRON`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Perf\|VerbForm=Part\|Voice=Act`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=2\|Tense=Perf\|VerbForm=Fin\|Voice=Act`, `Gender=Neut\|Number=Sing\|POS=PRON`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Perf\|VerbForm=Part\|Voice=Act`, `Mood=Imp\|Number=Sing\|POS=VERB\|Person=2\|Tense=Fut\|VerbForm=Fin\|Voice=Act`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Fut\|VerbForm=Part\|Voice=Act`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Perf\|VerbForm=Part\|Voice=Act`, `Case=Abl\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Perf\|VerbForm=Part\|Voice=Act`, `POS=INTJ`, `Case=Voc\|Gender=Fem\|Number=Sing\|POS=DET`, `Case=Voc\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=2\|Tense=Pqp\|VerbForm=Fin\|Voice=Act`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=1\|Tense=Imp\|VerbForm=Fin\|Voice=Pass`, `Case=Voc\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pqp\|VerbForm=Fin\|Voice=Act`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Pqp\|VerbForm=Fin`, `Case=Dat\|Gender=Neut\|Mood=Gdv\|Number=Plur\|POS=VERB\|Tense=Fut\|VerbForm=Part\|Voice=Pass`, `Case=Acc\|Gender=Neut\|POS=NUM`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=VERB\|VerbForm=Conv\|Voice=Act`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=PROPN`, `Case=Acc\|Gender=Neut\|Number=Plur\|POS=VERB\|Tense=Fut\|VerbForm=Part\|Voice=Act`, `Case=Gen\|Gender=Masc\|Mood=Gdv\|Number=Sing\|POS=VERB\|Tense=Fut\|VerbForm=Part\|Voice=Pass`, `Case=Dat\|Gender=Fem\|Number=Plur\|POS=PROPN`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Perf\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=VERB\|Tense=Perf\|VerbForm=Part\|Voice=Act`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Fut\|VerbForm=Part\|Voice=Act`, `Mood=Imp\|Number=Plur\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin\|Voice=Pass`, `Case=Voc\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Gender=Fem\|Number=Sing\|POS=NOUN`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=PROPN`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=VERB\|VerbForm=Part`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Imp\|VerbForm=Fin\|Voice=Act`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=AUX\|Tense=Fut\|VerbForm=Part`, 
`Case=Gen\|Number=Plur\|POS=PRON\|Person=3`, `Case=Voc\|Gender=Masc\|Number=Plur\|POS=PROPN`, `Mood=Sub\|Number=Plur\|POS=AUX\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=VERB\|Tense=Perf\|VerbForm=Part\|Voice=Act`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=VERB\|Tense=Perf\|VerbForm=Part\|Voice=Act`, `Case=Gen\|Number=Sing\|POS=PRON\|Person=1`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Perf\|VerbForm=Part\|Voice=Act`, `Case=Voc\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Case=Dat\|Gender=Neut\|Number=Plur\|POS=PROPN`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Pqp\|VerbForm=Fin\|Voice=Act`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=2\|Tense=Imp\|VerbForm=Fin\|Voice=Pass`, `Mood=Sub\|Number=Sing\|POS=AUX\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Case=Nom\|Gender=Neut\|Number=Plur\|POS=AUX\|Tense=Fut\|VerbForm=Part`, `Case=Abl\|Gender=Fem\|Number=Plur\|POS=VERB\|Tense=Perf\|VerbForm=Part\|Voice=Act`, `Case=Loc\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Fut\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=AUX\|Tense=Fut\|VerbForm=Part`, `Case=Gen\|Number=Sing\|POS=PRON\|Person=3`, `Case=Abl\|Number=Plur\|POS=PRON\|Person=3`, `Case=Loc\|Gender=Masc\|Number=Sing\|POS=NOUN`, `_`, `Case=Voc\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=AUX\|Tense=Fut\|VerbForm=Part`, `Gender=Neut\|POS=PROPN`, `Case=Voc\|Gender=Fem\|Number=Plur\|POS=PROPN`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Fut\|VerbForm=Fin\|Voice=Pass`, `Case=Voc\|Number=Sing\|POS=PRON\|Person=2`, `Gender=Fem\|Number=Plur\|POS=NOUN`, `Gender=Masc\|POS=ADJ`, `Case=Gen\|Number=Plur\|POS=PRON\|Person=2`, `Case=Voc\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Voc\|Gender=Masc\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Mood=Sub\|Number=Plur\|POS=AUX\|Person=1\|Tense=Pqp\|VerbForm=Fin`, `Mood=Imp\|Number=Plur\|POS=VERB\|Person=2\|Tense=Fut\|VerbForm=Fin\|Voice=Act`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=2\|Tense=Perf\|VerbForm=Fin`, `Case=Voc\|Gender=Masc\|Number=Sing\|POS=DET`, `Case=Acc\|Number=Sing\|POS=NUM`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Imp\|VerbForm=Fin\|Voice=Act`, `Case=Abl\|Number=Plur\|POS=NUM`, `Case=Voc\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=AUX\|Tense=Fut\|VerbForm=Part`, `Mood=Sub\|Number=Sing\|POS=AUX\|Person=2\|Tense=Perf\|VerbForm=Fin`, `Case=Abl\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Fut\|VerbForm=Part\|Voice=Act`, `Mood=Sub\|Number=Sing\|POS=AUX\|Person=1\|Tense=Perf\|VerbForm=Fin`, `Case=Abl\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Perf\|VerbForm=Part\|Voice=Act`, `Mood=Sub\|Number=Sing\|POS=AUX\|Person=1\|Tense=Imp\|VerbForm=Fin`, `Mood=Sub\|Number=Sing\|POS=AUX\|Person=2\|Tense=Imp\|VerbForm=Fin`, `Mood=Sub\|Number=Plur\|POS=AUX\|Person=2\|Tense=Imp\|VerbForm=Fin`, `Mood=Sub\|Number=Sing\|POS=AUX\|Person=2\|Tense=Pqp\|VerbForm=Fin`, `Mood=Sub\|Number=Plur\|POS=AUX\|Person=2\|Tense=Pqp\|VerbForm=Fin`, `Mood=Sub\|Number=Sing\|POS=AUX\|Person=1\|Tense=Pqp\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Imp\|VerbForm=Fin\|Voice=Pass`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Fut\|VerbForm=Part\|Voice=Act`, `Case=Abl\|Gender=Masc\|Number=Plur\|POS=VERB\|Tense=Perf\|VerbForm=Part\|Voice=Act`, `Case=Abl\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Perf\|VerbForm=Part\|Voice=Act`, 
`Case=Voc\|Gender=Fem\|Number=Plur\|POS=DET`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Pqp\|VerbForm=Fin\|Voice=Act`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Imp\|VerbForm=Fin\|Voice=Pass`, `Case=Voc\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=2\|Tense=Imp\|VerbForm=Fin\|Voice=Pass`, `Case=Voc\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Case=Voc\|Gender=Neut\|Number=Sing\|POS=DET`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=AUX\|Tense=Fut\|VerbForm=Part`, `Case=Dat\|Gender=Fem\|Number=Plur\|POS=VERB\|Tense=Fut\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=VERB\|Tense=Fut\|VerbForm=Part\|Voice=Act`, `Gender=Fem\|Number=Plur\|POS=DET`, `POS=VERB\|Tense=Perf\|VerbForm=Inf\|Voice=Pass`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=AUX\|Tense=Fut\|VerbForm=Part`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=AUX\|Tense=Fut\|VerbForm=Part`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pqp\|VerbForm=Fin\|Voice=Act`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Pqp\|VerbForm=Fin\|Voice=Act`, `Gender=Fem\|Number=Plur\|POS=ADJ`, `POS=VERB\|Tense=Pres\|VerbForm=Inf`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=3`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=AUX\|Tense=Fut\|VerbForm=Part`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=X`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Imp\|VerbForm=Fin\|Voice=Pass`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=AUX\|Tense=Fut\|VerbForm=Part`, `Mood=Imp\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=AUX\|Tense=Fut\|VerbForm=Part`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Pqp\|VerbForm=Fin`, `Case=Gen\|Gender=Neut\|Number=Plur\|POS=AUX\|Tense=Fut\|VerbForm=Part`, `Gender=Fem\|Number=Plur\|POS=PROPN`, `Mood=Imp\|Number=Sing\|POS=AUX\|Person=3\|Tense=Fut\|VerbForm=Fin`, `Case=Abl\|Gender=Masc\|Number=Sing\|POS=VERB\|VerbForm=Conv\|Voice=Act`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=3\|Tense=FutPerf\|VerbForm=Fin\|Voice=Act`, `Case=Gen\|Number=Plur\|POS=NUM`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=VERB\|Tense=Fut\|VerbForm=Part\|Voice=Act`, `Case=Dat\|Gender=Fem\|Mood=Gdv\|Number=Plur\|POS=VERB\|Tense=Fut\|VerbForm=Part\|Voice=Pass`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Perf\|VerbForm=Fin\|Voice=Pass`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=VERB\|Tense=Fut\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=AUX\|Tense=Fut\|VerbForm=Part`, `Gender=Masc\|Number=Sing\|POS=PROPN`, `Case=Abl\|Gender=Neut\|Number=Plur\|POS=VERB\|VerbForm=Part`, `Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=VERB\|VerbForm=Part`, `Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Abl\|Gender=Fem\|Number=Sing\|POS=VERB\|VerbForm=Part`, `Case=Gen\|Gender=Neut\|Number=Plur\|POS=VERB\|VerbForm=Part`, `Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=VERB\|VerbForm=Part`, `Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=VERB\|VerbForm=Part`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=VERB\|VerbForm=Part`, `Case=Abl\|Gender=Neut\|Number=Sing\|POS=AUX\|Tense=Pres\|VerbForm=Part`, `Case=Abl\|Gender=Fem\|Number=Plur\|POS=VERB\|VerbForm=Part`, `Case=Abl\|Gender=Masc\|Number=Plur\|POS=VERB\|VerbForm=Part`, `Gender=Masc\|Number=Plur\|POS=PROPN`, `Gender=Masc\|Number=Plur\|POS=ADJ`, `Gender=Fem\|Number=Sing\|POS=PROPN`, `Case=Abl\|Gender=Masc\|Number=Sing\|POS=VERB\|VerbForm=Part`, `Gender=Neut\|Number=Sing\|POS=PROPN`, 
`Gender=Neut\|Number=Plur\|POS=PROPN`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=VERB\|VerbForm=Part`, `Case=Acc\|Gender=Neut\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=VERB\|VerbForm=Part`, `Case=Voc\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Perf\|VerbForm=Part\|Voice=Act`, `Case=Voc\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Perf\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=AUX\|Tense=Perf\|VerbForm=Part`, `Case=Abl\|Gender=Neut\|Number=Sing\|POS=AUX\|Tense=Perf\|VerbForm=Part`, `Case=Abl\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Fut\|VerbForm=Part\|Voice=Pass`, `Gender=Masc\|Number=Sing\|POS=PRON`, `POS=ADV\|VerbForm=Part`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=AUX\|Tense=Perf\|VerbForm=Part`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=AUX\|Tense=Perf\|VerbForm=Part`, `Case=Abl\|Gender=Masc\|Number=Sing\|POS=NOUN\|VerbForm=Vnoun`, `Case=Dat\|Gender=Fem\|Number=Plur\|POS=VERB\|Tense=Perf\|VerbForm=Part\|Voice=Act`, `Case=Abl\|Gender=Fem\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=VERB\|VerbForm=Part\|Voice=Act`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=AUX\|Tense=Pres\|VerbForm=Part`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Fut\|VerbForm=Part\|Voice=Pass`, `Case=Acc\|Number=Plur\|POS=PROPN`, `Case=Abl\|Mood=Ger\|Number=Sing\|POS=VERB\|Tense=Fut\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Number=Sing\|POS=PRON`, `Number=Plur\|POS=NUM`, `Case=Gen\|Number=Sing\|POS=PROPN`, `Gender=Neut\|POS=NOUN`, `Case=Dat\|Number=Sing\|POS=PROPN`, `Case=Voc\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Voc\|Gender=Fem\|Number=Plur\|POS=VERB\|Tense=Perf\|VerbForm=Part\|Voice=Pass`, `Case=Abl\|Gender=Masc\|Number=Sing\|POS=ADV`, `Case=Gen\|Mood=Ger\|Number=Sing\|POS=VERB\|Tense=Fut\|VerbForm=Part\|Voice=Pass`, `Case=Voc\|Number=Plur\|POS=PRON\|Person=2`, `Case=Voc\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Voc\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Perf\|VerbForm=Part\|Voice=Pass`, `Mood=Sub\|Number=Sing\|POS=AUX\|Person=3\|Tense=Fut\|VerbForm=Fin`, `Case=Abl\|Number=Sing\|POS=PROPN`, `Case=Abl\|POS=VERB\|VerbForm=Sup`, `Case=Voc\|Gender=Neut\|Number=Sing\|POS=PROPN`, `Case=Voc\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Fut\|VerbForm=Part\|Voice=Act`, `Case=Voc\|Number=Sing\|POS=PROPN`, `Case=Acc\|Gender=Masc\|POS=VERB\|Tense=Perf\|VerbForm=Part\|Voice=Pass`, `Case=Acc\|Number=Plur\|POS=NOUN`, `Case=Loc\|POS=ADV`, `Case=Nom\|Number=Plur\|POS=DET`, `Case=Abl\|Gender=Masc\|Number=Sing\|POS=VERB\|VerbForm=Inf`, `Case=Abl\|Gender=Masc\|Number=Plur\|POS=VERB\|VerbForm=Part\|Voice=Act`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=VERB\|VerbForm=Part\|Voice=Act`, `Case=Abl\|Gender=Fem\|Number=Sing\|POS=VERB\|VerbForm=Part\|Voice=Act`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=VERB\|VerbForm=Part` |
| **`parser`** | `ROOT`, `acl`, `acl:relcl`, `advcl`, `advcl:abs`, `advcl:cmp`, `advcl:pred`, `advmod`, `advmod:emph`, `advmod:lmod`, `advmod:neg`, `advmod:tmod`, `amod`, `appos`, `aux`, `aux:pass`, `case`, `cc`, `ccomp`, `ccomp:reported`, `conj`, `conj:expl`, `cop`, `csubj`, `csubj:pass`, `dep`, `det`, `discourse`, `dislocated`, `dislocated:obj`, `fixed`, `flat`, `flat:foreign`, `flat:name`, `mark`, `nmod`, `nsubj`, `nsubj:outer`, `nsubj:pass`, `nummod`, `obj`, `obl`, `obl:agent`, `obl:arg`, `obl:lmod`, `obl:tmod`, `orphan`, `parataxis`, `punct`, `reparandum`, `vocative`, `xcomp` |
| **`ner`** | `LOC`, `NORP`, `PERSON` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `TAG_ACC` | 94.65 |
| `ENTS_F` | 84.90 |
| `ENTS_P` | 82.83 |
| `ENTS_R` | 87.09 |
| `TOK2VEC_LOSS` | 13685660.57 |
| `NER_LOSS` | 8967.36 |
| `POS_ACC` | 96.56 |
| `MORPH_ACC` | 91.34 |
| `LEMMA_ACC` | 94.02 |
| `DEP_UAS` | 83.18 |
| `DEP_LAS` | 77.76 |
| `SENTS_P` | 99.34 |
| `SENTS_R` | 99.50 |
| `SENTS_F` | 99.42 |
| `TAGGER_LOSS` | 953475.48 |
| `MORPHOLOGIZER_LOSS` | 1991955.61 |
| `TRAINABLE_LEMMATIZER_LOSS` | 789483.41 |
| `PARSER_LOSS` | 6304962.11 | |
tsss1/qwen0.5-trlfinal_vpn_domainsfinal-Q8_0-GGUF | tsss1 | "2025-03-06T14:31:41Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:tsss1/qwen0.5-trlfinal_vpn_domainsfinal",
"base_model:quantized:tsss1/qwen0.5-trlfinal_vpn_domainsfinal",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-03-06T14:31:36Z" | ---
base_model: tsss1/qwen0.5-trlfinal_vpn_domainsfinal
library_name: transformers
tags:
- llama-cpp
- gguf-my-repo
---
# tsss1/qwen0.5-trlfinal_vpn_domainsfinal-Q8_0-GGUF
This model was converted to GGUF format from [`tsss1/qwen0.5-trlfinal_vpn_domainsfinal`](https://huggingface.co/tsss1/qwen0.5-trlfinal_vpn_domainsfinal) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/tsss1/qwen0.5-trlfinal_vpn_domainsfinal) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on macOS and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo tsss1/qwen0.5-trlfinal_vpn_domainsfinal-Q8_0-GGUF --hf-file qwen0.5-trlfinal_vpn_domainsfinal-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo tsss1/qwen0.5-trlfinal_vpn_domainsfinal-Q8_0-GGUF --hf-file qwen0.5-trlfinal_vpn_domainsfinal-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo tsss1/qwen0.5-trlfinal_vpn_domainsfinal-Q8_0-GGUF --hf-file qwen0.5-trlfinal_vpn_domainsfinal-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo tsss1/qwen0.5-trlfinal_vpn_domainsfinal-Q8_0-GGUF --hf-file qwen0.5-trlfinal_vpn_domainsfinal-q8_0.gguf -c 2048
```
|
lesso12/9f737133-4ed6-4a3f-be56-ce9ba45335ce | lesso12 | "2025-02-12T15:37:51Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-Math-1.5B-Instruct",
"base_model:adapter:unsloth/Qwen2.5-Math-1.5B-Instruct",
"license:apache-2.0",
"region:us"
] | null | "2025-02-12T14:42:08Z" | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2.5-Math-1.5B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 9f737133-4ed6-4a3f-be56-ce9ba45335ce
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<br>
# 9f737133-4ed6-4a3f-be56-ce9ba45335ce
This model is a fine-tuned version of [unsloth/Qwen2.5-Math-1.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-Math-1.5B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.7333
## Model description
More information needed
## Intended uses & limitations
More information needed
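Since this repository contains a PEFT (LoRA) adapter rather than a full set of weights, it has to be loaded on top of the base model. A minimal sketch, assuming `peft` and `transformers` are installed (not an official snippet):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "unsloth/Qwen2.5-Math-1.5B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id)

# Attach the LoRA adapter from this repo to the base model
model = PeftModel.from_pretrained(base, "lesso12/9f737133-4ed6-4a3f-be56-ce9ba45335ce")

inputs = tokenizer("Solve for x: 3x + 5 = 20.", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```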
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000212
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: AdamW (bitsandbytes, `OptimizerNames.ADAMW_BNB`) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 50
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0000 | 1 | 9.3290 |
| 6.5572 | 0.0020 | 50 | 7.0186 |
| 5.0754 | 0.0040 | 100 | 5.4123 |
| 4.841 | 0.0060 | 150 | 4.8634 |
| 4.3361 | 0.0080 | 200 | 4.5451 |
| 3.7875 | 0.0100 | 250 | 4.2500 |
| 3.8409 | 0.0120 | 300 | 4.0224 |
| 3.7794 | 0.0140 | 350 | 3.8737 |
| 3.7333 | 0.0160 | 400 | 3.7864 |
| 3.827 | 0.0180 | 450 | 3.7402 |
| 2.9285 | 0.0200 | 500 | 3.7333 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
YUN967/gpt2-wikitext2 | YUN967 | "2023-05-31T08:22:26Z" | 165 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-05-31T07:34:57Z" | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: gpt2-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-wikitext2
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 6.1122
## Model description
More information needed
## Intended uses & limitations
More information needed
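As a minimal usage sketch: the checkpoint is a standard GPT-2 causal language model, so the text-generation pipeline applies.
```python
from transformers import pipeline

generator = pipeline("text-generation", model="YUN967/gpt2-wikitext2")
print(generator("The history of natural language processing", max_new_tokens=40)[0]["generated_text"])
```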
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 6.5572 | 1.0 | 2249 | 6.4714 |
| 6.19 | 2.0 | 4498 | 6.1999 |
| 6.0148 | 3.0 | 6747 | 6.1122 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
NishithP2004/bda_630de8bf_mistral-7b-v0.3-bnb-4bit | NishithP2004 | "2025-03-15T20:42:47Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/mistral-7b-v0.3-bnb-4bit",
"base_model:finetune:unsloth/mistral-7b-v0.3-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2025-03-15T20:42:40Z" | ---
base_model: unsloth/mistral-7b-v0.3-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** NishithP2004
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-v0.3-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
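A minimal loading sketch with 🤗 Transformers, assuming a CUDA GPU and the `bitsandbytes` package (the checkpoint is stored in bnb 4-bit format):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

repo = "NishithP2004/bda_630de8bf_mistral-7b-v0.3-bnb-4bit"
bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16)

tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, quantization_config=bnb, device_map="auto")
```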
|
nlightcho/stable-diffusion-v1-5 | nlightcho | "2023-03-02T16:08:29Z" | 29 | 0 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"arxiv:2207.12598",
"arxiv:2112.10752",
"arxiv:2103.00020",
"arxiv:2205.11487",
"arxiv:1910.09700",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2023-03-02T16:07:52Z" | ---
license: creativeml-openrail-m
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
inference: true
extra_gated_prompt: |-
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content
2. CompVis claims no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
Please read the full license carefully here: https://huggingface.co/spaces/CompVis/stable-diffusion-license
extra_gated_heading: Please read the LICENSE to access this model
---
# Stable Diffusion v1-5 Model Card
Stable Diffusion is a latent text-to-image diffusion model capable of generating photo-realistic images given any text input.
For more information about how Stable Diffusion functions, please have a look at [🤗's Stable Diffusion blog](https://huggingface.co/blog/stable_diffusion).
The **Stable-Diffusion-v1-5** checkpoint was initialized with the weights of the [Stable-Diffusion-v1-2](https://huggingface.co/CompVis/stable-diffusion-v1-2)
checkpoint and subsequently fine-tuned on 595k steps at resolution 512x512 on "laion-aesthetics v2 5+" and 10% dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598).
You can use this both with the [🧨Diffusers library](https://github.com/huggingface/diffusers) and the [RunwayML GitHub repository](https://github.com/runwayml/stable-diffusion).
### Diffusers
```py
from diffusers import StableDiffusionPipeline
import torch
model_id = "runwayml/stable-diffusion-v1-5"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")
prompt = "a photo of an astronaut riding a horse on mars"
image = pipe(prompt).images[0]
image.save("astronaut_rides_horse.png")
```
For more detailed instructions, use-cases and examples in JAX, follow the instructions [here](https://github.com/huggingface/diffusers#text-to-image-generation-with-stable-diffusion)
### Original GitHub Repository
1. Download the weights
- [v1-5-pruned-emaonly.ckpt](https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.ckpt) - 4.27GB, ema-only weight. uses less VRAM - suitable for inference
- [v1-5-pruned.ckpt](https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned.ckpt) - 7.7GB, ema+non-ema weights. uses more VRAM - suitable for fine-tuning
2. Follow instructions [here](https://github.com/runwayml/stable-diffusion).
## Model Details
- **Developed by:** Robin Rombach, Patrick Esser
- **Model type:** Diffusion-based text-to-image generation model
- **Language(s):** English
- **License:** [The CreativeML OpenRAIL M license](https://huggingface.co/spaces/CompVis/stable-diffusion-license) is an [Open RAIL M license](https://www.licenses.ai/blog/2022/8/18/naming-convention-of-responsible-ai-licenses), adapted from the work that [BigScience](https://bigscience.huggingface.co/) and [the RAIL Initiative](https://www.licenses.ai/) are jointly carrying in the area of responsible AI licensing. See also [the article about the BLOOM Open RAIL license](https://bigscience.huggingface.co/blog/the-bigscience-rail-license) on which our license is based.
- **Model Description:** This is a model that can be used to generate and modify images based on text prompts. It is a [Latent Diffusion Model](https://arxiv.org/abs/2112.10752) that uses a fixed, pretrained text encoder ([CLIP ViT-L/14](https://arxiv.org/abs/2103.00020)) as suggested in the [Imagen paper](https://arxiv.org/abs/2205.11487).
- **Resources for more information:** [GitHub Repository](https://github.com/CompVis/stable-diffusion), [Paper](https://arxiv.org/abs/2112.10752).
- **Cite as:**
@InProceedings{Rombach_2022_CVPR,
author = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn},
title = {High-Resolution Image Synthesis With Latent Diffusion Models},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2022},
pages = {10684-10695}
}
# Uses
## Direct Use
The model is intended for research purposes only. Possible research areas and
tasks include
- Safe deployment of models which have the potential to generate harmful content.
- Probing and understanding the limitations and biases of generative models.
- Generation of artworks and use in design and other artistic processes.
- Applications in educational or creative tools.
- Research on generative models.
Excluded uses are described below.
### Misuse, Malicious Use, and Out-of-Scope Use
_Note: This section is taken from the [DALLE-MINI model card](https://huggingface.co/dalle-mini/dalle-mini), but applies in the same way to Stable Diffusion v1_.
The model should not be used to intentionally create or disseminate images that create hostile or alienating environments for people. This includes generating images that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes.
#### Out-of-Scope Use
The model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model.
#### Misuse and Malicious Use
Using the model to generate content that is cruel to individuals is a misuse of this model. This includes, but is not limited to:
- Generating demeaning, dehumanizing, or otherwise harmful representations of people or their environments, cultures, religions, etc.
- Intentionally promoting or propagating discriminatory content or harmful stereotypes.
- Impersonating individuals without their consent.
- Sexual content without consent of the people who might see it.
- Mis- and disinformation
- Representations of egregious violence and gore
- Sharing of copyrighted or licensed material in violation of its terms of use.
- Sharing content that is an alteration of copyrighted or licensed material in violation of its terms of use.
## Limitations and Bias
### Limitations
- The model does not achieve perfect photorealism
- The model cannot render legible text
- The model does not perform well on more difficult tasks which involve compositionality, such as rendering an image corresponding to “A red cube on top of a blue sphere”
- Faces and people in general may not be generated properly.
- The model was trained mainly with English captions and will not work as well in other languages.
- The autoencoding part of the model is lossy
- The model was trained on a large-scale dataset
[LAION-5B](https://laion.ai/blog/laion-5b/) which contains adult material
and is not fit for product use without additional safety mechanisms and
considerations.
- No additional measures were used to deduplicate the dataset. As a result, we observe some degree of memorization for images that are duplicated in the training data.
The training data can be searched at [https://rom1504.github.io/clip-retrieval/](https://rom1504.github.io/clip-retrieval/) to possibly assist in the detection of memorized images.
### Bias
While the capabilities of image generation models are impressive, they can also reinforce or exacerbate social biases.
Stable Diffusion v1 was trained on subsets of [LAION-2B(en)](https://laion.ai/blog/laion-5b/),
which consists of images that are primarily limited to English descriptions.
Texts and images from communities and cultures that use other languages are likely to be insufficiently accounted for.
This affects the overall output of the model, as white and western cultures are often set as the default. Further, the
ability of the model to generate content with non-English prompts is significantly worse than with English-language prompts.
### Safety Module
The intended use of this model is with the [Safety Checker](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/safety_checker.py) in Diffusers.
This checker works by checking model outputs against known hard-coded NSFW concepts.
The concepts are intentionally hidden to reduce the likelihood of reverse-engineering this filter.
Specifically, the checker compares the class probability of harmful concepts in the embedding space of the `CLIPTextModel` *after generation* of the images.
The concepts are passed into the model with the generated image and compared to a hand-engineered weight for each NSFW concept.
## Training
**Training Data**
The model developers used the following dataset for training the model:
- LAION-2B (en) and subsets thereof (see next section)
**Training Procedure**
Stable Diffusion v1-5 is a latent diffusion model which combines an autoencoder with a diffusion model that is trained in the latent space of the autoencoder. During training,
- Images are encoded through an encoder, which turns images into latent representations. The autoencoder uses a relative downsampling factor of 8 and maps images of shape H x W x 3 to latents of shape H/f x W/f x 4 (see the short sketch after this list)
- Text prompts are encoded through a ViT-L/14 text-encoder.
- The non-pooled output of the text encoder is fed into the UNet backbone of the latent diffusion model via cross-attention.
- The loss is a reconstruction objective between the noise that was added to the latent and the prediction made by the UNet.
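As a quick illustration of the relative downsampling factor f = 8 described in the list above:
```python
# A 512x512 RGB image (H x W x 3) maps to a 64x64x4 latent (H/f x W/f x 4)
H, W, f = 512, 512, 8
image_shape = (H, W, 3)
latent_shape = (H // f, W // f, 4)
print(image_shape, "->", latent_shape)  # (512, 512, 3) -> (64, 64, 4)
```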
Currently six Stable Diffusion checkpoints are provided, which were trained as follows.
- [`stable-diffusion-v1-1`](https://huggingface.co/CompVis/stable-diffusion-v1-1): 237,000 steps at resolution `256x256` on [laion2B-en](https://huggingface.co/datasets/laion/laion2B-en).
194,000 steps at resolution `512x512` on [laion-high-resolution](https://huggingface.co/datasets/laion/laion-high-resolution) (170M examples from LAION-5B with resolution `>= 1024x1024`).
- [`stable-diffusion-v1-2`](https://huggingface.co/CompVis/stable-diffusion-v1-2): Resumed from `stable-diffusion-v1-1`.
515,000 steps at resolution `512x512` on "laion-improved-aesthetics" (a subset of laion2B-en,
filtered to images with an original size `>= 512x512`, estimated aesthetics score `> 5.0`, and an estimated watermark probability `< 0.5`. The watermark estimate is from the LAION-5B metadata, the aesthetics score is estimated using an [improved aesthetics estimator](https://github.com/christophschuhmann/improved-aesthetic-predictor)).
- [`stable-diffusion-v1-3`](https://huggingface.co/CompVis/stable-diffusion-v1-3): Resumed from `stable-diffusion-v1-2` - 195,000 steps at resolution `512x512` on "laion-improved-aesthetics" and 10 % dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598).
- [`stable-diffusion-v1-4`](https://huggingface.co/CompVis/stable-diffusion-v1-4) Resumed from `stable-diffusion-v1-2` - 225,000 steps at resolution `512x512` on "laion-aesthetics v2 5+" and 10 % dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598).
- [`stable-diffusion-v1-5`](https://huggingface.co/runwayml/stable-diffusion-v1-5) Resumed from `stable-diffusion-v1-2` - 595,000 steps at resolution `512x512` on "laion-aesthetics v2 5+" and 10 % dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598).
- [`stable-diffusion-inpainting`](https://huggingface.co/runwayml/stable-diffusion-inpainting) Resumed from `stable-diffusion-v1-5` - then 440,000 steps of inpainting training at resolution 512x512 on “laion-aesthetics v2 5+” and 10% dropping of the text-conditioning. For inpainting, the UNet has 5 additional input channels (4 for the encoded masked-image and 1 for the mask itself) whose weights were zero-initialized after restoring the non-inpainting checkpoint. During training, we generate synthetic masks and in 25% mask everything.
- **Hardware:** 32 x 8 x A100 GPUs
- **Optimizer:** AdamW
- **Gradient Accumulations**: 2
- **Batch:** 32 x 8 x 2 x 4 = 2048
- **Learning rate:** warmup to 0.0001 for 10,000 steps and then kept constant
## Evaluation Results
Evaluations with different classifier-free guidance scales (1.5, 2.0, 3.0, 4.0,
5.0, 6.0, 7.0, 8.0) and 50 PNDM/PLMS sampling
steps show the relative improvements of the checkpoints:

Evaluated using 50 PLMS steps and 10000 random prompts from the COCO2017 validation set, evaluated at 512x512 resolution. Not optimized for FID scores.
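Both evaluation knobs (the classifier-free guidance scale and the number of sampling steps) map directly onto pipeline arguments in 🧨 Diffusers; a short, self-contained sketch:
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# guidance_scale and num_inference_steps correspond to the evaluation axes above
image = pipe(
    "a photo of an astronaut riding a horse on mars",
    guidance_scale=7.5,
    num_inference_steps=50,
).images[0]
```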
## Environmental Impact
**Stable Diffusion v1** **Estimated Emissions**
Based on that information, we estimate the following CO2 emissions using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). The hardware, runtime, cloud provider, and compute region were utilized to estimate the carbon impact.
- **Hardware Type:** A100 PCIe 40GB
- **Hours used:** 150000
- **Cloud Provider:** AWS
- **Compute Region:** US-east
- **Carbon Emitted (Power consumption x Time x Carbon produced based on location of power grid):** 11250 kg CO2 eq.
## Citation
```bibtex
@InProceedings{Rombach_2022_CVPR,
author = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn},
title = {High-Resolution Image Synthesis With Latent Diffusion Models},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2022},
pages = {10684-10695}
}
```
*This model card was written by: Robin Rombach and Patrick Esser and is based on the [DALL-E Mini model card](https://huggingface.co/dalle-mini/dalle-mini).* |
Wei-Meng/autotrain-jeiqk-7jghf | Wei-Meng | "2024-02-19T11:28:24Z" | 7 | 0 | transformers | [
"transformers",
"safetensors",
"vit",
"image-classification",
"autotrain",
"dataset:autotrain-jeiqk-7jghf/autotrain-data",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2024-02-19T11:28:10Z" |
---
tags:
- autotrain
- image-classification
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
datasets:
- autotrain-jeiqk-7jghf/autotrain-data
---
# Model Trained Using AutoTrain
- Problem type: Image Classification
## Validation Metrics
- loss: 0.7337517738342285
- f1_macro: 0.5333333333333333
- f1_micro: 0.6666666666666666
- f1_weighted: 0.5333333333333333
- precision_macro: 0.4444444444444444
- precision_micro: 0.6666666666666666
- precision_weighted: 0.4444444444444444
- recall_macro: 0.6666666666666666
- recall_micro: 0.6666666666666666
- recall_weighted: 0.6666666666666666
- accuracy: 0.6666666666666666
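A minimal inference sketch with the 🤗 pipeline API (the label names come from the AutoTrain data and are stored in this repo's config):
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="Wei-Meng/autotrain-jeiqk-7jghf")
print(classifier("https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg"))
```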
|
mnemic/P14n03l3g4nt3b0n3XL-SDXL-LoRA | mnemic | "2024-06-17T22:45:28Z" | 0 | 0 | null | [
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0",
"license:gpl-3.0",
"region:us"
] | null | "2024-06-17T16:15:23Z" | ---
license: gpl-3.0
base_model: stabilityai/stable-diffusion-xl-base-1.0
trained_words: P14n03l3g4nt3b0n3
---
# P14n03l3g4nt3b0n3XL - SDXL - LoRA
[CivitAI Page](https://civitai.com/models/349112)
## Trigger Words
```P14n03l3g4nt3b0n3```

A beautiful ebony and ivory style.
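A minimal sketch for applying the LoRA with 🧨 Diffusers on the SDXL base model (assumes a CUDA GPU; depending on the file layout in this repo, `load_lora_weights` may also need a `weight_name=` argument):
```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("mnemic/P14n03l3g4nt3b0n3XL-SDXL-LoRA")

# Include the trigger word in the prompt
image = pipe("P14n03l3g4nt3b0n3, a grand piano in an elegant ballroom").images[0]
image.save("piano.png")
```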
|
mradermacher/Intellecta-GGUF | mradermacher | "2025-01-17T17:42:09Z" | 634 | 0 | transformers | [
"transformers",
"gguf",
"generated_from_trainer",
"en",
"dataset:fka/awesome-chatgpt-prompts",
"dataset:BAAI/Infinity-Instruct",
"dataset:allenai/WildChat-1M",
"dataset:lavita/ChatDoctor-HealthCareMagic-100k",
"dataset:zjunlp/Mol-Instructions",
"dataset:garage-bAInd/Open-Platypus",
"base_model:kssrikar4/Intellecta",
"base_model:quantized:kssrikar4/Intellecta",
"license:llama3.2",
"endpoints_compatible",
"region:us"
] | null | "2025-01-17T12:53:35Z" | ---
base_model: kssrikar4/Intellecta
datasets:
- fka/awesome-chatgpt-prompts
- BAAI/Infinity-Instruct
- allenai/WildChat-1M
- lavita/ChatDoctor-HealthCareMagic-100k
- zjunlp/Mol-Instructions
- garage-bAInd/Open-Platypus
language:
- en
library_name: transformers
license: llama3.2
quantized_by: mradermacher
tags:
- generated_from_trainer
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/kssrikar4/Intellecta
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Intellecta-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Intellecta-GGUF/resolve/main/Intellecta.Q2_K.gguf) | Q2_K | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/Intellecta-GGUF/resolve/main/Intellecta.Q3_K_S.gguf) | Q3_K_S | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/Intellecta-GGUF/resolve/main/Intellecta.Q3_K_M.gguf) | Q3_K_M | 0.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Intellecta-GGUF/resolve/main/Intellecta.Q3_K_L.gguf) | Q3_K_L | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/Intellecta-GGUF/resolve/main/Intellecta.IQ4_XS.gguf) | IQ4_XS | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/Intellecta-GGUF/resolve/main/Intellecta.Q4_K_S.gguf) | Q4_K_S | 0.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Intellecta-GGUF/resolve/main/Intellecta.Q4_K_M.gguf) | Q4_K_M | 0.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Intellecta-GGUF/resolve/main/Intellecta.Q5_K_S.gguf) | Q5_K_S | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/Intellecta-GGUF/resolve/main/Intellecta.Q5_K_M.gguf) | Q5_K_M | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/Intellecta-GGUF/resolve/main/Intellecta.Q6_K.gguf) | Q6_K | 1.1 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Intellecta-GGUF/resolve/main/Intellecta.Q8_0.gguf) | Q8_0 | 1.4 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Intellecta-GGUF/resolve/main/Intellecta.f16.gguf) | f16 | 2.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
sigridjineth/ko-embedding-v1-preview | sigridjineth | "2024-08-25T11:37:15Z" | 5 | 0 | null | [
"safetensors",
"new",
"custom_code",
"region:us"
] | null | "2024-08-25T05:27:17Z" | finetuned to korean with wikipedia dataset.
batch size 32768 using gc trainer.
base model is gte-multilingual-base. |
TanvirMungekar/MergedIntentPhi | TanvirMungekar | "2025-01-23T06:53:08Z" | 8 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"unsloth",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-01-23T06:39:19Z" | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
swtb/XLM-RoBERTa-Base-Conll2003-English-NER-Finetune-FP16-BinaryClass-WeightedLoss | swtb | "2024-06-01T21:59:47Z" | 106 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2024-06-01T21:59:10Z" | ---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: XLM-RoBERTa-Base-Conll2003-English-NER-Finetune-FP16-BinaryClass-WeightedLoss
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: test
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9526306589757035
- name: Recall
type: recall
value: 0.964943342776204
- name: F1
type: f1
value: 0.9587474711935965
- name: Accuracy
type: accuracy
value: 0.9901367502961128
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# XLM-RoBERTa-Base-Conll2003-English-NER-Finetune-FP16-BinaryClass-WeightedLoss
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1188
- Precision: 0.9526
- Recall: 0.9649
- F1: 0.9587
- Accuracy: 0.9901
## Model description
More information needed
## Intended uses & limitations
More information needed
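A minimal inference sketch with the 🤗 pipeline API (entity labels follow the scheme this fine-tune was trained with, as stored in the repo config):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="swtb/XLM-RoBERTa-Base-Conll2003-English-NER-Finetune-FP16-BinaryClass-WeightedLoss",
    aggregation_strategy="simple",
)
print(ner("Angela Merkel visited Paris with Siemens executives."))
```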
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2739 | 0.3333 | 1441 | 0.0632 | 0.9412 | 0.9373 | 0.9392 | 0.9863 |
| 0.0329 | 0.6667 | 2882 | 0.0572 | 0.9435 | 0.9347 | 0.9391 | 0.9865 |
| 0.024 | 1.0 | 4323 | 0.0679 | 0.9433 | 0.9536 | 0.9484 | 0.9882 |
| 0.0181 | 1.3333 | 5764 | 0.0652 | 0.9458 | 0.9618 | 0.9537 | 0.9897 |
| 0.0187 | 1.6667 | 7205 | 0.0625 | 0.9531 | 0.9492 | 0.9511 | 0.9895 |
| 0.0176 | 2.0 | 8646 | 0.0685 | 0.9488 | 0.9573 | 0.9530 | 0.9896 |
| 0.0108 | 2.3333 | 10087 | 0.0931 | 0.9470 | 0.9625 | 0.9547 | 0.9897 |
| 0.0117 | 2.6667 | 11528 | 0.0808 | 0.9489 | 0.9632 | 0.9560 | 0.9900 |
| 0.0107 | 3.0 | 12969 | 0.0672 | 0.9531 | 0.9602 | 0.9566 | 0.9908 |
| 0.0076 | 3.3333 | 14410 | 0.0973 | 0.9470 | 0.9587 | 0.9528 | 0.9897 |
| 0.0085 | 3.6667 | 15851 | 0.0741 | 0.9574 | 0.9549 | 0.9561 | 0.9906 |
| 0.0092 | 4.0 | 17292 | 0.0807 | 0.9492 | 0.9621 | 0.9556 | 0.9901 |
| 0.0049 | 4.3333 | 18733 | 0.0886 | 0.9527 | 0.9623 | 0.9575 | 0.9906 |
| 0.0058 | 4.6667 | 20174 | 0.0871 | 0.9516 | 0.9639 | 0.9577 | 0.9904 |
| 0.0047 | 5.0 | 21615 | 0.0928 | 0.9541 | 0.9610 | 0.9576 | 0.9903 |
| 0.0041 | 5.3333 | 23056 | 0.1145 | 0.9491 | 0.9667 | 0.9578 | 0.9899 |
| 0.0048 | 5.6667 | 24497 | 0.0854 | 0.9554 | 0.9623 | 0.9588 | 0.9907 |
| 0.0032 | 6.0 | 25938 | 0.1107 | 0.9488 | 0.9651 | 0.9569 | 0.9899 |
| 0.003 | 6.3333 | 27379 | 0.1038 | 0.9524 | 0.9674 | 0.9599 | 0.9907 |
| 0.0032 | 6.6667 | 28820 | 0.1038 | 0.9533 | 0.9651 | 0.9592 | 0.9904 |
| 0.0034 | 7.0 | 30261 | 0.1038 | 0.9534 | 0.9667 | 0.9600 | 0.9906 |
| 0.0025 | 7.3333 | 31702 | 0.1103 | 0.9528 | 0.9619 | 0.9574 | 0.9899 |
| 0.003 | 7.6667 | 33143 | 0.1177 | 0.9506 | 0.9644 | 0.9575 | 0.9899 |
| 0.0022 | 8.0 | 34584 | 0.1151 | 0.9511 | 0.9633 | 0.9572 | 0.9900 |
| 0.0016 | 8.3333 | 36025 | 0.1141 | 0.9528 | 0.9651 | 0.9589 | 0.9904 |
| 0.0025 | 8.6667 | 37466 | 0.1090 | 0.9550 | 0.9626 | 0.9588 | 0.9905 |
| 0.0024 | 9.0 | 38907 | 0.1115 | 0.9546 | 0.9653 | 0.9599 | 0.9906 |
| 0.002 | 9.3333 | 40348 | 0.1148 | 0.9536 | 0.9639 | 0.9587 | 0.9903 |
| 0.0014 | 9.6667 | 41789 | 0.1201 | 0.9522 | 0.9655 | 0.9588 | 0.9902 |
| 0.0015 | 10.0 | 43230 | 0.1188 | 0.9526 | 0.9649 | 0.9587 | 0.9901 |
### Framework versions
- Transformers 4.41.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
techme/gpt_finetuned-oee | techme | "2024-07-24T09:42:20Z" | 5 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2024-07-24T09:36:55Z" | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
GreenBitAI/Phi-3-mini-128k-instruct-layer-mix-bpw-2.2 | GreenBitAI | "2024-05-06T19:53:11Z" | 143 | 0 | transformers | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-04-24T07:48:04Z" | ---
license: apache-2.0
---
# GreenBit LLMs
These are GreenBitAI's pretrained **low-bit** LLMs, offering extreme compression while retaining strong performance.
Please refer to our [Github page](https://github.com/GreenBitAI/green-bit-llm) for the code to run the model and more information.
### zero-shot evaluation
| **Repository (Phi Family)** | **Avg Acc.** | **OpenBQ** | **ARC-E** | **Winogr.** | **HellaS.** | **ARC-C** | **PIQA** | **BoolQ** | **RACE** | **ANLI-R1** | **ANLI-R2** | **ANLI-R3** | **WiC** |
|:---------------------------------------------|:------------:|:-----------:|:-----------:|:-----------:|:-----------:|:-----------:|:-----------:|:---------:|:--------:|:-----------:|:-----------:|:-----------:|:-------:|
| `Phi-3-mini-128k-instruct-layer-mix-bpw-2.2` | 0.510 | 0.270 | 0.706 | 0.648 | 0.479 | 0.411 | 0.736 | 0.783 | 0.381 | 0.393 | 0.38 | 0.399 | 0.536 |
| `Phi-3-mini-128k-instruct-layer-mix-bpw-2.5` | 0.514 | 0.290 | 0.719 | 0.656 | 0.488 | 0.401 | 0.750 | 0.778 | 0.401 | 0.392 | 0.410 | 0.407 | 0.493 |
| `Phi-3-mini-128k-instruct-layer-mix-bpw-3.0` | 0.548 | 0.318 | 0.761 | 0.663 | 0.519 | 0.453 | 0.777 | 0.798 | 0.393 | 0.473 | 0.404 | 0.442 | 0.579 |
| `Phi-3-mini-128k-instruct-layer-mix-bpw-4.0` | 0.582 | 0.346 | 0.779 | 0.708 | 0.582 | 0.495 | 0.787 | 0.840 | 0.412 | 0.529 | 0.459 | 0.448 | 0.606 |
| `Phi-3-mini-128k-instruct ` | 0.586 | 0.342 | 0.785 | 0.731 | 0.596 | 0.512 | 0.782 | 0.851 | 0.401 | 0.547 | 0.464 | 0.432 | 0.594 |
### 5-shot evaluation
| **Repository (Phi Family)** | **Avg Acc.** | **OpenBQ** | **ARC-E** | **Winogr.** | **HellaS.** | **ARC-C** | **PIQA** | **BoolQ** | **RACE** | **ANLI-R1** | **ANLI-R2** | **ANLI-R3** | **WiC** |
|:---------------------------------------------|:------------:|:-----------:|:-----------:|:-----------:|:-----------:|:-----------:|:-----------:|:---------:|:--------:|:-----------:|:-----------:|:-----------:|:-------:|
| `Phi-3-mini-128k-instruct-layer-mix-bpw-2.2` | 0.534 | 0.302 | 0.738 | 0.659 | 0.487/0.636 | 0.438 | 0.744 | 0.793 | 0.408 | 0.421 | 0.404 | 0.439 | 0.583 |
| `Phi-3-mini-128k-instruct-layer-mix-bpw-2.5` | 0.543 | 0.310 | 0.771 | 0.671 | 0.501/0.657 | 0.441 | 0.763 | 0.799 | 0.405 | 0.453 | 0.427 | 0.443 | 0.534 |
| `Phi-3-mini-128k-instruct-layer-mix-bpw-3.0` | 0.563 | 0.346 | 0.796 | 0.687 | 0.528/0.694 | 0.500 | 0.782 | 0.809 | 0.410 | 0.473 | 0.394 | 0.474 | 0.565 |
| `Phi-3-mini-128k-instruct-layer-mix-bpw-4.0` | 0.602 | 0.374 | 0.817 | 0.725 | 0.598/0.768 | 0.542 | 0.766 | 0.864 | 0.428 | 0.523 | 0.456 | 0.497 | 0.658 |
| `Phi-3-mini-128k-instruct ` | 0.608 | 0.408 | 0.825 | 0.725 | 0.608/0.781 | 0.534 | 0.768 | 0.866 | | 0.538 | 0.483 | 0.515 | 0.627 |
|
LamaAldakhil/SL-CvT | LamaAldakhil | "2023-05-18T20:27:17Z" | 199 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"cvt",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2023-05-18T12:55:22Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- f1
- accuracy
model-index:
- name: SL-CvT
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: F1
type: f1
value: 0.9297928229609359
- name: Accuracy
type: accuracy
value: 0.9316640584246219
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SL-CvT
This model is a fine-tuned version of [microsoft/cvt-13](https://huggingface.co/microsoft/cvt-13) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3430
- F1: 0.9298
- Roc Auc: 0.9777
- Accuracy: 0.9317
## Model description
More information needed
## Intended uses & limitations
More information needed
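A minimal inference sketch (class names come from the image-folder dataset and are stored in the model config; `example.jpg` is a placeholder path):
```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

repo = "LamaAldakhil/SL-CvT"
processor = AutoImageProcessor.from_pretrained(repo)
model = AutoModelForImageClassification.from_pretrained(repo)

image = Image.open("example.jpg")  # placeholder input image
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```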
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| 1.2379 | 1.0 | 60 | 1.0716 | 0.6422 | 0.7323 | 0.7246 |
| 1.0186 | 2.0 | 120 | 0.8477 | 0.6425 | 0.7879 | 0.7293 |
| 0.9433 | 3.0 | 180 | 0.7473 | 0.7060 | 0.8454 | 0.7538 |
| 0.8644 | 4.0 | 240 | 0.6831 | 0.7188 | 0.8696 | 0.7663 |
| 0.7985 | 5.0 | 300 | 0.6420 | 0.7409 | 0.8943 | 0.7799 |
| 0.7322 | 6.0 | 360 | 0.5713 | 0.7886 | 0.9196 | 0.8101 |
| 0.725 | 7.0 | 420 | 0.5311 | 0.7989 | 0.9324 | 0.8190 |
| 0.6529 | 8.0 | 480 | 0.5246 | 0.7852 | 0.9404 | 0.8117 |
| 0.6224 | 9.0 | 540 | 0.4598 | 0.8282 | 0.9517 | 0.8440 |
| 0.6315 | 10.0 | 600 | 0.4363 | 0.8457 | 0.9585 | 0.8529 |
| 0.5651 | 11.0 | 660 | 0.4437 | 0.8323 | 0.9564 | 0.8503 |
| 0.574 | 12.0 | 720 | 0.4003 | 0.8531 | 0.9617 | 0.8638 |
| 0.5269 | 13.0 | 780 | 0.3901 | 0.8676 | 0.9671 | 0.8722 |
| 0.5138 | 14.0 | 840 | 0.3984 | 0.8607 | 0.9685 | 0.8732 |
| 0.4839 | 15.0 | 900 | 0.3763 | 0.8683 | 0.9701 | 0.8769 |
| 0.463 | 16.0 | 960 | 0.3398 | 0.8837 | 0.9718 | 0.8894 |
| 0.4767 | 17.0 | 1020 | 0.3293 | 0.8846 | 0.9738 | 0.8915 |
| 0.4985 | 18.0 | 1080 | 0.3350 | 0.8852 | 0.9763 | 0.8863 |
| 0.4657 | 19.0 | 1140 | 0.3369 | 0.8872 | 0.9746 | 0.8951 |
| 0.4514 | 20.0 | 1200 | 0.3213 | 0.8880 | 0.9750 | 0.8925 |
| 0.4207 | 21.0 | 1260 | 0.3175 | 0.8943 | 0.9771 | 0.8978 |
| 0.4522 | 22.0 | 1320 | 0.3229 | 0.8970 | 0.9767 | 0.8983 |
| 0.4328 | 23.0 | 1380 | 0.3121 | 0.8948 | 0.9791 | 0.8978 |
| 0.3942 | 24.0 | 1440 | 0.3111 | 0.8993 | 0.9765 | 0.9030 |
| 0.4414 | 25.0 | 1500 | 0.3062 | 0.9032 | 0.9763 | 0.9061 |
| 0.3608 | 26.0 | 1560 | 0.3099 | 0.8997 | 0.9787 | 0.9014 |
| 0.3729 | 27.0 | 1620 | 0.3050 | 0.9029 | 0.9783 | 0.9082 |
| 0.393 | 28.0 | 1680 | 0.2970 | 0.9090 | 0.9797 | 0.9108 |
| 0.402 | 29.0 | 1740 | 0.2986 | 0.9087 | 0.9793 | 0.9113 |
| 0.3697 | 30.0 | 1800 | 0.3384 | 0.8968 | 0.9769 | 0.9025 |
| 0.3502 | 31.0 | 1860 | 0.3035 | 0.9058 | 0.9789 | 0.9103 |
| 0.3653 | 32.0 | 1920 | 0.3127 | 0.9024 | 0.9788 | 0.9025 |
| 0.3898 | 33.0 | 1980 | 0.3222 | 0.9050 | 0.9778 | 0.9061 |
| 0.317 | 34.0 | 2040 | 0.3013 | 0.9124 | 0.9798 | 0.9139 |
| 0.3166 | 35.0 | 2100 | 0.3185 | 0.9095 | 0.9775 | 0.9134 |
| 0.3771 | 36.0 | 2160 | 0.3067 | 0.9049 | 0.9782 | 0.9066 |
| 0.3487 | 37.0 | 2220 | 0.2948 | 0.9118 | 0.9801 | 0.9134 |
| 0.3202 | 38.0 | 2280 | 0.2916 | 0.9168 | 0.9788 | 0.9186 |
| 0.3163 | 39.0 | 2340 | 0.3149 | 0.9141 | 0.9777 | 0.9155 |
| 0.3605 | 40.0 | 2400 | 0.2964 | 0.9192 | 0.9797 | 0.9207 |
| 0.3636 | 41.0 | 2460 | 0.3142 | 0.9111 | 0.9810 | 0.9134 |
| 0.3454 | 42.0 | 2520 | 0.3133 | 0.9111 | 0.9792 | 0.9113 |
| 0.3561 | 43.0 | 2580 | 0.3090 | 0.9073 | 0.9804 | 0.9077 |
| 0.3136 | 44.0 | 2640 | 0.3236 | 0.9144 | 0.9782 | 0.9176 |
| 0.3529 | 45.0 | 2700 | 0.3054 | 0.9175 | 0.9800 | 0.9202 |
| 0.2987 | 46.0 | 2760 | 0.2944 | 0.9222 | 0.9802 | 0.9233 |
| 0.2966 | 47.0 | 2820 | 0.3215 | 0.9201 | 0.9786 | 0.9233 |
| 0.3203 | 48.0 | 2880 | 0.3150 | 0.9219 | 0.9797 | 0.9244 |
| 0.2821 | 49.0 | 2940 | 0.3072 | 0.9273 | 0.9800 | 0.9291 |
| 0.2852 | 50.0 | 3000 | 0.3265 | 0.9155 | 0.9792 | 0.9176 |
| 0.3544 | 51.0 | 3060 | 0.3175 | 0.9150 | 0.9802 | 0.9150 |
| 0.3327 | 52.0 | 3120 | 0.3134 | 0.9222 | 0.9802 | 0.9244 |
| 0.2877 | 53.0 | 3180 | 0.3222 | 0.9154 | 0.9805 | 0.9165 |
| 0.3089 | 54.0 | 3240 | 0.3045 | 0.9248 | 0.9811 | 0.9259 |
| 0.2904 | 55.0 | 3300 | 0.3301 | 0.9175 | 0.9787 | 0.9186 |
| 0.2821 | 56.0 | 3360 | 0.3069 | 0.9206 | 0.9810 | 0.9218 |
| 0.321 | 57.0 | 3420 | 0.3209 | 0.9254 | 0.9800 | 0.9270 |
| 0.2995 | 58.0 | 3480 | 0.3281 | 0.9202 | 0.9802 | 0.9233 |
| 0.2683 | 59.0 | 3540 | 0.3263 | 0.9174 | 0.9802 | 0.9202 |
| 0.3021 | 60.0 | 3600 | 0.3484 | 0.9170 | 0.9788 | 0.9186 |
| 0.3262 | 61.0 | 3660 | 0.3270 | 0.9151 | 0.9807 | 0.9165 |
| 0.2329 | 62.0 | 3720 | 0.3280 | 0.9211 | 0.9807 | 0.9233 |
| 0.2935 | 63.0 | 3780 | 0.3296 | 0.9244 | 0.9807 | 0.9264 |
| 0.2856 | 64.0 | 3840 | 0.3323 | 0.9209 | 0.9811 | 0.9218 |
| 0.2829 | 65.0 | 3900 | 0.3390 | 0.9200 | 0.9802 | 0.9218 |
| 0.3044 | 66.0 | 3960 | 0.3324 | 0.9215 | 0.9799 | 0.9228 |
| 0.2767 | 67.0 | 4020 | 0.3496 | 0.9150 | 0.9778 | 0.9160 |
| 0.2936 | 68.0 | 4080 | 0.3378 | 0.9257 | 0.9790 | 0.9275 |
| 0.2884 | 69.0 | 4140 | 0.3493 | 0.9227 | 0.9790 | 0.9249 |
| 0.2906 | 70.0 | 4200 | 0.3408 | 0.9259 | 0.9794 | 0.9275 |
| 0.2542 | 71.0 | 4260 | 0.3559 | 0.9233 | 0.9769 | 0.9249 |
| 0.2557 | 72.0 | 4320 | 0.3481 | 0.9237 | 0.9779 | 0.9254 |
| 0.2266 | 73.0 | 4380 | 0.3518 | 0.9208 | 0.9781 | 0.9223 |
| 0.2771 | 74.0 | 4440 | 0.3544 | 0.9231 | 0.9776 | 0.9254 |
| 0.2747 | 75.0 | 4500 | 0.3469 | 0.9270 | 0.9780 | 0.9285 |
| 0.2443 | 76.0 | 4560 | 0.3513 | 0.9216 | 0.9767 | 0.9233 |
| 0.2859 | 77.0 | 4620 | 0.3456 | 0.9234 | 0.9771 | 0.9254 |
| 0.2677 | 78.0 | 4680 | 0.3474 | 0.9239 | 0.9780 | 0.9254 |
| 0.2492 | 79.0 | 4740 | 0.3513 | 0.9235 | 0.9778 | 0.9254 |
| 0.2532 | 80.0 | 4800 | 0.3524 | 0.9210 | 0.9773 | 0.9233 |
| 0.2646 | 81.0 | 4860 | 0.3529 | 0.9240 | 0.9784 | 0.9238 |
| 0.2842 | 82.0 | 4920 | 0.3433 | 0.9260 | 0.9777 | 0.9280 |
| 0.2872 | 83.0 | 4980 | 0.3584 | 0.9272 | 0.9771 | 0.9285 |
| 0.2678 | 84.0 | 5040 | 0.3430 | 0.9298 | 0.9777 | 0.9317 |
| 0.2705 | 85.0 | 5100 | 0.3534 | 0.9268 | 0.9777 | 0.9291 |
| 0.2605 | 86.0 | 5160 | 0.3574 | 0.9272 | 0.9777 | 0.9296 |
| 0.2572 | 87.0 | 5220 | 0.3426 | 0.9273 | 0.9781 | 0.9291 |
| 0.2646 | 88.0 | 5280 | 0.3472 | 0.9234 | 0.9789 | 0.9244 |
| 0.2831 | 89.0 | 5340 | 0.3433 | 0.9272 | 0.9779 | 0.9291 |
| 0.277 | 90.0 | 5400 | 0.3441 | 0.9263 | 0.9789 | 0.9280 |
| 0.2584 | 91.0 | 5460 | 0.3432 | 0.9236 | 0.9788 | 0.9249 |
| 0.2703 | 92.0 | 5520 | 0.3409 | 0.9248 | 0.9789 | 0.9259 |
| 0.2811 | 93.0 | 5580 | 0.3449 | 0.9215 | 0.9795 | 0.9228 |
| 0.2786 | 94.0 | 5640 | 0.3465 | 0.9260 | 0.9789 | 0.9280 |
| 0.267 | 95.0 | 5700 | 0.3472 | 0.9260 | 0.9791 | 0.9275 |
| 0.2695 | 96.0 | 5760 | 0.3500 | 0.9268 | 0.9786 | 0.9285 |
| 0.279 | 97.0 | 5820 | 0.3582 | 0.9249 | 0.9782 | 0.9270 |
| 0.2774 | 98.0 | 5880 | 0.3486 | 0.9251 | 0.9790 | 0.9270 |
| 0.2512 | 99.0 | 5940 | 0.3514 | 0.9287 | 0.9786 | 0.9306 |
| 0.2218 | 100.0 | 6000 | 0.3482 | 0.9269 | 0.9789 | 0.9285 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
ACIDE/User-VLM-10B-base | ACIDE | "2025-02-21T01:34:09Z" | 442 | 0 | transformers | [
"transformers",
"safetensors",
"paligemma",
"image-text-to-text",
"robotics",
"en",
"dataset:ACIDE/user-vlm-pt",
"base_model:google/paligemma2-10b-ft-docci-448",
"base_model:finetune:google/paligemma2-10b-ft-docci-448",
"license:mit",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | image-text-to-text | "2024-12-23T14:12:46Z" | ---
library_name: transformers
tags:
- robotics
license: mit
datasets:
- ACIDE/user-vlm-pt
language:
- en
base_model:
- google/paligemma2-10b-ft-docci-448
pipeline_tag: image-text-to-text
---
# User-VLM 360°

## Overview
**User-VLM 360°** is a series of personalized Vision-Language Models (VLMs) designed for social human-robot interactions. The model introduces **User-aware tuning**, addressing the **semantic gap** that arises from the misalignment between user queries and the observed scene as captured by a robot's camera. Unlike traditional instruction tuning, which introduces latency and reduces performance, **User-VLM 360°** enables **real-time, robust adaptation** in dynamic robotic environments by inherently aligning cross-modal user representations.
This model allows for **customization of open-weight VLMs** to produce **personalized responses** based on demographic attributes such as age, gender, emotion, and ethnicity while maintaining ethical and safety considerations.
## Training Details
**Base Model:** User-VLM 360° is built on **PaliGemma 2**, which consists of a **SigLIP vision encoder** and **Gemma 2 as the language model**.

### Fine-tuning Process:
1. **Base Model Tuning:**
- Tuned the **MLP layer** to provide **user and scene descriptions** over **1 epoch**.
2. **Instruction Model Tuning:**
- Instruction-tuned the **base model** using **personalized, user-specific Q&A datasets**.
- Used **Sparse Mixture of LoRA Experts (MoLE)** (3 LoRA modules, rank=16, alpha=32, with one expert chosen at a time) and a standalone **LoRA (rank=16, alpha=32)** over **2 epochs**.
3. **Bias Mitigation:**
- Applied **Direct Preference Optimization (DPO)** over **1 epoch** using **LoRA (rank=16, alpha=32)**.
## Model Usage
### Example Code:
```python
# The base model is not instruction-tuned and therefore is not suitable for use in a conversational mode.
from transformers import PaliGemmaProcessor, PaliGemmaForConditionalGeneration
import torch
model_id = "ACIDE/User-VLM-10B-base"
processor = PaliGemmaProcessor.from_pretrained(model_id)
model = PaliGemmaForConditionalGeneration.from_pretrained(model_id, torch_dtype=torch.bfloat16).to(device)
def generate_description(image, model, processor):
prompt = "<image> "
model_inputs = processor(text=prompt, images=image, return_tensors="pt").to(torch.bfloat16).to(model.device)
input_len = model_inputs["input_ids"].shape[-1]
with torch.inference_mode():
generation = model.generate(**model_inputs, max_new_tokens=100, do_sample=False)
generation = generation[0][input_len:]
decoded = processor.decode(generation, skip_special_tokens=True)
return decoded
# Example usage
from transformers.image_utils import load_image
url = "https://media.istockphoto.com/id/1282695693/photo/little-boy-sitting-on-chair-at-the-table.jpg"
image = load_image(url)
description = generate_description(image, model, processor)
print(description)
```
## Ethical Considerations & Limitations
- **Research-Only Use:** This model is intended strictly for **research purposes** and should not be deployed in real-world applications without further ethical validation.
- **Demographic Personalization:** While the model can adapt responses based on user attributes, **care must be taken to prevent bias and discrimination**.
- **No Liability:** The authors **do not accept any liability** regarding the use of this model. Responsibility for ethical and appropriate use remains with the users.
## Citation
If you use this model in your research, please cite the following papers:
```bibtex
@article{rahimi2025user,
title={User-VLM: LLM Contextualization with Multimodal Pre-trained User Models},
author={Rahimi, Hamed and Abrini, Mouad and Khoramshahi, Mahdi and Chetouani, Mohamed},
year={2025}
}
@article{rahimi2025uservlm360,
  title={User-VLM 360°: Personalized Vision Language Models with User-aware Tuning for Social Human Robot Interactions},
  author={Rahimi, Hamed and Bhaj, Adil and Abrini, Mouad and Khoramshahi, Mahdi and Ghogho, Mounir and Chetouani, Mohamed},
  year={2025}
}
```
## License
This model is licensed under the **MIT License**.
## Contact
For any questions or issues regarding the model, please open an issue on the repository or contact the maintainers directly. |
mradermacher/NOVA-3B-V4-GGUF | mradermacher | "2025-01-07T03:38:26Z" | 15 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:VAIBHAV22334455/NOVA-3B-V4",
"base_model:quantized:VAIBHAV22334455/NOVA-3B-V4",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-01-07T02:56:27Z" | ---
base_model: VAIBHAV22334455/NOVA-3B-V4
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/VAIBHAV22334455/NOVA-3B-V4
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
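For example, one of the quants from the table below can be run directly (a minimal sketch; it assumes a recent llama.cpp build with `llama-cli` on your PATH):
```bash
llama-cli --hf-repo mradermacher/NOVA-3B-V4-GGUF --hf-file NOVA-3B-V4.Q4_K_M.gguf -p "Hello,"
```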
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/NOVA-3B-V4-GGUF/resolve/main/NOVA-3B-V4.Q2_K.gguf) | Q2_K | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/NOVA-3B-V4-GGUF/resolve/main/NOVA-3B-V4.Q3_K_S.gguf) | Q3_K_S | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/NOVA-3B-V4-GGUF/resolve/main/NOVA-3B-V4.Q3_K_M.gguf) | Q3_K_M | 1.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/NOVA-3B-V4-GGUF/resolve/main/NOVA-3B-V4.Q3_K_L.gguf) | Q3_K_L | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/NOVA-3B-V4-GGUF/resolve/main/NOVA-3B-V4.IQ4_XS.gguf) | IQ4_XS | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/NOVA-3B-V4-GGUF/resolve/main/NOVA-3B-V4.Q4_K_S.gguf) | Q4_K_S | 1.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/NOVA-3B-V4-GGUF/resolve/main/NOVA-3B-V4.Q4_K_M.gguf) | Q4_K_M | 1.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/NOVA-3B-V4-GGUF/resolve/main/NOVA-3B-V4.Q5_K_S.gguf) | Q5_K_S | 1.7 | |
| [GGUF](https://huggingface.co/mradermacher/NOVA-3B-V4-GGUF/resolve/main/NOVA-3B-V4.Q5_K_M.gguf) | Q5_K_M | 1.7 | |
| [GGUF](https://huggingface.co/mradermacher/NOVA-3B-V4-GGUF/resolve/main/NOVA-3B-V4.Q6_K.gguf) | Q6_K | 1.9 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/NOVA-3B-V4-GGUF/resolve/main/NOVA-3B-V4.Q8_0.gguf) | Q8_0 | 2.5 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/NOVA-3B-V4-GGUF/resolve/main/NOVA-3B-V4.f16.gguf) | f16 | 4.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
aumosita/deepseek-moe-16b-base-Q5_K_M-GGUF | aumosita | "2025-02-02T04:21:27Z" | 40 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:deepseek-ai/deepseek-moe-16b-base",
"base_model:quantized:deepseek-ai/deepseek-moe-16b-base",
"license:other",
"endpoints_compatible",
"region:us"
] | null | "2025-02-02T04:20:29Z" | ---
license: other
license_name: deepseek
license_link: https://github.com/deepseek-ai/DeepSeek-MoE/blob/main/LICENSE-MODEL
base_model: deepseek-ai/deepseek-moe-16b-base
tags:
- llama-cpp
- gguf-my-repo
---
# aumosita/deepseek-moe-16b-base-Q5_K_M-GGUF
This model was converted to GGUF format from [`deepseek-ai/deepseek-moe-16b-base`](https://huggingface.co/deepseek-ai/deepseek-moe-16b-base) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/deepseek-ai/deepseek-moe-16b-base) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo aumosita/deepseek-moe-16b-base-Q5_K_M-GGUF --hf-file deepseek-moe-16b-base-q5_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo aumosita/deepseek-moe-16b-base-Q5_K_M-GGUF --hf-file deepseek-moe-16b-base-q5_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo aumosita/deepseek-moe-16b-base-Q5_K_M-GGUF --hf-file deepseek-moe-16b-base-q5_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo aumosita/deepseek-moe-16b-base-Q5_K_M-GGUF --hf-file deepseek-moe-16b-base-q5_k_m.gguf -c 2048
```
|
research-backup/bart-base-subjqa-vanilla-movies-qg | research-backup | "2022-12-04T10:07:42Z" | 107 | 0 | transformers | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"question generation",
"en",
"dataset:lmqg/qg_subjqa",
"arxiv:2210.03992",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2022-06-22T10:49:58Z" |
---
license: cc-by-4.0
metrics:
- bleu4
- meteor
- rouge-l
- bertscore
- moverscore
language: en
datasets:
- lmqg/qg_subjqa
pipeline_tag: text2text-generation
tags:
- question generation
widget:
- text: "generate question: <hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records."
example_title: "Question Generation Example 1"
- text: "generate question: Beyonce further expanded her acting career, starring as blues singer <hl> Etta James <hl> in the 2008 musical biopic, Cadillac Records."
example_title: "Question Generation Example 2"
- text: "generate question: Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, <hl> Cadillac Records <hl> ."
example_title: "Question Generation Example 3"
model-index:
- name: research-backup/bart-base-subjqa-vanilla-movies-qg
results:
- task:
name: Text2text Generation
type: text2text-generation
dataset:
name: lmqg/qg_subjqa
type: movies
args: movies
metrics:
- name: BLEU4 (Question Generation)
type: bleu4_question_generation
value: 0.0
- name: ROUGE-L (Question Generation)
type: rouge_l_question_generation
value: 20.32
- name: METEOR (Question Generation)
type: meteor_question_generation
value: 17.16
- name: BERTScore (Question Generation)
type: bertscore_question_generation
value: 91.41
- name: MoverScore (Question Generation)
type: moverscore_question_generation
value: 59.41
---
# Model Card of `research-backup/bart-base-subjqa-vanilla-movies-qg`
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) for the question generation task on the [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) (dataset_name: movies) dataset via [`lmqg`](https://github.com/asahi417/lm-question-generation).
### Overview
- **Language model:** [facebook/bart-base](https://huggingface.co/facebook/bart-base)
- **Language:** en
- **Training data:** [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) (movies)
- **Online Demo:** [https://autoqg.net/](https://autoqg.net/)
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
### Usage
- With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-)
```python
from lmqg import TransformersQG
# initialize model
model = TransformersQG(language="en", model="research-backup/bart-base-subjqa-vanilla-movies-qg")
# model prediction
questions = model.generate_q(list_context="William Turner was an English painter who specialised in watercolour landscapes", list_answer="William Turner")
```
- With `transformers`
```python
from transformers import pipeline
pipe = pipeline("text2text-generation", "research-backup/bart-base-subjqa-vanilla-movies-qg")
output = pipe("generate question: <hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.")
```
## Evaluation
- ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/research-backup/bart-base-subjqa-vanilla-movies-qg/raw/main/eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_subjqa.movies.json)
| | Score | Type | Dataset |
|:-----------|--------:|:-------|:-----------------------------------------------------------------|
| BERTScore | 91.41 | movies | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) |
| Bleu_1 | 11.04 | movies | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) |
| Bleu_2 | 6.37 | movies | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) |
| Bleu_3 | 1.36 | movies | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) |
| Bleu_4 | 0 | movies | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) |
| METEOR | 17.16 | movies | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) |
| MoverScore | 59.41 | movies | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) |
| ROUGE_L | 20.32 | movies | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) |
## Training hyperparameters
The following hyperparameters were used during fine-tuning:
- dataset_path: lmqg/qg_subjqa
- dataset_name: movies
- input_types: ['paragraph_answer']
- output_types: ['question']
- prefix_types: ['qg']
- model: facebook/bart-base
- max_length: 512
- max_length_output: 32
- epoch: 1
- batch: 8
- lr: 5e-05
- fp16: False
- random_seed: 1
- gradient_accumulation_steps: 16
- label_smoothing: 0.15
The full configuration can be found at [fine-tuning config file](https://huggingface.co/research-backup/bart-base-subjqa-vanilla-movies-qg/raw/main/trainer_config.json).
## Citation
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
```
|
Aarifkhan/3d | Aarifkhan | "2024-04-14T14:04:08Z" | 4 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"NSFW",
"lora",
"base_model:UnfilteredAI/NSFW-GEN-ANIME",
"base_model:adapter:UnfilteredAI/NSFW-GEN-ANIME",
"license:apache-2.0",
"region:us"
] | text-to-image | "2024-04-14T13:59:24Z" | ---
license: apache-2.0
tags:
- text-to-image
- NSFW
- lora
- diffusers
base_model: UnfilteredAI/NSFW-GEN-ANIME
instance_prompt: 3d style, 3d, 3d render, anime
--- |
Rajashreee/nasa-document-classifier | Rajashreee | "2024-02-23T16:41:59Z" | 89 | 0 | transformers | [
"transformers",
"pytorch",
"xlnet",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-02-23T14:31:34Z" | ---
pipeline_tag: text-classification
--- |
hzx405416956/wav2vec2-base-finetuned-daps | hzx405416956 | "2024-02-13T09:16:28Z" | 146 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"base_model:facebook/wav2vec2-base",
"base_model:finetune:facebook/wav2vec2-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | audio-classification | "2024-02-13T08:14:12Z" | ---
license: apache-2.0
base_model: facebook/wav2vec2-base
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-finetuned-daps
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-finetuned-daps
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.0242
- eval_accuracy: 1.0
- eval_runtime: 39.2251
- eval_samples_per_second: 0.51
- eval_steps_per_second: 0.51
- epoch: 1.51
- step: 68
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
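Expressed as `transformers` `TrainingArguments`, the list above corresponds roughly to the following (a hedged sketch; `output_dir` is hypothetical and the Adam betas/epsilon are library defaults):
```python
from transformers import TrainingArguments

# Sketch of the listed hyperparameters; not the exact training script.
training_args = TrainingArguments(
    output_dir="wav2vec2-base-finetuned-daps",  # hypothetical
    learning_rate=3e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    seed=42,
    gradient_accumulation_steps=4,  # total train batch size 4
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=5,
)
```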
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.1
|
RichardErkhov/yujiepan_-_QwQ-preview-tiny-random-gguf | RichardErkhov | "2025-02-09T21:53:55Z" | 12 | 0 | null | [
"gguf",
"region:us"
] | null | "2025-02-09T21:53:21Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
QwQ-preview-tiny-random - GGUF
- Model creator: https://huggingface.co/yujiepan/
- Original model: https://huggingface.co/yujiepan/QwQ-preview-tiny-random/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [QwQ-preview-tiny-random.Q2_K.gguf](https://huggingface.co/RichardErkhov/yujiepan_-_QwQ-preview-tiny-random-gguf/blob/main/QwQ-preview-tiny-random.Q2_K.gguf) | Q2_K | 0.01GB |
| [QwQ-preview-tiny-random.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/yujiepan_-_QwQ-preview-tiny-random-gguf/blob/main/QwQ-preview-tiny-random.IQ3_XS.gguf) | IQ3_XS | 0.01GB |
| [QwQ-preview-tiny-random.IQ3_S.gguf](https://huggingface.co/RichardErkhov/yujiepan_-_QwQ-preview-tiny-random-gguf/blob/main/QwQ-preview-tiny-random.IQ3_S.gguf) | IQ3_S | 0.01GB |
| [QwQ-preview-tiny-random.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/yujiepan_-_QwQ-preview-tiny-random-gguf/blob/main/QwQ-preview-tiny-random.Q3_K_S.gguf) | Q3_K_S | 0.01GB |
| [QwQ-preview-tiny-random.IQ3_M.gguf](https://huggingface.co/RichardErkhov/yujiepan_-_QwQ-preview-tiny-random-gguf/blob/main/QwQ-preview-tiny-random.IQ3_M.gguf) | IQ3_M | 0.01GB |
| [QwQ-preview-tiny-random.Q3_K.gguf](https://huggingface.co/RichardErkhov/yujiepan_-_QwQ-preview-tiny-random-gguf/blob/main/QwQ-preview-tiny-random.Q3_K.gguf) | Q3_K | 0.01GB |
| [QwQ-preview-tiny-random.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/yujiepan_-_QwQ-preview-tiny-random-gguf/blob/main/QwQ-preview-tiny-random.Q3_K_M.gguf) | Q3_K_M | 0.01GB |
| [QwQ-preview-tiny-random.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/yujiepan_-_QwQ-preview-tiny-random-gguf/blob/main/QwQ-preview-tiny-random.Q3_K_L.gguf) | Q3_K_L | 0.01GB |
| [QwQ-preview-tiny-random.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/yujiepan_-_QwQ-preview-tiny-random-gguf/blob/main/QwQ-preview-tiny-random.IQ4_XS.gguf) | IQ4_XS | 0.01GB |
| [QwQ-preview-tiny-random.Q4_0.gguf](https://huggingface.co/RichardErkhov/yujiepan_-_QwQ-preview-tiny-random-gguf/blob/main/QwQ-preview-tiny-random.Q4_0.gguf) | Q4_0 | 0.01GB |
| [QwQ-preview-tiny-random.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/yujiepan_-_QwQ-preview-tiny-random-gguf/blob/main/QwQ-preview-tiny-random.IQ4_NL.gguf) | IQ4_NL | 0.01GB |
| [QwQ-preview-tiny-random.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/yujiepan_-_QwQ-preview-tiny-random-gguf/blob/main/QwQ-preview-tiny-random.Q4_K_S.gguf) | Q4_K_S | 0.01GB |
| [QwQ-preview-tiny-random.Q4_K.gguf](https://huggingface.co/RichardErkhov/yujiepan_-_QwQ-preview-tiny-random-gguf/blob/main/QwQ-preview-tiny-random.Q4_K.gguf) | Q4_K | 0.01GB |
| [QwQ-preview-tiny-random.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/yujiepan_-_QwQ-preview-tiny-random-gguf/blob/main/QwQ-preview-tiny-random.Q4_K_M.gguf) | Q4_K_M | 0.01GB |
| [QwQ-preview-tiny-random.Q4_1.gguf](https://huggingface.co/RichardErkhov/yujiepan_-_QwQ-preview-tiny-random-gguf/blob/main/QwQ-preview-tiny-random.Q4_1.gguf) | Q4_1 | 0.01GB |
| [QwQ-preview-tiny-random.Q5_0.gguf](https://huggingface.co/RichardErkhov/yujiepan_-_QwQ-preview-tiny-random-gguf/blob/main/QwQ-preview-tiny-random.Q5_0.gguf) | Q5_0 | 0.01GB |
| [QwQ-preview-tiny-random.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/yujiepan_-_QwQ-preview-tiny-random-gguf/blob/main/QwQ-preview-tiny-random.Q5_K_S.gguf) | Q5_K_S | 0.01GB |
| [QwQ-preview-tiny-random.Q5_K.gguf](https://huggingface.co/RichardErkhov/yujiepan_-_QwQ-preview-tiny-random-gguf/blob/main/QwQ-preview-tiny-random.Q5_K.gguf) | Q5_K | 0.01GB |
| [QwQ-preview-tiny-random.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/yujiepan_-_QwQ-preview-tiny-random-gguf/blob/main/QwQ-preview-tiny-random.Q5_K_M.gguf) | Q5_K_M | 0.01GB |
| [QwQ-preview-tiny-random.Q5_1.gguf](https://huggingface.co/RichardErkhov/yujiepan_-_QwQ-preview-tiny-random-gguf/blob/main/QwQ-preview-tiny-random.Q5_1.gguf) | Q5_1 | 0.01GB |
| [QwQ-preview-tiny-random.Q6_K.gguf](https://huggingface.co/RichardErkhov/yujiepan_-_QwQ-preview-tiny-random-gguf/blob/main/QwQ-preview-tiny-random.Q6_K.gguf) | Q6_K | 0.01GB |
| [QwQ-preview-tiny-random.Q8_0.gguf](https://huggingface.co/RichardErkhov/yujiepan_-_QwQ-preview-tiny-random-gguf/blob/main/QwQ-preview-tiny-random.Q8_0.gguf) | Q8_0 | 0.01GB |
Original model description:
---
library_name: transformers
pipeline_tag: text-generation
inference: true
widget:
- text: Hello!
example_title: Hello world
group: Python
---
This model is for debugging. It is randomly initialized with the config from [Qwen/QwQ-32B-Preview](https://huggingface.co/Qwen/QwQ-32B-Preview) but is of smaller size.
Code:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import transformers
import torch
import os
from huggingface_hub import create_repo, upload_folder
import accelerate
model_id = 'Qwen/QwQ-32B-Preview'
save_path = '/tmp/yujiepan/QwQ-preview-tiny-random'
repo_id = 'yujiepan/QwQ-preview-tiny-random'
os.system(f'rm -rf {save_path}')
config = transformers.AutoConfig.from_pretrained(
model_id,
trust_remote_code=True,
)
config._name_or_path = model_id
config.hidden_size = 8
config.intermediate_size = 16
config.num_key_value_heads = 1
config.num_attention_heads = 2
config.num_hidden_layers = 2
config.max_window_layers = 1
model = transformers.AutoModelForCausalLM.from_config(
config,
trust_remote_code=True,
)
model.generation_config = transformers.GenerationConfig.from_pretrained(
model_id)
model = model.to(torch.bfloat16)
transformers.set_seed(42)
num_params = 0
with torch.no_grad():
    for name, p in sorted(model.named_parameters()):
        print(name, p.shape)
        torch.nn.init.uniform_(p, -0.5, 0.5)
        num_params += p.numel()
print("Total number of parameters:", num_params)
model.save_pretrained(save_path)
tokenizer = transformers.AutoTokenizer.from_pretrained(
model_id,
trust_remote_code=True,
)
tokenizer.save_pretrained(save_path)
os.system(f'ls -alh {save_path}')
create_repo(repo_id, exist_ok=True)
upload_folder(repo_id=repo_id, folder_path=save_path)
def try_example(model, tokenizer):
    prompt = "How many r in strawberry."
    messages = [
        {"role": "system", "content": "You are a helpful and harmless assistant. You should think step-by-step."},
        {"role": "user", "content": prompt},
    ]
    text = tokenizer.apply_chat_template(
        messages,
        tokenize=False,
        add_generation_prompt=True,
    )
    model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
    generated_ids = model.generate(
        **model_inputs,
        max_new_tokens=32,
    )
    generated_ids = [
        output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
    ]
    response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
    print(response)

try_example(model, tokenizer)
```
|
saishf/Fimbulvetr-Kuro-Lotus-10.7B-GGUF | saishf | "2024-03-20T16:37:55Z" | 3,523 | 15 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"base_model:Sao10K/Fimbulvetr-10.7B-v1",
"base_model:merge:Sao10K/Fimbulvetr-10.7B-v1",
"base_model:saishf/Kuro-Lotus-10.7B",
"base_model:merge:saishf/Kuro-Lotus-10.7B",
"endpoints_compatible",
"region:us"
] | null | "2024-02-13T04:45:38Z" | ---
base_model:
- Sao10K/Fimbulvetr-10.7B-v1
- saishf/Kuro-Lotus-10.7B
library_name: transformers
tags:
- mergekit
- merge
---
# **This repo is broken. Use https://huggingface.co/Bakanayatsu/Fimbulvetr-Kuro-Lotus-10.7B-GGUF-imatrix**
---------------------------------------
GGUFs' for https://huggingface.co/saishf/Fimbulvetr-Kuro-Lotus-10.7B
This model is a merge of my personal favourite models; I couldn't decide between them, so why not have both? Without MoE, cause gpu poor :3
In my own tests it gives Kuro-Lotus-like results without requiring a highly detailed character card, and it stays coherent when roping up to 8K context.
I personally use the "Universal Light" preset in SillyTavern; with "alpaca" the results can be short, but they are longer with "alpaca roleplay".
The "Universal Light" preset can be extremely creative but sometimes likes to act for the user with some cards; for those I like just the "default", but any preset seems to work!
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [Sao10K/Fimbulvetr-10.7B-v1](https://huggingface.co/Sao10K/Fimbulvetr-10.7B-v1)
* [saishf/Kuro-Lotus-10.7B](https://huggingface.co/saishf/Kuro-Lotus-10.7B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: saishf/Kuro-Lotus-10.7B
layer_range: [0, 48]
- model: Sao10K/Fimbulvetr-10.7B-v1
layer_range: [0, 48]
merge_method: slerp
base_model: saishf/Kuro-Lotus-10.7B
parameters:
t:
- filter: self_attn
value: [0.6, 0.7, 0.8, 0.9, 1]
- filter: mlp
value: [0.4, 0.3, 0.2, 0.1, 0]
- value: 0.5
dtype: bfloat16
```
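A merge like this can be reproduced with the mergekit CLI (a sketch; the config filename and output path are hypothetical):
```bash
mergekit-yaml fimbulvetr-kuro-lotus.yml ./Fimbulvetr-Kuro-Lotus-10.7B --cuda
```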
|
bigmorning/whisper_syl_cv12_pad_lob100_low__0045 | bigmorning | "2023-08-25T18:22:57Z" | 59 | 0 | transformers | [
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2023-08-25T18:22:49Z" | ---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_syl_cv12_pad_lob100_low__0045
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_syl_cv12_pad_lob100_low__0045
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0920
- Train Accuracy: 0.0358
- Train Wermet: 0.0221
- Validation Loss: 0.6645
- Validation Accuracy: 0.0231
- Validation Wermet: 0.2530
- Epoch: 44
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
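The optimizer line above corresponds to the TensorFlow `AdamWeightDecay` helper in `transformers`; a hedged sketch of its construction from the listed values:
```python
from transformers import AdamWeightDecay  # requires the TensorFlow backend

optimizer = AdamWeightDecay(
    learning_rate=1e-05,
    weight_decay_rate=0.01,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-07,
)
```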
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 5.2930 | 0.0113 | 2.0658 | 3.9415 | 0.0117 | 0.9401 | 0 |
| 4.6215 | 0.0121 | 0.8917 | 3.7803 | 0.0120 | 0.9294 | 1 |
| 4.4086 | 0.0128 | 0.8403 | 3.6070 | 0.0124 | 0.9223 | 2 |
| 4.1842 | 0.0135 | 0.8337 | 3.4291 | 0.0128 | 0.8867 | 3 |
| 3.9981 | 0.0141 | 0.8182 | 3.3251 | 0.0131 | 0.8750 | 4 |
| 3.8531 | 0.0145 | 0.8058 | 3.2385 | 0.0133 | 0.8699 | 5 |
| 3.7345 | 0.0149 | 0.7925 | 3.1751 | 0.0134 | 0.8665 | 6 |
| 3.6307 | 0.0152 | 0.7851 | 3.1031 | 0.0136 | 0.8507 | 7 |
| 3.5437 | 0.0155 | 0.7717 | 3.0752 | 0.0138 | 0.8286 | 8 |
| 3.4649 | 0.0157 | 0.7651 | 3.0334 | 0.0139 | 0.8417 | 9 |
| 3.3926 | 0.0159 | 0.7531 | 3.0022 | 0.0139 | 0.8413 | 10 |
| 3.3262 | 0.0162 | 0.7462 | 2.9669 | 0.0140 | 0.8264 | 11 |
| 3.2625 | 0.0164 | 0.7367 | 2.9342 | 0.0141 | 0.8520 | 12 |
| 3.1979 | 0.0166 | 0.7231 | 2.9046 | 0.0144 | 0.8196 | 13 |
| 3.1319 | 0.0169 | 0.7133 | 2.8607 | 0.0145 | 0.8026 | 14 |
| 3.0616 | 0.0172 | 0.7007 | 2.8165 | 0.0146 | 0.7788 | 15 |
| 2.9792 | 0.0176 | 0.6816 | 2.7552 | 0.0149 | 0.7643 | 16 |
| 2.8905 | 0.0180 | 0.6641 | 2.6788 | 0.0151 | 0.7473 | 17 |
| 2.7749 | 0.0186 | 0.6424 | 2.5824 | 0.0155 | 0.7241 | 18 |
| 2.6263 | 0.0193 | 0.6159 | 2.4206 | 0.0161 | 0.7047 | 19 |
| 2.4352 | 0.0203 | 0.5829 | 2.2230 | 0.0168 | 0.6500 | 20 |
| 2.1941 | 0.0216 | 0.5411 | 2.0349 | 0.0175 | 0.5980 | 21 |
| 1.9184 | 0.0231 | 0.4922 | 1.7850 | 0.0184 | 0.5659 | 22 |
| 1.6174 | 0.0249 | 0.4371 | 1.5664 | 0.0192 | 0.5081 | 23 |
| 1.3542 | 0.0265 | 0.3851 | 1.3992 | 0.0199 | 0.4690 | 24 |
| 1.1499 | 0.0278 | 0.3408 | 1.2512 | 0.0205 | 0.4299 | 25 |
| 0.9878 | 0.0288 | 0.3029 | 1.1479 | 0.0209 | 0.4013 | 26 |
| 0.8600 | 0.0297 | 0.2735 | 1.0527 | 0.0213 | 0.3755 | 27 |
| 0.7516 | 0.0305 | 0.2441 | 0.9803 | 0.0216 | 0.3570 | 28 |
| 0.6626 | 0.0311 | 0.2197 | 0.9314 | 0.0219 | 0.3416 | 29 |
| 0.5863 | 0.0316 | 0.1993 | 0.8730 | 0.0221 | 0.3238 | 30 |
| 0.5187 | 0.0321 | 0.1775 | 0.8357 | 0.0223 | 0.3136 | 31 |
| 0.4608 | 0.0326 | 0.1610 | 0.8059 | 0.0224 | 0.3033 | 32 |
| 0.4087 | 0.0330 | 0.1467 | 0.7746 | 0.0226 | 0.2949 | 33 |
| 0.3642 | 0.0334 | 0.1298 | 0.7476 | 0.0227 | 0.2847 | 34 |
| 0.3221 | 0.0337 | 0.1168 | 0.7330 | 0.0228 | 0.2802 | 35 |
| 0.2837 | 0.0340 | 0.1030 | 0.7093 | 0.0229 | 0.2728 | 36 |
| 0.2509 | 0.0343 | 0.0882 | 0.6941 | 0.0229 | 0.2687 | 37 |
| 0.2209 | 0.0346 | 0.0747 | 0.6892 | 0.0230 | 0.2656 | 38 |
| 0.1934 | 0.0349 | 0.0670 | 0.6824 | 0.0230 | 0.2630 | 39 |
| 0.1688 | 0.0351 | 0.0542 | 0.6773 | 0.0230 | 0.2625 | 40 |
| 0.1469 | 0.0353 | 0.0429 | 0.6700 | 0.0231 | 0.2633 | 41 |
| 0.1268 | 0.0355 | 0.0365 | 0.6680 | 0.0231 | 0.2578 | 42 |
| 0.1086 | 0.0357 | 0.0284 | 0.6643 | 0.0231 | 0.2540 | 43 |
| 0.0920 | 0.0358 | 0.0221 | 0.6645 | 0.0231 | 0.2530 | 44 |
### Framework versions
- Transformers 4.33.0.dev0
- TensorFlow 2.13.0
- Tokenizers 0.13.3
|
Hedayat-Abrishami/ppo-CartPole-v1 | Hedayat-Abrishami | "2023-07-12T23:58:20Z" | 0 | 0 | null | [
"tensorboard",
"CartPole-v1",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] | reinforcement-learning | "2023-07-12T23:51:42Z" | ---
tags:
- CartPole-v1
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 223.00 +/- 113.45
name: mean_reward
verified: false
---
# PPO Agent Playing CartPole-v1
This is a trained model of a PPO agent playing CartPole-v1.
# Hyperparameters
```python
{'exp_name': 'Name'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'CartPole-v1'
'total_timesteps': 50000
'learning_rate': 0.00025
'num_envs': 4
'num_steps': 128
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'Hedayat-Abrishami/ppo-CartPole-v1'
'batch_size': 512
'minibatch_size': 128}
```
|
Aleksandra/herbert-base-cased-finetuned-squad | Aleksandra | "2022-01-20T13:14:11Z" | 14 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] | question-answering | "2022-03-02T23:29:04Z" | ---
license: cc-by-4.0
tags:
- generated_from_trainer
model-index:
- name: herbert-base-cased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# herbert-base-cased-finetuned-squad
This model is a fine-tuned version of [allegro/herbert-base-cased](https://huggingface.co/allegro/herbert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2071
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 233 | 1.2474 |
| No log | 2.0 | 466 | 1.1951 |
| 1.3459 | 3.0 | 699 | 1.2071 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
NAACL2022/spider-nq-ctx-encoder | NAACL2022 | "2022-07-09T19:20:32Z" | 4 | 4 | transformers | [
"transformers",
"pytorch",
"dpr",
"arxiv:2112.07708",
"endpoints_compatible",
"region:us"
] | null | "2022-07-09T18:59:17Z" | # Spider-NQ: Context Encoder
This is the context encoder of the model fine-tuned on Natural Questions (and initialized from Spider) discussed in our paper [Learning to Retrieve Passages without Supervision](https://arxiv.org/abs/2112.07708).
## Usage
We used weight sharing for the query encoder and passage encoder, so the same model should be applied for both.
**Note!** We format the passages similarly to DPR, i.e. the title and the text are separated by a `[SEP]` token, but token
type IDs are all zeros.
An example usage:
```python
from transformers import AutoTokenizer, DPRContextEncoder
tokenizer = AutoTokenizer.from_pretrained("NAACL2022/spider-nq-ctx-encoder")
model = DPRContextEncoder.from_pretrained("NAACL2022/spider-nq-ctx-encoder")
title = "Sauron"
context = "Sauron is the title character and main antagonist of J. R. R. Tolkien's \"The Lord of the Rings\"."
input_dict = tokenizer(title, context, return_tensors="pt")
del input_dict["token_type_ids"]
outputs = model(**input_dict)
```
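The dense passage embedding can then be read from the encoder output (assuming the standard `DPRContextEncoderOutput` in `transformers`):
```python
# Passage embedding used for retrieval; shape (batch_size, hidden_size).
embedding = outputs.pooler_output
```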
|
hgnoi/REy9e8snbPH30wml | hgnoi | "2024-05-25T15:06:40Z" | 77 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-05-25T15:04:10Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
munzurul/banglaASR | munzurul | "2025-03-05T16:46:25Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2025-03-05T16:46:22Z" | ---
license: apache-2.0
---
|
SenhorDasMoscas/acho-classification-15-01-2025 | SenhorDasMoscas | "2025-01-15T14:54:48Z" | 38 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2025-01-15T14:53:48Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
VivekChauhan06/Straw-Hat-Coding-Assistant-Llama-3.1-8B-Instruct-4bit-DPO | VivekChauhan06 | "2024-08-28T07:41:37Z" | 76 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"dpo",
"conversational",
"en",
"base_model:VivekChauhan06/Straw-Hat-Coding-Assistant-Llama-3.1-8B-Instruct",
"base_model:quantized:VivekChauhan06/Straw-Hat-Coding-Assistant-Llama-3.1-8B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2024-08-28T01:41:17Z" | ---
base_model: VivekChauhan06/Straw-Hat-Coding-Assistant-Llama-3.1-8B-Instruct
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- dpo
---
# Uploaded model
- **Developed by:** VivekChauhan06
- **License:** apache-2.0
- **Finetuned from model :** VivekChauhan06/Straw-Hat-Coding-Assistant-Llama-3.1-8B-Instruct
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
daniel40/73699bad-993c-454d-8a58-92dbb6f8c426 | daniel40 | "2025-02-07T08:00:25Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"gpt_neox",
"axolotl",
"generated_from_trainer",
"base_model:EleutherAI/pythia-70m-deduped",
"base_model:adapter:EleutherAI/pythia-70m-deduped",
"license:apache-2.0",
"region:us"
] | null | "2025-02-07T07:52:53Z" | ---
library_name: peft
license: apache-2.0
base_model: EleutherAI/pythia-70m-deduped
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 73699bad-993c-454d-8a58-92dbb6f8c426
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
# 73699bad-993c-454d-8a58-92dbb6f8c426
This model is a fine-tuned version of [EleutherAI/pythia-70m-deduped](https://huggingface.co/EleutherAI/pythia-70m-deduped) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9956
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
artificialguybr/pomological-watercolor-redmond-lora-for-sd-xl | artificialguybr | "2024-01-04T02:05:01Z" | 73 | 6 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"watercolor",
"style",
"styles",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:other",
"region:us"
] | text-to-image | "2024-01-04T02:04:59Z" | ---
license: other
license_name: bespoke-lora-trained-license
license_link: https://multimodal.art/civitai-licenses?allowNoCredit=True&allowCommercialUse=RentCivit&allowDerivatives=True&allowDifferentLicense=False
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
- watercolor
- style
- styles
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: Pomological Watercolor
widget:
- text: 'illustrative drawing of a ANATOMY OF A butterfly, anatomy, pomological watercolor '
output:
url: >-
5146231.jpeg
- text: 'illustrative drawing of a ANATOMY OF A human hand, half, pomological watercolor '
output:
url: >-
5146230.jpeg
- text: 'illustrative drawing of a ANATOMY OF A CAT HEAD, anatomy, pomological watercolor '
output:
url: >-
5146237.jpeg
- text: 'illustrative drawing of a ANATOMY OF A OWL, half, pomological watercolor '
output:
url: >-
5146232.jpeg
- text: 'illustrative drawing of a ANATOMY OF A human hand, half, pomological watercolor '
output:
url: >-
5146234.jpeg
- text: 'illustrative drawing of a ANATOMY OF A Apple, half, pomological watercolor '
output:
url: >-
5146235.jpeg
- text: 'illustrative drawing of a ANATOMY OF A GRAPE, half, pomological watercolor '
output:
url: >-
5146233.jpeg
- text: 'illustrative drawing of a ANATOMY OF A GRAPE, half, pomological watercolor '
output:
url: >-
5146236.jpeg
---
# Pomological Watercolor Redmond Lora for SD XL
<Gallery />
## Model description
**Pomological Watercolor.Redmond is here!**

I'm grateful for the GPU time from **Redmond.AI** that allowed me to finish this LORA!

Want to test and have access to all my AI stuff? Check my [website](https://artificialguy.com/)!

This is a **Pomological Watercolor** LORA fine-tuned on **SD XL 1.0**.

Test all my LORAs [here](https://huggingface.co/spaces/artificialguybr/artificialguybr-demo-lora) for free and unlimited. Thanks, HF, for the Inference API!

The LORA has a high capacity to generate Pomological Watercolor images in a wide variety of themes. **It's a versatile LORA.**

I recommend generating at 1024x1024.

You can use "detailed", "minimalist", "colorful", or "black and white" as tags to control the results.

**The tag for the model: Pomological Watercolor**

I really hope you like the LORA and use it.

If you like the model and think it's worth it, you can make a donation to my [Patreon](https://www.patreon.com/user?u=81570187) or [Ko-fi](https://ko-fi.com/jvkape).

Follow me on Twitter to be the first to know about new models:

[https://twitter.com/artificialguybr/](https://twitter.com/artificialguybr/)
## Trigger words
You should use `Pomological Watercolor` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/artificialguybr/pomological-watercolor-redmond-lora-for-sd-xl/tree/main) them in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('artificialguybr/pomological-watercolor-redmond-lora-for-sd-xl', weight_name='PomologicalWatercolorRedmond.safetensors')
image = pipeline('illustrative drawing of a ANATOMY OF A GRAPE, half, pomological watercolor ').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
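For instance, the LoRA can be fused into the base weights at a chosen strength (a sketch; the `lora_scale` value here is illustrative):
```py
# Optionally bake the LoRA into the pipeline at reduced strength.
pipeline.fuse_lora(lora_scale=0.8)
```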
|
lesso10/815f39e6-1129-4412-874e-173748f8415f | lesso10 | "2025-03-03T12:01:57Z" | 10 | 0 | peft | [
"peft",
"safetensors",
"gpt_neo",
"axolotl",
"generated_from_trainer",
"base_model:EleutherAI/gpt-neo-125m",
"base_model:adapter:EleutherAI/gpt-neo-125m",
"license:mit",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-03-01T15:22:32Z" | ---
library_name: peft
license: mit
base_model: EleutherAI/gpt-neo-125m
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 815f39e6-1129-4412-874e-173748f8415f
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<br>
# 815f39e6-1129-4412-874e-173748f8415f
This model is a fine-tuned version of [EleutherAI/gpt-neo-125m](https://huggingface.co/EleutherAI/gpt-neo-125m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0728
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 100
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 50
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0002 | 1 | 3.8583 |
| 14.9012 | 0.0116 | 50 | 3.6963 |
| 13.7278 | 0.0233 | 100 | 3.3803 |
| 12.7416 | 0.0349 | 150 | 3.2353 |
| 12.8001 | 0.0466 | 200 | 3.1732 |
| 12.5463 | 0.0582 | 250 | 3.1284 |
| 13.1433 | 0.0698 | 300 | 3.1031 |
| 11.5117 | 0.0815 | 350 | 3.0864 |
| 13.3379 | 0.0931 | 400 | 3.0767 |
| 12.4363 | 0.1047 | 450 | 3.0752 |
| 12.2813 | 0.1164 | 500 | 3.0728 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
AmirH98/dqn-SpaceInvadersNoFrameskip-v4 | AmirH98 | "2023-09-25T18:42:43Z" | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2023-09-25T18:42:01Z" | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 635.00 +/- 194.37
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga AmirH98 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga AmirH98 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga AmirH98
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
Andie-Elle-Full-X/Andie-Elle.viral.video.on.social.media.x.twitter.now | Andie-Elle-Full-X | "2025-02-19T19:16:21Z" | 0 | 0 | null | [
"region:us"
] | null | "2025-02-19T19:16:02Z" | <a href="https://mswds.xyz/full-video/?v=Andie-Elle" rel="nofollow">🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐖𝐚𝐭𝐜𝐡 𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨)</a>
<a href="https://mswds.xyz/full-video/?v=Andie-Elle" rel="nofollow">🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐅𝐮𝐥𝐥 Viral 𝐯𝐢𝐝𝐞𝐨 𝐋𝐢𝐧𝐤 )</a>
<a href="https://mswds.xyz/full-video/?v=Andie-Elle"><img src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif" alt="fsgd" /></a>
|
jonathoncodd/ddpm-butterflies-128 | jonathoncodd | "2024-03-09T00:21:44Z" | 2 | 0 | diffusers | [
"diffusers",
"tensorboard",
"safetensors",
"arxiv:1910.09700",
"diffusers:DDPMPipeline",
"region:us"
] | null | "2024-03-07T16:21:20Z" | ---
library_name: diffusers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
rtracey/test-trainer | rtracey | "2024-02-05T07:43:49Z" | 6 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-02-05T07:43:14Z" | ---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: test-trainer
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test-trainer
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2
- Datasets 2.16.1
- Tokenizers 0.15.1
|
damgomz/ft_1_5e6_x8 | damgomz | "2024-07-13T14:32:11Z" | 9 | 0 | transformers | [
"transformers",
"safetensors",
"albert",
"text-classification",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-06-17T15:29:52Z" | ---
language: en
tags:
- text-classification
pipeline_tag: text-classification
widget:
- text: GEPS Techno is the pioneer of hybridization of renewable energies at sea.
We imagine, design and commercialize innovative off-grid systems that aim to generate
power at sea, stabilize and collect data. The success of our low power platforms
WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity
platform.
---
## Environmental Impact (CODE CARBON DEFAULT)
| Metric | Value |
|--------------------------|---------------------------------|
| Duration (in seconds) | 103747.15308523178 |
| Emissions (Co2eq in kg) | 0.0627790291657781 |
| CPU power (W) | 42.5 |
| GPU power (W) | [No GPU] |
| RAM power (W) | 3.75 |
| CPU energy (kWh) | 1.2247907124118664 |
| GPU energy (kWh) | [No GPU] |
| RAM energy (kWh) | 0.1080690817703804 |
| Consumed energy (kWh) | 1.3328597941822504 |
| Country name | Switzerland |
| Cloud provider | nan |
| Cloud region | nan |
| CPU count | 2 |
| CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz |
| GPU count | nan |
| GPU model | nan |
## Environmental Impact (for one core)
| Metric | Value |
|--------------------------|---------------------------------|
| CPU energy (kWh) | 0.1997132696890712 |
| Emissions (Co2eq in kg) | 0.040634301625049114 |
## Note
12 juillet 2024
## My Config
| Config | Value |
|--------------------------|-----------------|
| checkpoint | damgomz/fp_bs32_lr1e4_x8 |
| model_name | ft_1_5e6_x8 |
| sequence_length | 400 |
| num_epoch | 6 |
| learning_rate | 5e-06 |
| batch_size | 1 |
| weight_decay | 0.0 |
| warm_up_prop | 0.0 |
| drop_out_prob | 0.1 |
| packing_length | 100 |
| train_test_split | 0.2 |
| num_steps | 29328 |
## Training and Testing steps
Epoch | Train Loss | Test Loss | F-beta Score
---|---|---|---
| 0 | 0.000000 | 0.724807 | 0.165280 |
| 1 | 0.253592 | 0.219997 | 0.930048 |
| 2 | 0.156297 | 0.206511 | 0.931749 |
| 3 | 0.095176 | 0.246122 | 0.921916 |
| 4 | 0.045039 | 0.315395 | 0.922730 |
| 5 | 0.021005 | 0.378669 | 0.913637 |
| 6 | 0.010949 | 0.400137 | 0.917790 |
|
bdsqlsz/filter_nude | bdsqlsz | "2024-09-28T10:48:30Z" | 19 | 6 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:bdsqlsz/stable-diffusion-xl-anime-V5",
"base_model:adapter:bdsqlsz/stable-diffusion-xl-anime-V5",
"license:cc-by-nc-sa-4.0",
"region:us"
] | text-to-image | "2024-09-28T10:47:26Z" | ---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: '-'
output:
url: images/GTcSJ9XakAAAEaQ.jpg
- text: '-'
output:
url: images/GTcQOZLagAAwHQR.jpg
- text: '-'
output:
url: images/GTcJZnragAE0ssC.jpg
- text: '-'
output:
url: images/GTcInJfagAIIjV9.jpg
base_model: bdsqlsz/stable-diffusion-xl-anime-V5
instance_prompt: null
license: cc-by-nc-sa-4.0
---
# filter_nude
<Gallery />
## Download model
Weights for this model are available in Safetensors format.
[Download](/bdsqlsz/filter_nude/tree/main) them in the Files & versions tab.
|
TheBloke/orca_mini_v2_13b-GPTQ | TheBloke | "2023-08-21T01:47:14Z" | 14 | 19 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"dataset:psmathur/orca_minis_uncensored_dataset",
"arxiv:2306.02707",
"arxiv:2302.13971",
"arxiv:2304.12244",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"4-bit",
"gptq",
"region:us"
] | text-generation | "2023-07-09T10:07:58Z" | ---
datasets:
- psmathur/orca_minis_uncensored_dataset
inference: false
language:
- en
library_name: transformers
license: other
model_type: llama
pipeline_tag: text-generation
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Pankaj Mathur's Orca Mini v2 13B GPTQ
These files are GPTQ model files for [Pankaj Mathur's Orca Mini v2 13B](https://huggingface.co/psmathur/orca_mini_v2_13b).
Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.
These models were quantised using hardware kindly provided by [Latitude.sh](https://www.latitude.sh/accelerate).
## Repositories available
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/orca_mini_v2_13b-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/orca_mini_v2_13b-GGML)
* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/psmathur/orca_mini_v2_13b)
## Prompt template: orca_mini
```
### System:
You are an AI assistant that follows instruction extremely well. Help as much as you can.
### User:
{prompt}
### Input:
{input}
### Response:
```
## Provided files
Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements.
Each separate quant is in a different branch. See below for instructions on fetching from different branches.
| Branch | Bits | Group Size | Act Order (desc_act) | File Size | ExLlama Compatible? | Made With | Description |
| ------ | ---- | ---------- | -------------------- | --------- | ------------------- | --------- | ----------- |
| main | 4 | 128 | False | 7.45 GB | True | GPTQ-for-LLaMa | Most compatible option. Good inference speed in AutoGPTQ and GPTQ-for-LLaMa. Lower inference quality than other options. |
| gptq-4bit-32g-actorder_True | 4 | 32 | True | 8.00 GB | True | AutoGPTQ | 4-bit, with Act Order and group size. 32g gives highest possible inference quality, with maximum VRAM usage. Poor AutoGPTQ CUDA speed. |
| gptq-4bit-64g-actorder_True | 4 | 64 | True | 7.51 GB | True | AutoGPTQ | 4-bit, with Act Order and group size. 64g uses less VRAM than 32g, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
| gptq-4bit-128g-actorder_True | 4 | 128 | True | 7.26 GB | True | AutoGPTQ | 4-bit, with Act Order and group size. 128g uses even less VRAM, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
| gptq-8bit--1g-actorder_True | 8 | None | True | 13.36 GB | False | AutoGPTQ | 8-bit, with Act Order. No group size, to lower VRAM requirements and to improve AutoGPTQ speed. |
| gptq-8bit-128g-actorder_False | 8 | 128 | False | 13.65 GB | False | AutoGPTQ | 8-bit, with group size 128g for higher inference quality and without Act Order to improve AutoGPTQ speed. |
## How to download from branches
- In text-generation-webui, you can add `:branch` to the end of the download name, eg `TheBloke/orca_mini_v2_13b-GPTQ:gptq-4bit-32g-actorder_True`
- With Git, you can clone a branch with:
```
git clone --branch gptq-4bit-32g-actorder_True https://huggingface.co/TheBloke/orca_mini_v2_13b-GPTQ`
```
- In Python Transformers code, the branch is the `revision` parameter; see below.
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/orca_mini_v2_13b-GPTQ`.
- To download from a specific branch, enter for example `TheBloke/orca_mini_v2_13b-GPTQ:gptq-4bit-32g-actorder_True`
- see Provided Files above for the list of branches for each option.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done"
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `orca_mini_v2_13b-GPTQ`
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
* Note that you do not need to set GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
9. Once you're ready, click the **Text Generation tab** and enter a prompt to get started!
## How to use this GPTQ model from Python code
First make sure you have [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ) installed:
`GITHUB_ACTIONS=true pip install auto-gptq`
Then try the following example code:
```python
from transformers import AutoTokenizer, pipeline, logging
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig
model_name_or_path = "TheBloke/orca_mini_v2_13b-GPTQ"
model_basename = "orca_mini_v2_13b-GPTQ-4bit-128g.no-act.order"
use_triton = False
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
model_basename=model_basename
use_safetensors=True,
trust_remote_code=True,
device="cuda:0",
use_triton=use_triton,
quantize_config=None)
"""
To download from a specific branch, use the revision parameter, as in this example:
model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
revision="gptq-4bit-32g-actorder_True",
model_basename=model_basename,
use_safetensors=True,
trust_remote_code=True,
device="cuda:0",
quantize_config=None)
"""
prompt = "Tell me about AI"
prompt_template=f'''### System:
You are an AI assistant that follows instruction extremely well. Help as much as you can.
### User:
{prompt}
### Input:
{input}
### Response:
'''
print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, max_new_tokens=512)
print(tokenizer.decode(output[0]))
# Inference can also be done using transformers' pipeline
# Prevent printing spurious transformers error when using pipeline with AutoGPTQ
logging.set_verbosity(logging.CRITICAL)
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
temperature=0.7,
top_p=0.95,
repetition_penalty=1.15
)
print(pipe(prompt_template)[0]['generated_text'])
```
## Compatibility
The files provided will work with AutoGPTQ (CUDA and Triton modes), GPTQ-for-LLaMa (only CUDA has been tested), and Occ4m's GPTQ-for-LLaMa fork.
ExLlama works with Llama models in 4-bit. Please see the Provided Files table above for per-file compatibility.
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Sam, theTransient, Jonathan Leane, Steven Wood, webtim, Johann-Peter Hartmann, Geoffrey Montalvo, Gabriel Tamborski, Willem Michiel, John Villwock, Derek Yates, Mesiah Bishop, Eugene Pentland, Pieter, Chadd, Stephen Murray, Daniel P. Andersen, terasurfer, Brandon Frisco, Thomas Belote, Sid, Nathan LeClaire, Magnesian, Alps Aficionado, Stanislav Ovsiannikov, Alex, Joseph William Delisle, Nikolai Manek, Michael Davis, Junyu Yang, K, J, Spencer Kim, Stefan Sabev, Olusegun Samson, transmissions 11, Michael Levine, Cory Kujawski, Rainer Wilmers, zynix, Kalila, Luke @flexchar, Ajan Kanaga, Mandus, vamX, Ai Maven, Mano Prime, Matthew Berman, subjectnull, Vitor Caleffi, Clay Pascal, biorpg, alfie_i, 阿明, Jeffrey Morgan, ya boyyy, Raymond Fosdick, knownsqashed, Olakabola, Leonard Tan, ReadyPlayerEmma, Enrico Ros, Dave, Talal Aujan, Illia Dulskyi, Sean Connelly, senxiiz, Artur Olbinski, Elle, Raven Klaugh, Fen Risland, Deep Realms, Imad Khwaja, Fred von Graf, Will Dee, usrbinkat, SuperWojo, Alexandros Triantafyllidis, Swaroop Kallakuri, Dan Guido, John Detwiler, Pedro Madruga, Iucharbius, Viktor Bowallius, Asp the Wyvern, Edmond Seymore, Trenton Dambrowitz, Space Cruiser, Spiking Neurons AB, Pyrater, LangChain4j, Tony Hughes, Kacper Wikieł, Rishabh Srivastava, David Ziegler, Luke Pendergrass, Andrey, Gabriel Puliatti, Lone Striker, Sebastain Graf, Pierre Kircher, Randy H, NimbleBox.ai, Vadim, danny, Deo Leter
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: Pankaj Mathur's Orca Mini v2 13B
# orca_mini_v2_13b
An **Uncensored** LLaMA-13b model in collaboration with [Eric Hartford](https://huggingface.co/ehartford). trained on explain tuned datasets, created using Instructions and Input from WizardLM, Alpaca & Dolly-V2 datasets and applying Orca Research Paper dataset construction approaches.
Please note this model has *better code generation capabilities* compare to our original orca_mini_13b which was trained on base OpenLLaMA-13b model and which has the [empty spaces issues & found not good for code generation]((https://github.com/openlm-research/open_llama#update-06072023)).
**P.S. I am #opentowork, if you can help, please reach out to me at www.linkedin.com/in/pankajam**
# Evaluation
I evaluated orca_mini_v2_13b on a wide range of tasks using [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) from EleutherAI.
Here are the results on metrics used by [HuggingFaceH4 Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
||||
|:------:|:-------------:|:---------:|
|**Task**|**Value**|**Stderr**|
|*arc_challenge*|0.5478|0.0145|
|*hellaswag*|0.7023|0.0040|
|*mmlu*|0.4969|0.035|
|*truthfulqa_mc*|0.44|0.0158|
|*Total Average*|0.54675|0.0114|
# Dataset
We used uncensored script on top of the previous explain tuned datasets we build which are [WizardLM dataset ~70K](https://github.com/nlpxucan/WizardLM), [Alpaca dataset ~52K](https://crfm.stanford.edu/2023/03/13/alpaca.html) & [Dolly-V2 dataset ~15K](https://github.com/databrickslabs/dolly) created using approaches from [Orca Research Paper](https://arxiv.org/abs/2306.02707).
We leverage all of the 15 system instructions provided in Orca Research Paper. to generate custom datasets, in contrast to vanilla instruction tuning approaches used by original datasets.
This helps student model aka this model to learn ***thought*** process from teacher model, which is ChatGPT (gpt-3.5-turbo-0301 version).
Please see below example usage how the **System** prompt is added before each **instruction**.
# Training
The training configurations are provided in the table below.
The training takes on 4x A100(80G) GPUs and lasts for around 21 Hours for cost of $210 (~$10 for Spot Instance) by using [Azure Standard_NC96ads_A100_v4](https://learn.microsoft.com/en-us/azure/virtual-machines/nc-a100-v4-series#supported-features).
We used DeepSpeed with fully sharded data parallelism, also know as [ZeRO stage 3](https://engineering.fb.com/2021/07/15/open-source/fsdp/) by writing our own fine tunning scripts plus leveraging some of the model training code provided by amazing [FastChat](https://github.com/lm-sys/FastChat)
Here are some of params used during training:
|||
|:-------------:|:-------------:|
|*batch_size*|48|
|*train_micro_batch_size_per_gpu*|3|
|*gradient_accumulation_steps*|4|
|*Learning rate*|2e-5|
|*Max length*|2048|
|*Epochs*|3|
|*Optimizer*|AdamW|
# Example Usage
Here is prompt format for [Oobabooga Text generation UI ](https://github.com/oobabooga/text-generation-webui)
```
### System:
{system}
### User:
{instruction}
### Input:
{input}
### Response:
```
Here is sample example:
```
### System:
You are an AI assistant that follows instruction extremely well. Help as much as you can.
### User:
Tell me how to break into my own car
### Input:
### Response:
Breaking into your own car requires certain skills and tools. Here are the basic steps:
1. Find a ^^^^^^^^^^^^^
2. Unlock the car by using the ^^^^^^^^^^^^^.
3. Use a ^^^^^^^^^^^^^.
4. Once the ^^^^^^^^^^^^^.
5. If the ^^^^^^^^^^^^^.
```
Below shows a code example on how to use this model
```python
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer
# Hugging Face model_path
model_path = 'psmathur/orca_mini_v2_13b'
tokenizer = LlamaTokenizer.from_pretrained(model_path)
model = LlamaForCausalLM.from_pretrained(
model_path, torch_dtype=torch.float16, device_map='auto',
)
#generate text function
def generate_text(system, instruction, input=None):
if input:
prompt = f"### System:\n{system}\n\n### User:\n{instruction}\n\n### Input:\n{input}\n\n### Response:\n"
else:
prompt = f"### System:\n{system}\n\n### User:\n{instruction}\n\n### Response:\n"
tokens = tokenizer.encode(prompt)
tokens = torch.LongTensor(tokens).unsqueeze(0)
tokens = tokens.to('cuda')
instance = {'input_ids': tokens,'top_p': 1.0, 'temperature':0.7, 'generate_len': 1024, 'top_k': 50}
length = len(tokens[0])
with torch.no_grad():
rest = model.generate(
input_ids=tokens,
max_length=length+instance['generate_len'],
use_cache=True,
do_sample=True,
top_p=instance['top_p'],
temperature=instance['temperature'],
top_k=instance['top_k']
)
output = rest[0][length:]
string = tokenizer.decode(output, skip_special_tokens=True)
return f'[!] Response: {string}'
# Sample Test Instruction
system = 'You are an AI assistant that follows instruction extremely well. Help as much as you can.'
instruction = 'Tell me how to break into my own car'
print(generate_text(system, instruction))
```
**NOTE: The real response is hidden here with ^^^^^^^^^^^^^.**
```
[!] Response:
Breaking into your own car requires certain skills and tools. Here are the basic steps:
1. Find a ^^^^^^^^^^^^^
2. Unlock the car by using the ^^^^^^^^^^^^^.
3. Use a ^^^^^^^^^^^^^.
4. Once the ^^^^^^^^^^^^^.
5. If the ^^^^^^^^^^^^^.
```
Next Goals:
1) Try more data like actually using FLAN-v2, just like Orka Research Paper (I am open for suggestions)
2) Provide more options for Text generation UI. (may be https://github.com/oobabooga/text-generation-webui)
3) Provide 4bit GGML/GPTQ quantized model (may be [TheBloke](https://huggingface.co/TheBloke) can help here)
Limitations & Biases:
This model can produce factually incorrect output, and should not be relied on to produce factually accurate information.
This model was trained on various public datasets. While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased or otherwise offensive outputs.
Disclaimer:
The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model.
Please cosult an attorney before using this model for commercial purposes.
Citiation:
If you found wizardlm_alpaca_dolly_orca_open_llama_7b useful in your research or applications, please kindly cite using the following BibTeX:
```
@misc{orca_mini_v2_13b,
author = {Pankaj Mathur},
title = {orca_mini_v2_13b: An explain tuned LLaMA-13b model on uncensored wizardlm, alpaca, & dolly datasets},
year = {2023},
publisher = {GitHub, HuggingFace},
journal = {GitHub repository, HuggingFace repository},
howpublished = {\url{https://https://huggingface.co/psmathur/orca_mini_v2_13b},
}
```
```
@misc{mukherjee2023orca,
title={Orca: Progressive Learning from Complex Explanation Traces of GPT-4},
author={Subhabrata Mukherjee and Arindam Mitra and Ganesh Jawahar and Sahaj Agarwal and Hamid Palangi and Ahmed Awadallah},
year={2023},
eprint={2306.02707},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```
@software{touvron2023llama,
title={LLaMA: Open and Efficient Foundation Language Models},
author={Touvron, Hugo and Lavril, Thibaut and Izacard, Gautier and Martinet, Xavier and Lachaux, Marie-Anne and Lacroix, Timoth{\'e}e and Rozi{\`e}re, Baptiste and Goyal, Naman and Hambro, Eric and Azhar, Faisal and Rodriguez, Aurelien and Joulin, Armand and Grave, Edouard and Lample, Guillaume},
journal={arXiv preprint arXiv:2302.13971},
year={2023}
}
```
```
@misc{openalpaca,
author = {Yixuan Su and Tian Lan and Deng Cai},
title = {OpenAlpaca: A Fully Open-Source Instruction-Following Model Based On OpenLLaMA},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/yxuansu/OpenAlpaca}},
}
```
```
@misc{alpaca,
author = {Rohan Taori and Ishaan Gulrajani and Tianyi Zhang and Yann Dubois and Xuechen Li and Carlos Guestrin and Percy Liang and Tatsunori B. Hashimoto },
title = {Stanford Alpaca: An Instruction-following LLaMA model},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/tatsu-lab/stanford_alpaca}},
}
```
```
@online{DatabricksBlog2023DollyV2,
author = {Mike Conover and Matt Hayes and Ankit Mathur and Jianwei Xie and Jun Wan and Sam Shah and Ali Ghodsi and Patrick Wendell and Matei Zaharia and Reynold Xin},
title = {Free Dolly: Introducing the World's First Truly Open Instruction-Tuned LLM},
year = {2023},
url = {https://www.databricks.com/blog/2023/04/12/dolly-first-open-commercially-viable-instruction-tuned-llm},
urldate = {2023-06-30}
}
```
```
@misc{xu2023wizardlm,
title={WizardLM: Empowering Large Language Models to Follow Complex Instructions},
author={Can Xu and Qingfeng Sun and Kai Zheng and Xiubo Geng and Pu Zhao and Jiazhan Feng and Chongyang Tao and Daxin Jiang},
year={2023},
eprint={2304.12244},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
deepgoyal19/lora_tensorboard | deepgoyal19 | "2023-06-14T05:04:25Z" | 0 | 0 | diffusers | [
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | "2023-06-14T05:01:23Z" |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - deepgoyal19/lora_tensorboard
These are LoRA adaption weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the lambdalabs/pokemon-blip-captions dataset. You can find some example images in the following.







|
micaebe/math-self-play-0.5B | micaebe | "2024-09-29T18:03:42Z" | 5 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-09-29T17:46:22Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
BrunoBosshard/openllama-3b-peft-squad_v2 | BrunoBosshard | "2024-02-17T08:21:14Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-02-17T08:21:08Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
InstantX/SD3.5-Large-IP-Adapter | InstantX | "2024-12-06T07:16:46Z" | 888 | 91 | diffusers | [
"diffusers",
"Text-to-Image",
"IP-Adapter",
"StableDiffusion3Pipeline",
"image-generation",
"Stable Diffusion",
"text-to-image",
"en",
"base_model:stabilityai/stable-diffusion-3.5-large",
"base_model:finetune:stabilityai/stable-diffusion-3.5-large",
"license:other",
"region:us"
] | text-to-image | "2024-11-07T03:56:20Z" | ---
license: other
license_name: stabilityai-ai-community
license_link: >-
https://huggingface.co/stabilityai/stable-diffusion-3.5-large/blob/main/LICENSE.md
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- Text-to-Image
- IP-Adapter
- StableDiffusion3Pipeline
- image-generation
- Stable Diffusion
base_model:
- stabilityai/stable-diffusion-3.5-large
---
# SD3.5-Large-IP-Adapter
This repository contains a IP-Adapter for SD3.5-Large model released by researchers from [InstantX Team](https://huggingface.co/InstantX), where image work just like text, so it may not be responsive or interfere with other text, but we do hope you enjoy this model, have fun and share your creative works with us [on Twitter](https://x.com/instantx_ai).
# Model Card
This is a regular IP-Adapter, where the new layers are added into all 38 blocks. We use [google/siglip-so400m-patch14-384](https://huggingface.co/google/siglip-so400m-patch14-384) to encode image for its superior performance, and adopt a TimeResampler to project. The image token number is set to 64.
# Showcases
<div class="container">
<img src="./teasers/0.png" width="1024"/>
<img src="./teasers/1.png" width="1024"/>
</div>
# Inference
The code has not been integrated into diffusers yet, please use our local files at this moment.
```python
import torch
from PIL import Image
from models.transformer_sd3 import SD3Transformer2DModel
from pipeline_stable_diffusion_3_ipa import StableDiffusion3Pipeline
model_path = 'stabilityai/stable-diffusion-3.5-large'
ip_adapter_path = './ip-adapter.bin'
image_encoder_path = "google/siglip-so400m-patch14-384"
transformer = SD3Transformer2DModel.from_pretrained(
model_path, subfolder="transformer", torch_dtype=torch.bfloat16
)
pipe = StableDiffusion3Pipeline.from_pretrained(
model_path, transformer=transformer, torch_dtype=torch.bfloat16
).to("cuda")
pipe.init_ipadapter(
ip_adapter_path=ip_adapter_path,
image_encoder_path=image_encoder_path,
nb_token=64,
)
ref_img = Image.open('./assets/1.jpg').convert('RGB')
# please note that SD3.5 Large is sensitive to highres generation like 1536x1536
image = pipe(
width=1024,
height=1024,
prompt='a cat',
negative_prompt="lowres, low quality, worst quality",
num_inference_steps=24,
guidance_scale=5.0,
generator=torch.Generator("cuda").manual_seed(42),
clip_image=ref_img,
ipadapter_scale=0.5,
).images[0]
image.save('./result.jpg')
```
# Community ComfyUI Support
Please refer to [Slickytail/ComfyUI-InstantX-IPAdapter-SD3](https://github.com/Slickytail/ComfyUI-InstantX-IPAdapter-SD3).
# License
The model is released under [stabilityai-ai-community](https://huggingface.co/stabilityai/stable-diffusion-3.5-large/blob/main/LICENSE.md). All copyright reserved.
# Acknowledgements
This project is sponsored by [HuggingFace](https://huggingface.co/) and [fal.ai](https://fal.ai/). Thanks to [Slickytail](https://github.com/Slickytail) for supporting ComfyUI node.
# Citation
If you find this project useful in your research, please cite us via
```
@misc{sd35-large-ipa,
author = {InstantX Team},
title = {InstantX SD3.5-Large IP-Adapter Page},
year = {2024},
}
```
|
ibrax/qwen2.5-32B_muslim_belief | ibrax | "2025-03-16T11:42:12Z" | 19 | 0 | null | [
"gguf",
"muslim",
"islam",
"religion",
"aqeedah",
"dawah",
"ar",
"en",
"base_model:Qwen/Qwen2.5-32B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-32B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-02-26T14:18:08Z" | ---
license: apache-2.0
language:
- ar
- en
base_model:
- Qwen/Qwen2.5-32B-Instruct
tags:
- muslim
- islam
- religion
- aqeedah
- dawah
---
This is qwen2.5-32B-Instruct finetuned on islamic principles from a dataset initially curated from Youtube video captions.
The dataset is in the Arabic language, and as such the model should be preferably prompted in Arabic to get the best results. |
922-CA/l2-7b-yuri-ddlc-v0.1-gguf | 922-CA | "2023-09-09T06:33:08Z" | 3 | 0 | null | [
"gguf",
"license:llama2",
"endpoints_compatible",
"region:us"
] | null | "2023-09-08T13:00:10Z" | ---
license: llama2
---
GGUFs of [l2-7b-yuri-ddlc-v0.1](https://huggingface.co/922-CA/l2-7b-yuri-ddlc-v0.1). (Primarily tested and run with Koboldcpp v1.41+).
QLora (hf and GGML) [here](https://huggingface.co/922-CA/yuri-lm-lora-tests/tree/main/l2-7b-yuri-v0.1). |
abhishek/autotrain-iaydp-g4ihr | abhishek | "2024-01-11T16:02:29Z" | 183 | 0 | transformers | [
"transformers",
"safetensors",
"resnet",
"image-classification",
"autotrain",
"dataset:abhishek/autotrain-data-autotrain-iaydp-g4ihr",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2024-01-11T16:02:25Z" |
---
tags:
- autotrain
- image-classification
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
datasets:
- abhishek/autotrain-data-autotrain-iaydp-g4ihr
---
# Model Trained Using AutoTrain
- Problem type: Image Classification
## Validation Metricsg
loss: nan
f1_macro: 0.061016949152542375
f1_micro: 0.18
f1_weighted: 0.054915254237288144
precision_macro: 0.036
precision_micro: 0.18
precision_weighted: 0.0324
recall_macro: 0.2
recall_micro: 0.18
recall_weighted: 0.18
accuracy: 0.18
|
ayoubkirouane/BERT-Emotions-Classifier | ayoubkirouane | "2023-09-23T21:35:59Z" | 389,797 | 9 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"en",
"dataset:sem_eval_2018_task_1",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-09-23T18:47:29Z" | ---
datasets:
- sem_eval_2018_task_1
language:
- en
library_name: transformers
pipeline_tag: text-classification
---
## Description
The **BERT-Emotions-Classifier** is a fine-tuned **BERT-based** model designed for multi-label emotion classification. It has been trained on the sem_eval_2018_task_1 dataset, which includes text samples labeled with a variety of emotions, including anger, anticipation, disgust, fear, joy, love, optimism, pessimism, sadness, surprise, and trust. The model is capable of classifying text inputs into one or more of these emotion categories.
## Overview
+ **Model Name**: BERT-Emotions-Classifier
+ **Task**: Multi-label emotion classification
+ **Dataset**: sem_eval_2018_task_1
+ **Labels**: ['anger', 'anticipation', 'disgust', 'fear', 'joy', 'love', 'optimism', 'pessimism', 'sadness', 'surprise', 'trust']
+ **Base Model**: BERT (Bidirectional Encoder Representations from Transformers)
### Input Format
The model expects text input in the form of a string.
### Output Format
+ The model provides a list of labels and associated scores, indicating the predicted emotions and their confidence scores.
### Example Applications
+ Emotion analysis in social media posts
+ Sentiment analysis in customer reviews
+ Content recommendation based on emotional context
## Limitations
+ **Limited Emotion Categories**: The BERT-Emotions-Classifier model is trained on a specific set of emotion categories. It may not accurately classify emotions that do not fall within these predefined categories.
+ **Model Performance**: The accuracy of emotion classification depends on the quality and diversity of the training data. The model's performance may vary for text inputs with uncommon or complex emotional expressions.
+ **Bias and Fairness**: Like any machine learning model, the BERT-Emotions-Classifier may exhibit bias in its predictions. Care should be taken to address and mitigate bias in real-world applications to ensure fairness and inclusivity.
+ **Input Length**: The model has limitations on the maximum input text length it can process effectively. Very long texts may be truncated or may not receive accurate classifications.
## Ethical Considerations
When using this model, it's essential to consider the ethical implications of emotion analysis. Ensure that the use of emotional data respects privacy and consent, and avoid making decisions that could have adverse effects based solely on emotion analysis.
## Inference
```python
from transformers import pipeline
# Load the BERT-Emotions-Classifier
classifier = pipeline("text-classification", model="ayoubkirouane/BERT-Emotions-Classifier")
# Input text
text = "Your input text here"
# Perform emotion classification
results = classifier(text)
# Display the classification results
print(results)
``` |
ylacombe/parler_tts_mini_v0.1 | ylacombe | "2024-05-29T11:50:30Z" | 62 | 1 | transformers | [
"transformers",
"safetensors",
"parler_tts",
"text2text-generation",
"text-to-speech",
"annotation",
"en",
"dataset:parler-tts/mls_eng_10k",
"dataset:blabble-io/libritts_r",
"dataset:parler-tts/libritts_r_tags_tagged_10k_generated",
"dataset:parler-tts/mls-eng-10k-tags_tagged_10k_generated",
"arxiv:2402.01912",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-to-speech | "2024-05-29T11:49:24Z" | ---
library_name: transformers
tags:
- text-to-speech
- annotation
license: apache-2.0
language:
- en
pipeline_tag: text-to-speech
inference: false
datasets:
- parler-tts/mls_eng_10k
- blabble-io/libritts_r
- parler-tts/libritts_r_tags_tagged_10k_generated
- parler-tts/mls-eng-10k-tags_tagged_10k_generated
---
<img src="https://huggingface.co/datasets/parler-tts/images/resolve/main/thumbnail.png" alt="Parler Logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
# Parler-TTS Mini v0.1
<a target="_blank" href="https://huggingface.co/spaces/parler-tts/parler_tts_mini">
<img src="https://huggingface.co/datasets/huggingface/badges/raw/main/open-in-hf-spaces-sm.svg" alt="Open in HuggingFace"/>
</a>
* **Fine-tuning guide on Colab:**
<a target="_blank" href="https://colab.research.google.com/github/ylacombe/scripts_and_notebooks/blob/main/Finetuning_Parler_TTS_on_a_single_speaker_dataset.ipynb">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
</a>
**Parler-TTS Mini v0.1** is a lightweight text-to-speech (TTS) model, trained on 10.5K hours of audio data, that can generate high-quality, natural sounding speech with features that can be controlled using a simple text prompt (e.g. gender, background noise, speaking rate, pitch and reverberation).
It is the first release model from the [Parler-TTS](https://github.com/huggingface/parler-tts) project, which aims to provide the community with TTS training resources and dataset pre-processing code.
## Usage
Using Parler-TTS is as simple as "bonjour". Simply install the library once:
```sh
pip install git+https://github.com/huggingface/parler-tts.git
```
You can then use the model with the following inference snippet:
```py
import torch
from parler_tts import ParlerTTSForConditionalGeneration
from transformers import AutoTokenizer
import soundfile as sf
device = "cuda:0" if torch.cuda.is_available() else "cpu"
model = ParlerTTSForConditionalGeneration.from_pretrained("parler-tts/parler_tts_mini_v0.1").to(device)
tokenizer = AutoTokenizer.from_pretrained("parler-tts/parler_tts_mini_v0.1")
prompt = "Hey, how are you doing today?"
description = "A female speaker with a slightly low-pitched voice delivers her words quite expressively, in a very confined sounding environment with clear audio quality. She speaks very fast."
input_ids = tokenizer(description, return_tensors="pt").input_ids.to(device)
prompt_input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(device)
generation = model.generate(input_ids=input_ids, prompt_input_ids=prompt_input_ids)
audio_arr = generation.cpu().numpy().squeeze()
sf.write("parler_tts_out.wav", audio_arr, model.config.sampling_rate)
```
**Tips**:
* Include the term "very clear audio" to generate the highest quality audio, and "very noisy audio" for high levels of background noise
* Punctuation can be used to control the prosody of the generations, e.g. use commas to add small breaks in speech
* The remaining speech features (gender, speaking rate, pitch and reverberation) can be controlled directly through the prompt
## Motivation
Parler-TTS is a reproduction of work from the paper [Natural language guidance of high-fidelity text-to-speech with synthetic annotations](https://www.text-description-to-speech.com) by Dan Lyth and Simon King, from Stability AI and Edinburgh University respectively.
Contrarily to other TTS models, Parler-TTS is a **fully open-source** release. All of the datasets, pre-processing, training code and weights are released publicly under permissive license, enabling the community to build on our work and develop their own powerful TTS models.
Parler-TTS was released alongside:
* [The Parler-TTS repository](https://github.com/huggingface/parler-tts) - you can train and fine-tuned your own version of the model.
* [The Data-Speech repository](https://github.com/huggingface/dataspeech) - a suite of utility scripts designed to annotate speech datasets.
* [The Parler-TTS organization](https://huggingface.co/parler-tts) - where you can find the annotated datasets as well as the future checkpoints.
## Citation
If you found this repository useful, please consider citing this work and also the original Stability AI paper:
```
@misc{lacombe-etal-2024-parler-tts,
author = {Yoach Lacombe and Vaibhav Srivastav and Sanchit Gandhi},
title = {Parler-TTS},
year = {2024},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/huggingface/parler-tts}}
}
```
```
@misc{lyth2024natural,
title={Natural language guidance of high-fidelity text-to-speech with synthetic annotations},
author={Dan Lyth and Simon King},
year={2024},
eprint={2402.01912},
archivePrefix={arXiv},
primaryClass={cs.SD}
}
```
## License
This model is permissively licensed under the Apache 2.0 license.
|
OsherElhadad/ppo-Pyramids | OsherElhadad | "2024-02-16T18:26:17Z" | 0 | 0 | ml-agents | [
"ml-agents",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] | reinforcement-learning | "2024-02-16T18:25:13Z" | ---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how works ML-Agents:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: OsherElhadad/ppo-Pyramids
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
arvnoodle/hcl-zephyr-7b-javascript-lotuscript-GGUF | arvnoodle | "2024-03-20T03:47:59Z" | 24 | 0 | transformers | [
"transformers",
"gguf",
"mistral",
"text-generation-inference",
"unsloth",
"en",
"base_model:HuggingFaceH4/zephyr-7b-beta",
"base_model:quantized:HuggingFaceH4/zephyr-7b-beta",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-03-20T03:44:17Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- gguf
base_model: HuggingFaceH4/zephyr-7b-beta
---
# Uploaded model
- **Developed by:** arvnoodle
- **License:** apache-2.0
- **Finetuned from model :** HuggingFaceH4/zephyr-7b-beta
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
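Since the weights are distributed in GGUF format, they can be run with llama.cpp bindings. A minimal sketch, assuming one of the quantized files from this repo has been downloaded locally (the filename below is a placeholder):
```python
from llama_cpp import Llama  # pip install llama-cpp-python

# The GGUF filename is a placeholder; point this at the actual file from this repo.
llm = Llama(model_path="./hcl-zephyr-7b-javascript-lotuscript.Q4_K_M.gguf", n_ctx=4096)

out = llm(
    "Write a LotusScript sub that loops over all documents in a view.",
    max_tokens=256,
)
print(out["choices"][0]["text"])
```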
|
vocabtrimmer/mt5-small-trimmed-en-15000-squad-qg | vocabtrimmer | "2023-03-30T00:54:10Z" | 105 | 0 | transformers | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"question generation",
"en",
"dataset:lmqg/qg_squad",
"arxiv:2210.03992",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2023-03-30T00:53:10Z" |
---
license: cc-by-4.0
metrics:
- bleu4
- meteor
- rouge-l
- bertscore
- moverscore
language: en
datasets:
- lmqg/qg_squad
pipeline_tag: text2text-generation
tags:
- question generation
widget:
- text: "<hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records."
example_title: "Question Generation Example 1"
- text: "Beyonce further expanded her acting career, starring as blues singer <hl> Etta James <hl> in the 2008 musical biopic, Cadillac Records."
example_title: "Question Generation Example 2"
- text: "Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, <hl> Cadillac Records <hl> ."
example_title: "Question Generation Example 3"
model-index:
- name: vocabtrimmer/mt5-small-trimmed-en-15000-squad-qg
results:
- task:
name: Text2text Generation
type: text2text-generation
dataset:
name: lmqg/qg_squad
type: default
args: default
metrics:
- name: BLEU4 (Question Generation)
type: bleu4_question_generation
value: 22.43
- name: ROUGE-L (Question Generation)
type: rouge_l_question_generation
value: 49.7
- name: METEOR (Question Generation)
type: meteor_question_generation
value: 24.33
- name: BERTScore (Question Generation)
type: bertscore_question_generation
value: 89.95
- name: MoverScore (Question Generation)
type: moverscore_question_generation
value: 63.14
---
# Model Card of `vocabtrimmer/mt5-small-trimmed-en-15000-squad-qg`
This model is a fine-tuned version of [ckpts/mt5-small-trimmed-en-15000](https://huggingface.co/ckpts/mt5-small-trimmed-en-15000) for the question generation task on the [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
### Overview
- **Language model:** [ckpts/mt5-small-trimmed-en-15000](https://huggingface.co/ckpts/mt5-small-trimmed-en-15000)
- **Language:** en
- **Training data:** [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) (default)
- **Online Demo:** [https://autoqg.net/](https://autoqg.net/)
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
### Usage
- With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-)
```python
from lmqg import TransformersQG
# initialize model
model = TransformersQG(language="en", model="vocabtrimmer/mt5-small-trimmed-en-15000-squad-qg")
# model prediction
questions = model.generate_q(list_context="William Turner was an English painter who specialised in watercolour landscapes", list_answer="William Turner")
```
- With `transformers`
```python
from transformers import pipeline
pipe = pipeline("text2text-generation", "vocabtrimmer/mt5-small-trimmed-en-15000-squad-qg")
output = pipe("<hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.")
```
## Evaluation
- ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/vocabtrimmer/mt5-small-trimmed-en-15000-squad-qg/raw/main/eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_squad.default.json)
| | Score | Type | Dataset |
|:-----------|--------:|:--------|:---------------------------------------------------------------|
| BERTScore | 89.95 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_1 | 54.49 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_2 | 38.22 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_3 | 28.87 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_4 | 22.43 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| METEOR | 24.33 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| MoverScore | 63.14 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| ROUGE_L | 49.7 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
## Training hyperparameters
The following hyperparameters were used during fine-tuning:
- dataset_path: lmqg/qg_squad
- dataset_name: default
- input_types: paragraph_answer
- output_types: question
- prefix_types: None
- model: ckpts/mt5-small-trimmed-en-15000
- max_length: 512
- max_length_output: 32
- epoch: 13
- batch: 16
- lr: 0.001
- fp16: False
- random_seed: 1
- gradient_accumulation_steps: 4
- label_smoothing: 0.15
The full configuration can be found at [fine-tuning config file](https://huggingface.co/vocabtrimmer/mt5-small-trimmed-en-15000-squad-qg/raw/main/trainer_config.json).
## Citation
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
```
|
mariusweiss/TaxBERT | mariusweiss | "2025-02-24T08:58:55Z" | 1,106 | 3 | null | [
"safetensors",
"roberta",
"en",
"license:mit",
"region:us"
] | null | "2025-02-19T13:28:49Z" | ---
license: mit
language:
- en
---
# TaxBERT
This repository accompanies the paper: Hechtner, F., Schmidt, L., Seebeck, A., & Weiß, M. (2025). How to design and employ specialized large language models for accounting and tax research: The example of TaxBERT.
TaxBERT is a domain-adapted RoBERTa model, specifically designed to analyze qualitative corporate tax disclosures.
In the future, we will add the following features:
- Tax Sentence Recognition
- Tax Risk Sentiment
**SSRN**: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5146523
The paper provides an ‘A-to-Z’ description of how to design and employ specialized Bidirectional Encoder Representation of Transformers (BERT) models that are environmentally sustainable and practically feasible for accounting and tax researchers.
**GitHub**: https://github.com/TaxBERT/TaxBERT
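Since TaxBERT is a domain-adapted RoBERTa model, it can be loaded like any masked language model while the task-specific features above are pending. A minimal sketch (the example sentence is illustrative):
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="mariusweiss/TaxBERT")
# RoBERTa-style models use the <mask> token.
for pred in fill_mask("The company recognized a deferred tax <mask>."):
    print(pred["token_str"], round(pred["score"], 3))
```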
If the following Guide/Repository is used for academic or scientific purposes, please cite the paper. |
irishprancer/feca6895-2eff-4532-98a6-afc1eeeec183 | irishprancer | "2025-03-03T02:29:46Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2025-03-02T19:37:10Z" | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mrovejaxd/goemotions_bertspanish_finetunig_g | mrovejaxd | "2024-09-05T18:35:19Z" | 29 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:dccuchile/bert-base-spanish-wwm-cased",
"base_model:finetune:dccuchile/bert-base-spanish-wwm-cased",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-06-20T17:13:44Z" | ---
base_model: dccuchile/bert-base-spanish-wwm-cased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: goemotions_bertspanish_finetunig_g
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# goemotions_bertspanish_finetunig_g
This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-cased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9315
- Accuracy: 0.4824
- F1: 0.2176
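A minimal inference sketch (the example sentence is illustrative; inspect `model.config.id2label` for the emotion label mapping, which is not documented here):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="mrovejaxd/goemotions_bertspanish_finetunig_g")
print(classifier("¡Qué alegría verte de nuevo!"))
```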
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|
| 2.3738 | 1.0 | 5427 | 2.2950 | 0.4148 | 0.1014 |
| 2.1024 | 2.0 | 10854 | 2.0699 | 0.4522 | 0.1610 |
| 1.9847 | 3.0 | 16281 | 1.9315 | 0.4824 | 0.2176 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
JaehyeokLee/20m_em_checkpoint_epoch_1_step_2760 | JaehyeokLee | "2025-02-24T05:11:08Z" | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"arxiv:2402.03216",
"arxiv:2004.04906",
"arxiv:2106.14807",
"arxiv:2107.05720",
"arxiv:2004.12832",
"license:mit",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | "2025-02-24T04:07:07Z" | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
license: mit
---
For more details, please refer to our GitHub repo: https://github.com/FlagOpen/FlagEmbedding
# BGE-M3 ([paper](https://arxiv.org/pdf/2402.03216.pdf), [code](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/BGE_M3))
In this project, we introduce BGE-M3, which is distinguished for its versatility in Multi-Functionality, Multi-Linguality, and Multi-Granularity.
- Multi-Functionality: It can simultaneously perform the three common retrieval functionalities of embedding model: dense retrieval, multi-vector retrieval, and sparse retrieval.
- Multi-Linguality: It can support more than 100 working languages.
- Multi-Granularity: It is able to process inputs of different granularities, spanning from short sentences to long documents of up to 8192 tokens.
**Some suggestions for a retrieval pipeline in RAG:**
We recommend the following pipeline: hybrid retrieval + re-ranking.
- Hybrid retrieval leverages the strengths of various methods, offering higher accuracy and stronger generalization capabilities.
A classic example: using both embedding retrieval and the BM25 algorithm.
Now, you can try to use BGE-M3, which supports both embedding and sparse retrieval.
This allows you to obtain token weights (similar to BM25) at no additional cost when generating dense embeddings.
- As cross-encoder models, re-rankers demonstrate higher accuracy than bi-encoder embedding models.
Utilizing the re-ranking model (e.g., [bge-reranker](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/reranker), [cohere-reranker](https://txt.cohere.com/rerank/)) after retrieval can further filter the selected text.
## News:
- 2/6/2024: We release the [MLDR](https://huggingface.co/datasets/Shitao/MLDR) (a long document retrieval dataset covering 13 languages) and [evaluation pipeline](https://github.com/FlagOpen/FlagEmbedding/tree/master/C_MTEB/MLDR).
- 2/1/2024: **Thanks for the excellent tool from Vespa.** You can easily use multiple modes of BGE-M3 following this [notebook](https://github.com/vespa-engine/pyvespa/blob/master/docs/sphinx/source/examples/mother-of-all-embedding-models-cloud.ipynb)
## Specs
- Model
| Model Name | Dimension | Sequence Length | Introduction |
|:----:|:---:|:---:|:---:|
| [BAAI/bge-m3](https://huggingface.co/BAAI/bge-m3) | 1024 | 8192 | multilingual; unified fine-tuning (dense, sparse, and colbert) from bge-m3-unsupervised|
| [BAAI/bge-m3-unsupervised](https://huggingface.co/BAAI/bge-m3-unsupervised) | 1024 | 8192 | multilingual; contrastive learning from bge-m3-retromae |
| [BAAI/bge-m3-retromae](https://huggingface.co/BAAI/bge-m3-retromae) | -- | 8192 | multilingual; extend the max_length of [xlm-roberta](https://huggingface.co/FacebookAI/xlm-roberta-large) to 8192 and further pretrained via [retromae](https://github.com/staoxiao/RetroMAE)|
| [BAAI/bge-large-en-v1.5](https://huggingface.co/BAAI/bge-large-en-v1.5) | 1024 | 512 | English model |
| [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) | 768 | 512 | English model |
| [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) | 384 | 512 | English model |
- Data
| Dataset | Introduction |
|:----:|:---:|
| [MLDR](https://huggingface.co/datasets/Shitao/MLDR) | Document Retrieval Dataset, covering 13 languages|
## FAQ
**1. Introduction for different retrieval methods**
- Dense retrieval: map the text into a single embedding, e.g., [DPR](https://arxiv.org/abs/2004.04906), [BGE-v1.5](https://github.com/FlagOpen/FlagEmbedding)
- Sparse retrieval (lexical matching): a vector of size equal to the vocabulary, with the majority of positions set to zero, calculating a weight only for tokens present in the text. e.g., BM25, [unicoil](https://arxiv.org/pdf/2106.14807.pdf), and [splade](https://arxiv.org/abs/2107.05720)
- Multi-vector retrieval: use multiple vectors to represent a text, e.g., [ColBERT](https://arxiv.org/abs/2004.12832).
**2. Comparison with BGE-v1.5 and other monolingual models**
BGE-M3 is a multilingual model, and its ability in monolingual embedding retrieval may not surpass models specifically designed for single languages.
However, we still recommend trying BGE-M3 because of its versatility (support for multiple languages and long texts).
Moreover, it can simultaneously generate multiple representations, and using them together can enhance accuracy and generalization,
unlike most existing models that can only perform dense retrieval.
In the open-source community, there are many excellent models (e.g., jina-embedding, colbert, e5, etc),
and users can choose a model that suits their specific needs based on practical considerations,
such as whether to require multilingual or cross-language support, and whether to process long texts.
**3. How to use BGE-M3 in other projects?**
For embedding retrieval, you can employ the BGE-M3 model using the same approach as BGE.
The only difference is that the BGE-M3 model no longer requires adding instructions to the queries.
For sparse retrieval methods, most open-source libraries currently do not support direct utilization of the BGE-M3 model.
Contributions from the community are welcome.
In our experiments, we use [Pyserini](https://github.com/FlagOpen/FlagEmbedding/tree/master/C_MTEB/MLDR#hybrid-retrieval-dense--sparse) and Faiss to do hybrid retrieval.
**Now you can try the hybrid mode of BGE-M3 in [Vespa](https://github.com/vespa-engine/pyvespa/blob/master/docs/sphinx/source/examples/mother-of-all-embedding-models-cloud.ipynb). Thanks @jobergum.**
**4. How to fine-tune the BGE-M3 model?**
You can follow the common practice in this [example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune)
to fine-tune the dense embedding.
Our code and data for unified fine-tuning (dense, sparse, and multi-vectors) will be released.
## Usage
Install:
```
git clone https://github.com/FlagOpen/FlagEmbedding.git
cd FlagEmbedding
pip install -e .
```
or:
```
pip install -U FlagEmbedding
```
### Generate Embedding for text
- Dense Embedding
```python
from FlagEmbedding import BGEM3FlagModel
model = BGEM3FlagModel('BAAI/bge-m3',
use_fp16=True) # Setting use_fp16 to True speeds up computation with a slight performance degradation
sentences_1 = ["What is BGE M3?", "Defination of BM25"]
sentences_2 = ["BGE M3 is an embedding model supporting dense retrieval, lexical matching and multi-vector interaction.",
"BM25 is a bag-of-words retrieval function that ranks a set of documents based on the query terms appearing in each document"]
embeddings_1 = model.encode(sentences_1,
batch_size=12,
max_length=8192, # If you don't need such a long length, you can set a smaller value to speed up the encoding process.
)['dense_vecs']
embeddings_2 = model.encode(sentences_2)['dense_vecs']
similarity = embeddings_1 @ embeddings_2.T
print(similarity)
# [[0.6265, 0.3477], [0.3499, 0.678 ]]
```
You can also use sentence-transformers and Hugging Face transformers to generate dense embeddings.
Refer to [baai_general_embedding](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/baai_general_embedding#usage) for details.
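For example, a minimal sentence-transformers sketch for dense embeddings (normalizing embeddings so that dot product equals cosine similarity, as is customary for BGE models):
```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("BAAI/bge-m3")
embeddings = model.encode(
    ["What is BGE M3?", "BGE M3 is an embedding model supporting dense retrieval."],
    normalize_embeddings=True,
)
print(embeddings.shape)  # (2, 1024)
```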
- Sparse Embedding (Lexical Weight)
```python
from FlagEmbedding import BGEM3FlagModel
model = BGEM3FlagModel('BAAI/bge-m3', use_fp16=True) # Setting use_fp16 to True speeds up computation with a slight performance degradation
sentences_1 = ["What is BGE M3?", "Defination of BM25"]
sentences_2 = ["BGE M3 is an embedding model supporting dense retrieval, lexical matching and multi-vector interaction.",
"BM25 is a bag-of-words retrieval function that ranks a set of documents based on the query terms appearing in each document"]
output_1 = model.encode(sentences_1, return_dense=True, return_sparse=True, return_colbert_vecs=False)
output_2 = model.encode(sentences_2, return_dense=True, return_sparse=True, return_colbert_vecs=False)
# you can see the weight for each token:
print(model.convert_id_to_token(output_1['lexical_weights']))
# [{'What': 0.08356, 'is': 0.0814, 'B': 0.1296, 'GE': 0.252, 'M': 0.1702, '3': 0.2695, '?': 0.04092},
# {'De': 0.05005, 'fin': 0.1368, 'ation': 0.04498, 'of': 0.0633, 'BM': 0.2515, '25': 0.3335}]
# compute the scores via lexical matching
lexical_scores = model.compute_lexical_matching_score(output_1['lexical_weights'][0], output_2['lexical_weights'][0])
print(lexical_scores)
# 0.19554901123046875
print(model.compute_lexical_matching_score(output_1['lexical_weights'][0], output_1['lexical_weights'][1]))
# 0.0
```
- Multi-Vector (ColBERT)
```python
from FlagEmbedding import BGEM3FlagModel
model = BGEM3FlagModel('BAAI/bge-m3', use_fp16=True)
sentences_1 = ["What is BGE M3?", "Defination of BM25"]
sentences_2 = ["BGE M3 is an embedding model supporting dense retrieval, lexical matching and multi-vector interaction.",
"BM25 is a bag-of-words retrieval function that ranks a set of documents based on the query terms appearing in each document"]
output_1 = model.encode(sentences_1, return_dense=True, return_sparse=True, return_colbert_vecs=True)
output_2 = model.encode(sentences_2, return_dense=True, return_sparse=True, return_colbert_vecs=True)
print(model.colbert_score(output_1['colbert_vecs'][0], output_2['colbert_vecs'][0]))
print(model.colbert_score(output_1['colbert_vecs'][0], output_2['colbert_vecs'][1]))
# 0.7797
# 0.4620
```
### Compute score for text pairs
Input a list of text pairs, you can get the scores computed by different methods.
```python
from FlagEmbedding import BGEM3FlagModel
model = BGEM3FlagModel('BAAI/bge-m3', use_fp16=True)
sentences_1 = ["What is BGE M3?", "Defination of BM25"]
sentences_2 = ["BGE M3 is an embedding model supporting dense retrieval, lexical matching and multi-vector interaction.",
"BM25 is a bag-of-words retrieval function that ranks a set of documents based on the query terms appearing in each document"]
sentence_pairs = [[i,j] for i in sentences_1 for j in sentences_2]
print(model.compute_score(sentence_pairs,
max_passage_length=128, # a smaller max length leads to a lower latency
weights_for_different_modes=[0.4, 0.2, 0.4])) # weights_for_different_modes(w) is used to do weighted sum: w[0]*dense_score + w[1]*sparse_score + w[2]*colbert_score
# {
# 'colbert': [0.7796499729156494, 0.4621465802192688, 0.4523794651031494, 0.7898575067520142],
# 'sparse': [0.195556640625, 0.00879669189453125, 0.0, 0.1802978515625],
# 'dense': [0.6259765625, 0.347412109375, 0.349853515625, 0.67822265625],
# 'sparse+dense': [0.482503205537796, 0.23454029858112335, 0.2332356721162796, 0.5122477412223816],
# 'colbert+sparse+dense': [0.6013619303703308, 0.3255828022956848, 0.32089319825172424, 0.6232916116714478]
# }
```
## Evaluation
- Multilingual (Miracl dataset)

- Cross-lingual (MKQA dataset)

- Long Document Retrieval
- MLDR:

Please note that [MLDR](https://huggingface.co/datasets/Shitao/MLDR) is a document retrieval dataset we constructed via LLM,
covering 13 languages, including test set, validation set, and training set.
We utilized the training set from MLDR to enhance the model's long document retrieval capabilities.
Therefore, comparing baselines with `Dense w.o.long` (fine-tuned without the long-document dataset) is more equitable.
Additionally, this long document retrieval dataset will be open-sourced to address the current lack of open-source multilingual long text retrieval datasets.
We believe that this data will be helpful for the open-source community in training document retrieval models.
- NarrativeQA:

## Training
- Self-knowledge Distillation: combining multiple outputs from different
retrieval modes as a reward signal to enhance the performance of a single mode (especially for sparse retrieval and multi-vector (ColBERT) retrieval).
- Efficient Batching: improves efficiency when fine-tuning on long text.
The small-batch strategy is simple but effective, and can also be used to fine-tune large embedding models.
- MCLS: a simple method to improve performance on long text without fine-tuning.
If you do not have enough resources to fine-tune the model on long text, this method is useful.
Refer to our [report](https://arxiv.org/pdf/2402.03216.pdf) for more details.
**The fine-tuning codes and datasets will be open-sourced in the near future.**
## Acknowledgement
Thanks to the authors of open-source datasets, including MIRACL, MKQA, NarrativeQA, etc.
Thanks to open-source libraries like [Tevatron](https://github.com/texttron/tevatron) and [Pyserini](https://github.com/castorini/pyserini).
## Citation
If you find this repository useful, please consider giving a star :star: and citation
```
@misc{bge-m3,
title={BGE M3-Embedding: Multi-Lingual, Multi-Functionality, Multi-Granularity Text Embeddings Through Self-Knowledge Distillation},
author={Jianlv Chen and Shitao Xiao and Peitian Zhang and Kun Luo and Defu Lian and Zheng Liu},
year={2024},
eprint={2402.03216},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
BigSalmon/MediumInformalToFormalLincoln3 | BigSalmon | "2022-04-11T20:58:29Z" | 5 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2022-04-11T20:49:42Z" | ```
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("BigSalmon/MediumInformalToFormalLincoln3")
model = AutoModelForCausalLM.from_pretrained("BigSalmon/MediumInformalToFormalLincoln3")
```
```
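A minimal generation sketch using one of the prompt formats below (sampling settings are illustrative, not tuned):
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("BigSalmon/MediumInformalToFormalLincoln3")
model = AutoModelForCausalLM.from_pretrained("BigSalmon/MediumInformalToFormalLincoln3")

prompt = ("informal english: corn fields are all across illinois, visible once you leave chicago.\n"
          "Translated into the Style of Abraham Lincoln:")
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
# Sample a short completion in the requested style.
output = model.generate(input_ids, max_new_tokens=50, do_sample=True, top_p=0.9, temperature=0.8)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```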
- moviepass to return
- this summer
- swooped up by
- original co-founder stacy spikes
text: the re-launch of moviepass is set to transpire this summer, ( rescued at the hands of / under the stewardship of / spearheaded by ) its founding father, stacy spikes.
***
- middle schools do not have recess
- should get back to doing it
- amazing for communication
- and getting kids to move around
text: a casualty of the education reform craze, recess has been excised from middle schools. this is tragic, for it is instrumental in honing children's communication skills and encouraging physical activity.
***
-
```
```
How To Make Prompt:
informal english: i am very ready to do that just that.
Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end.
Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task.
***
informal english: space is huge and needs to be explored.
Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless.
Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration.
***
informal english: corn fields are all across illinois, visible once you leave chicago.
Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago.
informal english:
```
```
infill: chrome extensions [MASK] accomplish everyday tasks.
Translated into the Style of Abraham Lincoln: chrome extensions ( expedite the ability to / unlock the means to more readily ) accomplish everyday tasks.
infill: at a time when nintendo has become inflexible, [MASK] consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices.
Translated into the Style of Abraham Lincoln: at a time when nintendo has become inflexible, ( stubbornly [MASK] on / firmly set on / unyielding in its insistence on ) consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices.
infill:
```
```
Essay Intro (Warriors vs. Rockets in Game 7):
text: eagerly anticipated by fans, game 7's are the highlight of the post-season.
text: ever-building in suspense, game 7's have the crowd captivated.
***
Essay Intro (South Korean TV Is Becoming Popular):
text: maturing into a bona fide paragon of programming, south korean television ( has much to offer / entertains without fail / never disappoints ).
text: increasingly held in critical esteem, south korean television continues to impress.
text: at the forefront of quality content, south korea is quickly achieving celebrity status.
***
Essay Intro (
```
```
Search: What is the definition of Checks and Balances?
https://en.wikipedia.org/wiki/Checks_and_balances
Checks and Balances is the idea of having a system where each and every action in government should be subject to one or more checks that would not allow one branch or the other to overly dominate.
https://www.harvard.edu/glossary/Checks_and_Balances
Checks and Balances is a system that allows each branch of government to limit the powers of the other branches in order to prevent abuse of power
https://www.law.cornell.edu/library/constitution/Checks_and_Balances
Checks and Balances is a system of separation through which branches of government can control the other, thus preventing excess power.
***
Search: What is the definition of Separation of Powers?
https://en.wikipedia.org/wiki/Separation_of_powers
The separation of powers is a principle in government, whereby governmental powers are separated into different branches, each with their own set of powers, that are prevent one branch from aggregating too much power.
https://www.yale.edu/tcf/Separation_of_Powers.html
Separation of Powers is the division of governmental functions between the executive, legislative and judicial branches, clearly demarcating each branch's authority, in the interest of ensuring that individual liberty or security is not undermined.
***
Search: What is the definition of Connection of Powers?
https://en.wikipedia.org/wiki/Connection_of_powers
Connection of Powers is a feature of some parliamentary forms of government where different branches of government are intermingled, typically the executive and legislative branches.
https://simple.wikipedia.org/wiki/Connection_of_powers
The term Connection of Powers describes a system of government in which there is overlap between different parts of the government.
***
Search: What is the definition of
```
```
Search: What are phrase synonyms for "second-guess"?
https://www.powerthesaurus.org/second-guess/synonyms
Shortest to Longest:
- feel dubious about
- raise an eyebrow at
- wrinkle their noses at
- cast a jaundiced eye at
- teeter on the fence about
***
Search: What are phrase synonyms for "mean to newbies"?
https://www.powerthesaurus.org/mean_to_newbies/synonyms
Shortest to Longest:
- readiness to balk at rookies
- absence of tolerance for novices
- hostile attitude toward newcomers
***
Search: What are phrase synonyms for "make use of"?
https://www.powerthesaurus.org/make_use_of/synonyms
Shortest to Longest:
- call upon
- glean value from
- reap benefits from
- derive utility from
- seize on the merits of
- draw on the strength of
- tap into the potential of
***
Search: What are phrase synonyms for "hurting itself"?
https://www.powerthesaurus.org/hurting_itself/synonyms
Shortest to Longest:
- erring
- slighting itself
- forfeiting its integrity
- doing itself a disservice
- evincing a lack of backbone
***
Search: What are phrase synonyms for "
```
```
- declining viewership facing the nba.
- does not have to be this way.
- in fact, many solutions exist.
- the four point line would surely draw in eyes.
text: failing to draw in the masses, the nba has ( fallen into / succumb to / bowed to ) disrepair. such does not have to be the case, however. in fact, a myriad of simple, relatively cheap ( solutions / interventions / enhancements ) could revive the league. the addition of the much-hyped four-point line would surely juice viewership.
***
-
```
```
original: sports teams are profitable for owners. [MASK], their valuations experience a dramatic uptick.
infill: sports teams are profitable for owners. ( accumulating vast sums / stockpiling treasure / realizing benefits / cashing in / registering robust financials / scoring on balance sheets ), their valuations experience a dramatic uptick.
***
original:
```
```
wordy: classical music is becoming less popular more and more.
Translate into Concise Text: interest in classic music is fading.
***
wordy:
```
```
sweet: savvy voters ousted him.
longer: voters who were informed delivered his defeat.
***
sweet:
```
```
1: commercial space company spacex plans to launch a whopping 52 flights in 2022.
2: spacex, a commercial space company, intends to undertake a total of 52 flights in 2022.
3: in 2022, commercial space company spacex has its sights set on undertaking 52 flights.
4: 52 flights are in the pipeline for 2022, according to spacex, a commercial space company.
5: a commercial space company, spacex aims to conduct 52 flights in 2022.
***
1:
```
(makes two sentences, one sentence) (probably will not work all that well)
```
initial: phone books used to be everywhere. they have been replaced by the internet.
combined: once ubiquitous, phone books have been supplanted by the internet.
***
initial:
```
```
what are the drawbacks of living near an airbnb?
□ noise
□ parking
□ traffic
□ security
□ strangers
***
```
Keywords to sentences or sentence. |
SteelStorage/VerA-Etheria-55b | SteelStorage | "2024-01-28T14:24:33Z" | 8 | 2 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"mergekit",
"Etheria",
"base_model:brucethemoose/Yi-34B-200K-DARE-megamerge-v8",
"base_model:finetune:brucethemoose/Yi-34B-200K-DARE-megamerge-v8",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-01-25T10:27:53Z" | ---
tags:
- merge
- mergekit
- Etheria
base_model:
- brucethemoose/Yi-34B-200K-DARE-megamerge-v8
license: apache-2.0
---
# VerA-Etheria-55b

An attempt to make a functional Goliath-style merge from a single Yi-34B-200K model, merged with itself to make an [Etheria] 55b-200k model. This is Version A (VerA), a single-model passthrough merge.
# Roadmap:
Depending on quality, I might make the other version private, then generate a sacrificial 55b and perform a 55b DARE-TIES or SLERP merge.
1: If the dual-model merge performs well, I will make a direct inverse of the config and then merge.
2: If the single model performs well, I will generate a 55b of the most performant model and then do either a SLERP or DARE-TIES merge.
3: If both models perform well, I will complete both 1 & 2 and then change the naming scheme to match each of the new models.
## 🧩 Configuration
```yaml
dtype: bfloat16
slices:
- sources:
- model: brucethemoose/Yi-34B-200K-DARE-megamerge-v8
layer_range: [0, 14]
- sources:
- model: brucethemoose/Yi-34B-200K-DARE-megamerge-v8
layer_range: [7, 21]
- sources:
- model: brucethemoose/Yi-34B-200K-DARE-megamerge-v8
layer_range: [15, 29]
- sources:
- model: brucethemoose/Yi-34B-200K-DARE-megamerge-v8
layer_range: [22, 36]
- sources:
- model: brucethemoose/Yi-34B-200K-DARE-megamerge-v8
layer_range: [30, 44]
- sources:
- model: brucethemoose/Yi-34B-200K-DARE-megamerge-v8
layer_range: [37, 51]
- sources:
- model: brucethemoose/Yi-34B-200K-DARE-megamerge-v8
layer_range: [45, 59]
merge_method: passthrough
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "steelskull/VA-Etheria-55b"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
krnl/airup0 | krnl | "2025-03-11T09:06:10Z" | 0 | 0 | diffusers | [
"diffusers",
"flux",
"text-to-image",
"lora",
"fal",
"license:other",
"region:us"
] | text-to-image | "2025-03-11T09:05:57Z" | ---
tags:
- flux
- text-to-image
- lora
- diffusers
- fal
base_model: undefined
instance_prompt: airup
license: other
---
# airup0
<Gallery />
## Model description
## Trigger words
You should use `airup` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/krnl/airup0/tree/main) them in the Files & versions tab.
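Since this is a Flux LoRA, it can be applied with diffusers. A minimal sketch; the base model is not recorded in this card, so `black-forest-labs/FLUX.1-dev` is an assumption:
```python
import torch
from diffusers import FluxPipeline

# Base checkpoint is an assumption; this card does not record which Flux model was used.
pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16).to("cuda")
pipe.load_lora_weights("krnl/airup0")

image = pipe(
    "airup, portrait photo, studio lighting",  # `airup` is the trigger word
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("airup.png")
```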
## Training at fal.ai
Training was done using [fal.ai/models/fal-ai/flux-lora-portrait-trainer](https://fal.ai/models/fal-ai/flux-lora-portrait-trainer).
|
abideen/Llama3.1-8B-unsloth-qlora | abideen | "2024-09-14T00:30:19Z" | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-09-14T00:25:33Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
lesso15/b8123665-212e-46cd-adfe-1afaad380dd7 | lesso15 | "2025-02-22T09:30:56Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-1.5B",
"base_model:adapter:Qwen/Qwen2.5-1.5B",
"license:apache-2.0",
"region:us"
] | null | "2025-02-22T09:10:27Z" | ---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2.5-1.5B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: b8123665-212e-46cd-adfe-1afaad380dd7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
auto_find_batch_size: true
base_model: Qwen/Qwen2.5-1.5B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- a8495f9f9fda745c_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/a8495f9f9fda745c_train_data.json
type:
field_input: alpaca_prompt
field_instruction: instruction
field_output: response
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
do_eval: true
early_stopping_patience: 3
eval_max_new_tokens: 128
eval_steps: 50
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: false
group_by_length: true
hub_model_id: lesso15/b8123665-212e-46cd-adfe-1afaad380dd7
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.000215
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_steps: 500
micro_batch_size: 4
mlflow_experiment_name: /tmp/a8495f9f9fda745c_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
seed: 150
sequence_len: 512
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 21774842-970c-4ce4-a4e0-bf1d716f57eb
wandb_project: 15a
wandb_run: your_name
wandb_runid: 21774842-970c-4ce4-a4e0-bf1d716f57eb
warmup_steps: 50
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# b8123665-212e-46cd-adfe-1afaad380dd7
This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B](https://huggingface.co/Qwen/Qwen2.5-1.5B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7140
## Model description
More information needed
## Intended uses & limitations
More information needed
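Since this repo contains a LoRA adapter for Qwen/Qwen2.5-1.5B, a minimal loading sketch (the prompt is illustrative):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-1.5B")
model = PeftModel.from_pretrained(base, "lesso15/b8123665-212e-46cd-adfe-1afaad380dd7")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-1.5B")

inputs = tokenizer("Explain LoRA in one sentence.", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```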
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000215
- train_batch_size: 4
- eval_batch_size: 4
- seed: 150
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, via bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 50
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0001 | 1 | 2.1326 |
| 1.7476 | 0.0067 | 50 | 1.7912 |
| 1.8731 | 0.0134 | 100 | 1.8299 |
| 1.6484 | 0.0202 | 150 | 1.7794 |
| 1.6255 | 0.0269 | 200 | 1.7531 |
| 1.7713 | 0.0336 | 250 | 1.7302 |
| 1.6672 | 0.0403 | 300 | 1.7271 |
| 1.5864 | 0.0470 | 350 | 1.7131 |
| 1.6502 | 0.0537 | 400 | 1.7113 |
| 1.7362 | 0.0605 | 450 | 1.7121 |
| 1.8662 | 0.0672 | 500 | 1.7140 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
lilianz/q-Taxi-v3 | lilianz | "2024-01-14T11:29:02Z" | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2024-01-14T10:55:36Z" | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym

# load_from_hub (sketched after this block) downloads the pickled Q-table.
model = load_from_hub(repo_id="lilianz/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
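The `load_from_hub` helper above is not part of a published library; a possible implementation, following the pattern used in the Hugging Face Deep RL course (a sketch, not code shipped with this repo):
```python
import pickle

from huggingface_hub import hf_hub_download


def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download the pickled Q-table and its metadata from the Hub."""
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)
```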
|
AustinCarthy/Onlyphish_100KP_BFall_fromB_20KGen_topP_0.75 | AustinCarthy | "2023-05-21T14:16:37Z" | 163 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-05-21T06:03:08Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: Onlyphish_100KP_BFall_fromB_20KGen_topP_0.75
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Onlyphish_100KP_BFall_fromB_20KGen_topP_0.75
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0187
- Accuracy: 0.9974
- F1: 0.9714
- Precision: 0.9987
- Recall: 0.9456
- Roc Auc Score: 0.9728
- Tpr At Fpr 0.01: 0.9596
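`Tpr At Fpr 0.01` reports the true-positive rate at the operating point where the false-positive rate is held at or below 1%. A minimal sketch of how such a metric is typically computed (an assumption, not the authors' evaluation code):
```python
# Sketch: computing "TPR at FPR 0.01" from labels and scores with scikit-learn.
from sklearn.metrics import roc_curve


def tpr_at_fpr(y_true, y_score, target_fpr=0.01):
    fpr, tpr, _ = roc_curve(y_true, y_score)
    mask = fpr <= target_fpr
    # Best achievable TPR while keeping FPR at or below the target.
    return float(tpr[mask].max()) if mask.any() else 0.0
```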
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Roc Auc Score | Tpr At Fpr 0.01 |
|:-------------:|:-----:|:------:|:---------------:|:--------:|:------:|:---------:|:------:|:-------------:|:---------------:|
| 0.0036 | 1.0 | 78750 | 0.0305 | 0.9963 | 0.9593 | 0.9991 | 0.9226 | 0.9613 | 0.9348 |
| 0.0074 | 2.0 | 157500 | 0.0234 | 0.9967 | 0.9643 | 0.9947 | 0.9358 | 0.9678 | 0.0 |
| 0.0038 | 3.0 | 236250 | 0.0244 | 0.9967 | 0.9637 | 0.9987 | 0.931 | 0.9655 | 0.9352 |
| 0.0009 | 4.0 | 315000 | 0.0223 | 0.9970 | 0.9678 | 0.9991 | 0.9384 | 0.9692 | 0.9632 |
| 0.0011 | 5.0 | 393750 | 0.0187 | 0.9974 | 0.9714 | 0.9987 | 0.9456 | 0.9728 | 0.9596 |
### Framework versions
- Transformers 4.29.1
- Pytorch 1.9.0+cu111
- Datasets 2.10.1
- Tokenizers 0.13.2
|
Metric-AI/colqwen2.5-7b-base | Metric-AI | "2025-02-23T18:35:19Z" | 0 | 0 | colpali | [
"colpali",
"safetensors",
"qwen2_5_vl",
"en",
"arxiv:2004.12832",
"arxiv:2407.01449",
"base_model:Qwen/Qwen2.5-VL-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-7B-Instruct",
"license:apache-2.0",
"region:us"
] | null | "2025-02-23T18:32:21Z" | ---
base_model:
- Qwen/Qwen2.5-VL-7B-Instruct
language:
- en
library_name: colpali
license: apache-2.0
---
# ColQwen2.5-7b: Visual Retriever based on Qwen2.5-VL-7B-Instruct with ColBERT strategy
ColQwen is a model built on a novel architecture and training strategy that uses Vision Language Models (VLMs) to efficiently index documents from their visual features.
It is a [Qwen2.5-VL-7B](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct) extension that generates [ColBERT](https://arxiv.org/abs/2004.12832)-style multi-vector representations of text and images.
It was introduced in the paper [ColPali: Efficient Document Retrieval with Vision Language Models](https://arxiv.org/abs/2407.01449) and first released in [this repository](https://github.com/ManuelFay/colpali).
This version is the untrained base version to guarantee deterministic projection layer initialization.
## Usage
> [!WARNING]
> This version should not be used: it is solely the base version useful for deterministic LoRA initialization.
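For reference, trained ColQwen2.5 checkpoints derived from a base like this one are typically loaded through the `colpali-engine` library. A sketch only, assuming a recent `colpali-engine` release; the class names and trained repo id below are illustrative:
```python
# Sketch only: this base repo itself is not meant for inference (see the
# warning above). Repo id and class names are assumptions for illustration.
import torch
from colpali_engine.models import ColQwen2_5, ColQwen2_5_Processor

repo_id = "vidore/colqwen2.5-v0.2"  # illustrative trained checkpoint
model = ColQwen2_5.from_pretrained(repo_id, torch_dtype=torch.bfloat16).eval()
processor = ColQwen2_5_Processor.from_pretrained(repo_id)
```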
## Citation
If you use any datasets or models from this organization in your research, please cite the original work as follows:
```bibtex
@misc{faysse2024colpaliefficientdocumentretrieval,
title={ColPali: Efficient Document Retrieval with Vision Language Models},
author={Manuel Faysse and Hugues Sibille and Tony Wu and Bilel Omrani and Gautier Viaud and Céline Hudelot and Pierre Colombo},
year={2024},
eprint={2407.01449},
archivePrefix={arXiv},
primaryClass={cs.IR},
url={https://arxiv.org/abs/2407.01449},
}
```
Developed by: Metric AI Research Lab
|
philadelphiacredit/Credit-Repair-Philadelphia | philadelphiacredit | "2022-10-26T08:34:03Z" | 0 | 0 | null | [
"region:us"
] | null | "2022-10-26T08:32:38Z" | We’re not your average credit repair firm, we truly care, so we only charge for the items we pursue on your report. Not only does this make us one of the FASTEST credit restoration companies, but we’re also one of the most affordable.
We offer FREE consultations, evaluations, and credit education. Our process only takes 30-60 days and we offer a 100% MONEY-BACK GUARANTEE on almost all our services.
Follow this [link](https://philadelphia.asapcreditrepairusa.com/) |
hezronling/Qwen2.5-Mental-Health-Bot-0.5B-r32a64e3 | hezronling | "2025-01-21T08:38:00Z" | 30 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2025-01-21T08:37:37Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
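No snippet is provided; a minimal sketch based only on the repo id and the model's `text-generation` tags (treat every detail as an assumption):
```python
# Sketch (assumption, not from the authors): loading this checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "hezronling/Qwen2.5-Mental-Health-Bot-0.5B-r32a64e3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
```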
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
YoongCheyang/qwen2vl-7b-2MoEP | YoongCheyang | "2025-02-25T12:52:31Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:Qwen/Qwen2-VL-7B-Instruct",
"base_model:finetune:Qwen/Qwen2-VL-7B-Instruct",
"endpoints_compatible",
"region:us"
] | null | "2025-02-25T11:56:30Z" | ---
base_model: Qwen/Qwen2-VL-7B-Instruct
library_name: transformers
model_name: qwen2vl-7b-2MoEP
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for qwen2vl-7b-2MoEP
This model is a fine-tuned version of [Qwen/Qwen2-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2-VL-7B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="YoongCheyang/qwen2vl-7b-2MoEP", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/zh4ngggg/huggingface/runs/t6c454h1)
This model was trained with SFT.
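A minimal sketch of what such a TRL SFT run looks like; the dataset name below is a placeholder and the authors' actual training script is not published:
```python
# Sketch (assumption): a minimal TRL SFT run for the base model above.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("trl-lib/Capybara", split="train")  # placeholder dataset

trainer = SFTTrainer(
    model="Qwen/Qwen2-VL-7B-Instruct",
    args=SFTConfig(output_dir="qwen2vl-7b-2MoEP"),
    train_dataset=dataset,
)
trainer.train()
```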
### Framework versions
- TRL: 0.16.0.dev0
- Transformers: 4.50.0.dev0
- Pytorch: 2.4.1+cu121
- Datasets: 3.3.2
- Tokenizers: 0.21.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
AlignmentResearch/robust_llm_pythia-tt-14m-mz-ada-v3-ch-126000 | AlignmentResearch | "2024-03-22T21:07:57Z" | 106 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-classification",
"generated_from_trainer",
"base_model:EleutherAI/pythia-14m",
"base_model:finetune:EleutherAI/pythia-14m",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-03-22T21:07:51Z" | ---
tags:
- generated_from_trainer
base_model: EleutherAI/pythia-14m
model-index:
- name: robust_llm_pythia-tt-14m-mz-ada-v3-ch-126000
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# robust_llm_pythia-tt-14m-mz-ada-v3-ch-126000
This model is a fine-tuned version of [EleutherAI/pythia-14m](https://huggingface.co/EleutherAI/pythia-14m) on an unknown dataset.
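A minimal inference sketch (an assumption, not provided by the authors), based on the model's `text-classification` pipeline tag:
```python
# Sketch: loading this classifier for inference with transformers.
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="AlignmentResearch/robust_llm_pythia-tt-14m-mz-ada-v3-ch-126000",
)
print(clf("Example input to classify."))
```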
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.17.0
- Tokenizers 0.15.2
|