modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card
---|---|---|---|---|---|---|---|---|---
duyvt6663/vietcuna-3b_1024 | duyvt6663 | 2023-11-09T15:36:55Z | 0 | 0 | null | [
"safetensors",
"generated_from_trainer",
"region:us"
]
| null | 2023-10-24T06:11:59Z | ---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vietcuna-3b_2048
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vietcuna-3b_2048
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5250
- Accuracy: 0.7375
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a rough `TrainingArguments` equivalent is sketched after the list):
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.18
- training_steps: 1000
- mixed_precision_training: Native AMP
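For readers reproducing this setup with the 🤗 `Trainer`, the list above maps roughly onto the following `TrainingArguments` sketch, with `output_dir` as a placeholder (the Adam betas/epsilon shown above are the library defaults):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="vietcuna-3b_2048",    # placeholder, not taken from the original run
    learning_rate=5e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=8,    # 4 x 8 = total train batch size 32
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.18,
    max_steps=1000,
    fp16=True,                        # "Native AMP" mixed precision
)
```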
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5694 | 1.05 | 50 | 0.5834 | 0.7087 |
| 0.5614 | 2.1 | 100 | 0.5772 | 0.7165 |
| 0.5475 | 3.15 | 150 | 0.5684 | 0.7165 |
| 0.5503 | 4.2 | 200 | 0.5605 | 0.7087 |
| 0.5305 | 5.25 | 250 | 0.5784 | 0.7192 |
| 0.5353 | 6.3 | 300 | 0.5451 | 0.7323 |
| 0.5063 | 7.35 | 350 | 0.5441 | 0.7270 |
| 0.5141 | 8.4 | 400 | 0.5365 | 0.7244 |
| 0.5035 | 9.45 | 450 | 0.5354 | 0.7297 |
| 0.493 | 10.5 | 500 | 0.5322 | 0.7297 |
| 0.4763 | 11.55 | 550 | 0.5299 | 0.7375 |
| 0.5063 | 12.6 | 600 | 0.5295 | 0.7375 |
| 0.4787 | 13.65 | 650 | 0.5280 | 0.7297 |
| 0.4841 | 14.7 | 700 | 0.5266 | 0.7375 |
| 0.4732 | 15.75 | 750 | 0.5283 | 0.7297 |
| 0.4801 | 16.8 | 800 | 0.5259 | 0.7375 |
| 0.4651 | 17.85 | 850 | 0.5256 | 0.7375 |
| 0.4726 | 18.9 | 900 | 0.5260 | 0.7323 |
| 0.4758 | 19.95 | 950 | 0.5248 | 0.7375 |
| 0.4701 | 21.0 | 1000 | 0.5250 | 0.7375 |
### Framework versions
- Transformers 4.35.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
|
ArtiKitten/q-Taxi-v3 | ArtiKitten | 2023-11-09T15:34:12Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-11-09T15:34:10Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gymnasium as gym  # assumes gymnasium; use `import gym` on older setups

# `load_from_hub` is the helper used when this model was pushed (e.g., from the Deep RL Course notebook)
model = load_from_hub(repo_id="ArtiKitten/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
ayuff/layoutlmv3-finetuned-cord_100 | ayuff | 2023-11-09T15:31:26Z | 4 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"layoutlmv3",
"token-classification",
"generated_from_trainer",
"dataset:cord-layoutlmv3",
"base_model:microsoft/layoutlmv3-base",
"base_model:finetune:microsoft/layoutlmv3-base",
"license:cc-by-nc-sa-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2023-11-09T14:35:04Z | ---
license: cc-by-nc-sa-4.0
base_model: microsoft/layoutlmv3-base
tags:
- generated_from_trainer
datasets:
- cord-layoutlmv3
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: layoutlmv3-finetuned-cord_100
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: cord-layoutlmv3
type: cord-layoutlmv3
config: cord
split: test
args: cord
metrics:
- name: Precision
type: precision
value: 0.9458054936896808
- name: Recall
type: recall
value: 0.9535928143712575
- name: F1
type: f1
value: 0.9496831904584422
- name: Accuracy
type: accuracy
value: 0.9588285229202037
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlmv3-finetuned-cord_100
This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on the cord-layoutlmv3 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2033
- Precision: 0.9458
- Recall: 0.9536
- F1: 0.9497
- Accuracy: 0.9588
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 5
- eval_batch_size: 5
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 2500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.56 | 250 | 1.0015 | 0.7227 | 0.7822 | 0.7513 | 0.7963 |
| 1.3862 | 3.12 | 500 | 0.5334 | 0.8591 | 0.8765 | 0.8677 | 0.8837 |
| 1.3862 | 4.69 | 750 | 0.3689 | 0.8925 | 0.9072 | 0.8998 | 0.9164 |
| 0.3835 | 6.25 | 1000 | 0.2877 | 0.9281 | 0.9371 | 0.9326 | 0.9431 |
| 0.3835 | 7.81 | 1250 | 0.2506 | 0.9312 | 0.9424 | 0.9368 | 0.9452 |
| 0.2048 | 9.38 | 1500 | 0.2373 | 0.9480 | 0.9543 | 0.9511 | 0.9554 |
| 0.2048 | 10.94 | 1750 | 0.2184 | 0.9379 | 0.9491 | 0.9435 | 0.9542 |
| 0.1365 | 12.5 | 2000 | 0.2057 | 0.9393 | 0.9506 | 0.9449 | 0.9567 |
| 0.1365 | 14.06 | 2250 | 0.2024 | 0.9487 | 0.9543 | 0.9515 | 0.9576 |
| 0.1067 | 15.62 | 2500 | 0.2033 | 0.9458 | 0.9536 | 0.9497 | 0.9588 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
LazzeKappa/L08 | LazzeKappa | 2023-11-09T15:29:39Z | 0 | 0 | null | [
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:finetune:meta-llama/Llama-2-7b-hf",
"region:us"
]
| null | 2023-11-02T21:59:54Z | ---
base_model: meta-llama/Llama-2-7b-hf
tags:
- generated_from_trainer
model-index:
- name: L08
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# L08
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4935
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.5752 | 1.0 | 46 | 0.5406 |
| 0.5341 | 2.0 | 92 | 0.5026 |
| 0.5516 | 3.0 | 138 | 0.4957 |
| 0.4672 | 4.0 | 184 | 0.4935 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3
|
ryantaw/bert-small-finetuned-finetuned | ryantaw | 2023-11-09T15:26:37Z | 5 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:ryantaw/bert-small-finetuned",
"base_model:finetune:ryantaw/bert-small-finetuned",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-11-09T03:11:16Z | ---
license: mit
base_model: ryantaw/bert-small-finetuned
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert-small-finetuned-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-small-finetuned-finetuned
This model is a fine-tuned version of [ryantaw/bert-small-finetuned](https://huggingface.co/ryantaw/bert-small-finetuned) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0767
- Accuracy: 0.6119
- F1 Score: 0.6156
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 86
- eval_batch_size: 86
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Score |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|
| 0.7125 | 1.0 | 18 | 1.0136 | 0.6011 | 0.5997 |
| 0.604 | 2.0 | 36 | 1.0198 | 0.6038 | 0.6058 |
| 0.5421 | 3.0 | 54 | 1.0517 | 0.6065 | 0.6068 |
| 0.4724 | 4.0 | 72 | 1.0767 | 0.6119 | 0.6156 |
| 0.42 | 5.0 | 90 | 1.1184 | 0.5768 | 0.5751 |
| 0.3823 | 6.0 | 108 | 1.1217 | 0.5876 | 0.5881 |
| 0.3312 | 7.0 | 126 | 1.1425 | 0.6065 | 0.6053 |
| 0.3045 | 8.0 | 144 | 1.1760 | 0.6065 | 0.6095 |
| 0.2662 | 9.0 | 162 | 1.2044 | 0.6065 | 0.6090 |
| 0.2403 | 10.0 | 180 | 1.2143 | 0.6011 | 0.6011 |
| 0.2308 | 11.0 | 198 | 1.2394 | 0.5903 | 0.5927 |
| 0.2053 | 12.0 | 216 | 1.2589 | 0.6038 | 0.6068 |
| 0.1808 | 13.0 | 234 | 1.2895 | 0.6065 | 0.6071 |
| 0.1599 | 14.0 | 252 | 1.3144 | 0.6065 | 0.6086 |
| 0.1497 | 15.0 | 270 | 1.3386 | 0.5930 | 0.5951 |
| 0.1383 | 16.0 | 288 | 1.3608 | 0.5903 | 0.5931 |
| 0.1321 | 17.0 | 306 | 1.3624 | 0.5876 | 0.5888 |
| 0.1183 | 18.0 | 324 | 1.3810 | 0.5930 | 0.5945 |
| 0.1196 | 19.0 | 342 | 1.3827 | 0.5903 | 0.5927 |
| 0.1181 | 20.0 | 360 | 1.3805 | 0.5903 | 0.5920 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
chineidu/bert-finetuned-ner | chineidu | 2023-11-09T15:25:07Z | 15 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2023-10-27T04:19:33Z | ---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: validation
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9319043693322341
- name: Recall
type: recall
value: 0.9511948838774823
- name: F1
type: f1
value: 0.941450820354793
- name: Accuracy
type: accuracy
value: 0.9863130629304763
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0591
- Precision: 0.9319
- Recall: 0.9512
- F1: 0.9415
- Accuracy: 0.9863
## Model description
More information needed
## Intended uses & limitations
More information needed
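The card does not include a usage snippet, but as a standard token-classification checkpoint it should work with the `transformers` pipeline; a minimal sketch (the example sentence is arbitrary):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="chineidu/bert-finetuned-ner",
    aggregation_strategy="simple",
)
print(ner("Hugging Face is based in New York City."))
```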
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0791 | 1.0 | 1756 | 0.0664 | 0.9101 | 0.9371 | 0.9234 | 0.9816 |
| 0.0398 | 2.0 | 3512 | 0.0604 | 0.9274 | 0.9483 | 0.9378 | 0.9854 |
| 0.025 | 3.0 | 5268 | 0.0591 | 0.9319 | 0.9512 | 0.9415 | 0.9863 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
ArtiKitten/ppo-LunarLander-v2 | ArtiKitten | 2023-11-09T15:03:28Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-11-09T15:03:05Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 265.42 +/- 23.36
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
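Until the snippet above is filled in, a minimal loading sketch along the usual `huggingface_sb3` lines might look like this (the checkpoint filename is an assumption; check the repository's file list):
```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename is assumed; replace it with the actual .zip stored in this repo.
checkpoint = load_from_hub(repo_id="ArtiKitten/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
obs, _ = env.reset()
action, _ = model.predict(obs, deterministic=True)
```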
|
DContrerasF/ppo-Huggy | DContrerasF | 2023-11-09T15:02:38Z | 7 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
]
| reinforcement-learning | 2023-11-09T15:02:33Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: DContrerasF/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
audreyt/Taiwan-LLM-13B-v2.0-chat-GGUF | audreyt | 2023-11-09T14:58:04Z | 77 | 7 | transformers | [
"transformers",
"gguf",
"text-generation",
"zh",
"license:apache-2.0",
"region:us"
]
| text-generation | 2023-11-09T14:51:42Z | ---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
license: apache-2.0
language:
- zh
widget:
- text: >-
A chat between a curious user and an artificial intelligence assistant.
The assistant gives helpful, detailed, and polite answers to the user's
questions. USER: 你好,請問你可以幫我寫一封推薦信嗎? ASSISTANT:
library_name: transformers
pipeline_tag: text-generation
inference: false
quantized_by: audreyt
---
# Taiwan-LLM-13B-v2.0-chat-GGUF - GGUF
- Model creator: [Yen-Ting Lin](https://huggingface.co/yentinglin)
- Original model: [Taiwan-LLM-13B-v2.0-chat](https://huggingface.co/yentinglin/Taiwan-LLM-13B-v2.0-chat)
## Description
This repo contains GGUF format model files for Yen-Ting Lin's [Taiwan LLM based on LLaMa2-13b](https://huggingface.co/yentinglin/Taiwan-LLM-13B-v2.0-chat).
Any utilization of the Taiwan LLM repository mandates the explicit acknowledgment and attribution to the original author.
Use of Taiwan LLM must explicitly acknowledge and credit Ubitus K.K. (優必達株式會社) as well as the original author.
## About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
The key benefit of GGUF is that it is an extensible, future-proof format which stores more information about the model as metadata. It also includes significantly improved tokenization code, including for the first time full support for special tokens. This should improve performance, especially with models that use new special tokens and implement custom prompt templates.
As of August 25th, here is a list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp).
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI. Supports GGUF with GPU acceleration via the ctransformers backend - llama-cpp-python backend should work soon too.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), now supports GGUF as of release 1.41! A powerful GGML web UI, with full GPU accel. Especially good for storytelling.
* [LM Studio](https://lmstudio.ai/), version 0.2.2 and later support GGUF. A fully featured local GUI with GPU acceleration on both Windows (NVidia and AMD), and macOS.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), should now work, choose the `c_transformers` backend. A great web UI with many interesting features. Supports CUDA GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), now supports GGUF as of version 0.2.24! A Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), supports GGUF as of version 0.1.79. A Python library with GPU accel, LangChain support, and OpenAI-compatible API server (a minimal loading sketch follows this list).
* [candle](https://github.com/huggingface/candle), added GGUF support on August 22nd. Candle is a Rust ML framework with a focus on performance, including GPU support, and ease of use.
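As a quick illustration of the Python route above, loading one of these GGUF files with `llama-cpp-python` might look like the following sketch (the `.gguf` filename is a placeholder; use one of the files actually published in this repo):
```python
from llama_cpp import Llama

# Placeholder filename: substitute an actual .gguf file from this repository.
llm = Llama(model_path="taiwan-llm-13b-v2.0-chat.Q4_K_M.gguf", n_ctx=2048)

prompt = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "USER: 你好,請問你可以幫我寫一封推薦信嗎? ASSISTANT:"
)
output = llm(prompt, max_tokens=256)
print(output["choices"][0]["text"])
```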
<!-- footer start -->
<!-- footer end -->
# Original model card
---
# 🌟 Check out the new [Taiwan-LLM Demo Chat-UI](http://www.twllm.com) 🌟
# Taiwan LLM based on LLaMa2-13b
Continued pretraining on 20 billion tokens in Traditional Mandarin and instruction fine-tuning on millions of conversations.
This version does NOT include CommonCrawl.
# Collaboration with Ubitus K.K. 💪💪💪
Taiwan LLM v2 is conducted in collaboration with [Ubitus K.K.](http://ubitus.net), which provides valuable technical support and compute resources for the project. |
ashtrevi/flan-t5-base-idn-gen-ext-exp | ashtrevi | 2023-11-09T14:54:42Z | 5 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"arxiv:1910.09700",
"base_model:google/flan-t5-base",
"base_model:adapter:google/flan-t5-base",
"region:us"
]
| null | 2023-11-08T18:59:14Z | ---
library_name: peft
base_model: google/flan-t5-base
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training (an equivalent `BitsAndBytesConfig` is sketched after the list):
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
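In code, this corresponds roughly to the following `BitsAndBytesConfig` sketch; fields not set here keep the library defaults shown in the list:
```python
from transformers import BitsAndBytesConfig

# 8-bit loading with the thresholds listed above; the 4-bit options stay disabled.
bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    llm_int8_threshold=6.0,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
)
```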
### Framework versions
- PEFT 0.7.0.dev0
|
JunghwanRo/dqn-SpaceInvadersNoFrameskip-v4 | JunghwanRo | 2023-11-09T14:51:02Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-11-09T13:24:52Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 615.50 +/- 196.65
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga JunghwanRo -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga JunghwanRo -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga JunghwanRo
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
Eswann/q-FrozenLake-v1-4x4-noSlippery | Eswann | 2023-11-09T14:49:52Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-11-09T14:49:50Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gymnasium as gym  # assumes gymnasium; use `import gym` on older setups

# `load_from_hub` is the helper used when this model was pushed (e.g., from the Deep RL Course notebook)
model = load_from_hub(repo_id="Eswann/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
sultan/ArabicT5-49GB-small | sultan | 2023-11-09T14:46:18Z | 19 | 0 | transformers | [
"transformers",
"pytorch",
"jax",
"t5",
"text2text-generation",
"arxiv:2109.10686",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-10-23T16:24:58Z | # ArabicT5: Efficient Adaptation of T5 on Arabic Language
# Model Description
This model adapts T5 to the Arabic language by pre-training T5 on:
- Arabic Wikipedia.
- Marefa encyclopedia.
- Hindawi Books.
- A collection of Arabic news.
- OSCAR Dataset (32GB).
The total corpus size is 49GB. This model uses an efficient implementation of T5 that reduces fine-tuning time and memory usage [Link](https://arxiv.org/abs/2109.10686), and uses T5x for pre-training [Link](https://github.com/google-research/t5x).
## Pre-training Settings and Results on TyDi QA Development Dataset ( Model in this card is highlighted in bold )
| Model | Hidden Layer | Atten. head | Atten. Layers | Vocab | Hardware |Training Steps | Batch | Train x Batch Factor |Corpora |
|------------------|--------------|-------------|---------------|-------|-----------|---------------|--------|-----------------------|------------------------|
| AraT5-base | 768 | 12 | 12 | 110K |TPUv3-8 | 1M | 128 | 1.0x |248GB 29B tokens (MSA + Tweets) |
| AraT5-msa-base | 768 | 12 | 12 | 110K |TPUv3-8 | 1M | 128 | 1.0x |70GB (MSA) |
| AraT5-tweets-base| 768 | 12 | 12 | 110K |TPUv3-8 | 1M | 128 | 1.0x |178GB (Tweets) |
| AraBART-base | 768 | 12 | 12 | 50K | 128 V100 GPUs (60h) |25 epochs| - | - |73GB (MSA) |
| mT5-base | 768 | 12 | 12 | 250K |TPUv3-32 | 1M | 1024 | 8.0x |6.3T tokens (mC4)|
| ArabicT5-17GB-small | 512 | 8 | 20 | 32K |TPUv3-32 | 256K | 256 | 0.5x |17GB (MSA) |
| ArabicT5-49GB-small | 512 | 8 | 16 | 32K |TPUv3-64 | 500K | 256 | 1.0x |49GB (MSA + OSCAR) |
| ArabicT5-17GB-base | 768 | 12 | 16 | 32K |TPUv3-128 | 500K | 512 | 2.0x |17GB (MSA) |
| ArabicT5-49GB-base | 768 | 12 | 16 | 32K |TPUv3-64 | 500K | 256 | 1.0x |49GB (MSA + OSCAR) |
| ArabicT5-17GB-large | 768 | 12 | 36 | 32K |TPUv3-128 | 500K | 512 | 2.0x |17GB (MSA) |
## Results on TyDi QA, HARD, Sentiment Analysis, Sarcasm Detection ( Best Score is highlighted in bold )
| Model Type | Model | <center>TyDi QA| <center>HARD| <center>ArSarcasm-v2-Sentiment| <center>ArSarcasm-v2-Sarcasm| XL-SUM |
|--------------|------------------------|---------------------|----------------|-----------------|------------|------------|
| Generative | AraT5-base | <center>70.4/84.2 |<center>96.5|<center>69.7/72.6|<center>60.4|<center>30.3|
| Generative | AraT5-msa-base | <center>70.9/84.0 |<center>96.5|<center>70.0/72.7|<center>60.7|<center>27.4|
| Generative | AraT5-tweets-base | <center>65.1/79.0 |<center>96.3|<center>70.7/73.5|<center>61.1|<center>25.1|
| Generative | mT5-base | <center>72.2/84.1 |<center>96.2|<center>67.3/68.8|<center>52.2|<center>25.7|
| Generative | AraBART-base | <center>48.8/71.2 |<center>96.1|<center>66.2/68.2|<center>56.3|<center>31.2|
| Generative | ArabicT5-17GB-small | <center>70.8/84.8 |<center>96.4|<center>68.9/71.2|<center>58.9|<center>29.2|
| Generative | ArabicT5-49GB-small | <center>72.4/85.1 |<center>96.4|<center>70.2/73.4|<center>61.0|<center>30.2|
| Generative | ArabicT5-17GB-base | <center>73.3/86.1 |<center>96.4|<center>70.4/73.0|<center>59.8|<center>30.3|
| Generative | ArabicT5-49GB-base | <center>72.1/85.1 |<center>96.5|<center>71.3/74.1|<center>60.4|<center>30.9|
| Generative | ArabicT5-17GB-large | <center>**75.5/87.1** |<center>**96.5**| <center>**72.2/75.2**|<center>**61.7**|<center>**31.7**|
## ArabicT5 vs Extractive Arabic BERT-like Models
| Model Type | Model | <center>TyDi QA| <center>HARD| <center>ArSarcasm-v2-Sentiment| <center>ArSarcasm-v2-Sarcasm| XL-SUM |
|--------------|------------------------|---------------------|----------------|-----------------|------------|------------|
| Generative | ArabicT5-17GB-small | <center>70.8/84.8 |<center>96.4|<center>68.9/71.2|<center>58.9|<center>29.2|
| Generative | ArabicT5-49GB-small | <center>72.4/85.1 |<center>96.4|<center>70.2/73.4|<center>61.0|<center>30.2|
| Generative | ArabicT5-17GB-base | <center>73.3/86.1 |<center>96.4|<center>70.4/73.0|<center>59.8|<center>30.3|
| Generative | ArabicT5-49GB-base | <center>72.1/85.1 |<center>96.5|<center>71.3/74.1|<center>60.4|<center>30.9|
| Generative | ArabicT5-17GB-large | <center>75.5/87.1 |<center>96.5| <center>72.2/75.2|<center>61.7|<center>31.7|
| Extractive | AraBERTv02-Large | <center>73.7/86.0 |<center>96.4|<center>69.5/71.8|<center>-|<center> N/A|
| Extractive | AraBERTv2-Large | <center>64.5/82.2 |<center>96.5|<center>70.0/72.4|<center>-|<center> N/A|
| Extractive | AraELECTRA-base | <center>74.9/86.7 |<center>96.4|<center>69.6/72.3|<center>-|<center>N/A|
| Extractive | ArabicTransformer-base | <center>**75.4/87.2** |<center>**96.6**|<center>70.8/74.0|<center>-|<center> N/A|
Evaluation Metrics: TyDi QA (EM/F1), HARD (Accuracy), Sentiment Analysis (Accuracy / F1-PN positive-negative), Sarcasm Detection (F1-sarcastic), XL-SUM (Rouge-L with Stemmer).
You can download the full details of our grid search for all models in all tasks above from this link: https://github.com/salrowili/ArabicT5/raw/main/ArabicT5_Grid_Search.zip
For the XL-Sum task, we chose the best run for each model using the eval set. We use the official XL-Sum evaluation script, which applies a stemmer; this may show better results than papers that do not use the stemmer. The official XL-Sum paper itself uses the stemmer.
Reported numbers for extractive models are taken from the ArabicTransformer paper --> https://aclanthology.org/2021.findings-emnlp.108/
# FineTuning our efficient ArabicT5-49GB-Small model with Torch on a 3070 laptop GPU ###
[![Open In Colab][COLAB]](https://colab.research.google.com/github/salrowili/ArabicT5/blob/main/ArabicT5_49GB_Small_on_3070_Laptop_GPU.ipynb)
If you are running your code on a laptop GPU (e.g., a gaming laptop) or with limited GPU memory, we recommend using our ArabicT5-49GB-Small model, which was the only model from the list that we were able to run on a 3070 laptop card with a batch size of 8. We managed to achieve an F1 score of 85.391 (slightly better than our FLAX code) on the TyDi QA task.
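Independently of the notebooks, the pre-trained checkpoint can be loaded with `transformers`; a minimal sketch, assuming the repository ships standard T5 weights and tokenizer (the model still needs task-specific fine-tuning before use):
```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("sultan/ArabicT5-49GB-small")
model = T5ForConditionalGeneration.from_pretrained("sultan/ArabicT5-49GB-small")

# Pre-trained only: fine-tune on your downstream task (e.g., TyDi QA) before generating.
```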
# FineTuning our ArabicT5 model on generative and abstractive tasks with FLAX ###
[![Open In Colab][COLAB]](https://colab.research.google.com/github/salrowili/ArabicT5/blob/main/FineTuning_ArabicT5_with_FLAX_and_TPU.ipynb)
[COLAB]: https://colab.research.google.com/assets/colab-badge.svg
# FineTuning ArabicT5 on a free Kaggle TPUv3-8 ###
https://www.kaggle.com/code/sultanalrowili/arabict5-on-tydi-with-free-tpuv3-8-with-kaggle
# Continual Pre-Training of ArabicT5 with T5x
If you want to continue pre-training ArabicT5 on your own data, we have uploaded the raw T5x checkpoint to this link: https://huggingface.co/sultan/ArabicT5-49GB-base/blob/main/arabict5_49GB_base_t5x.tar.gz
We will soon share a tutorial on how you can do that for free with a Kaggle TPU.
## GitHub Page
https://github.com/salrowili/ArabicT5
# Acknowledgment
We would like to acknowledge the support of the TPU Research Cloud (TRC) team in granting us access to TPUv3 units.
# Paper
[Generative Approach for Gender-Rewriting Task with ArabicT5](https://aclanthology.org/2022.wanlp-1.55/)
# Citation
```bibtex
@inproceedings{alrowili-shanker-2022-generative,
title = "Generative Approach for Gender-Rewriting Task with {A}rabic{T}5",
author = "Alrowili, Sultan and
Shanker, Vijay",
booktitle = "Proceedings of the The Seventh Arabic Natural Language Processing Workshop (WANLP)",
month = dec,
year = "2022",
address = "Abu Dhabi, United Arab Emirates (Hybrid)",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.wanlp-1.55",
pages = "491--495",
abstract = "Addressing the correct gender in generative tasks (e.g., Machine Translation) has been an overlooked issue in the Arabic NLP. However, the recent introduction of the Arabic Parallel Gender Corpus (APGC) dataset has established new baselines for the Arabic Gender Rewriting task. To address the Gender Rewriting task, we first pre-train our new Seq2Seq ArabicT5 model on a 17GB of Arabic Corpora. Then, we continue pre-training our ArabicT5 model on the APGC dataset using a newly proposed method. Our evaluation shows that our ArabicT5 model, when trained on the APGC dataset, achieved competitive results against existing state-of-the-art methods. In addition, our ArabicT5 model shows better results on the APGC dataset compared to other Arabic and multilingual T5 models.",
}
``` |
acrenn/ppo-Huggy | acrenn | 2023-11-09T14:44:38Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
]
| reinforcement-learning | 2023-11-09T13:41:53Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: acrenn/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
blossominkyung/dqn-SpaceInvadersNoFrameskip-v4 | blossominkyung | 2023-11-09T14:41:06Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-11-09T14:40:24Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 648.00 +/- 342.25
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga blossominkyung -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga blossominkyung -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga blossominkyung
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
machinelearningzuu/phi-1_5-finetuned-sql-injection | machinelearningzuu | 2023-11-09T14:34:00Z | 19 | 1 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"mixformer-sequential",
"text-generation",
"generated_from_trainer",
"custom_code",
"base_model:microsoft/phi-1_5",
"base_model:finetune:microsoft/phi-1_5",
"license:other",
"autotrain_compatible",
"region:us"
]
| text-generation | 2023-11-09T12:46:53Z | ---
license: other
base_model: microsoft/phi-1_5
tags:
- generated_from_trainer
model-index:
- name: phi-1_5-finetuned-sql-injection
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# phi-1_5-finetuned-sql-injection
This model is a fine-tuned version of [microsoft/phi-1_5](https://huggingface.co/microsoft/phi-1_5) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 1000
### Training results
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
ayoub999/LayoutLMv3_5_entities_filtred_14 | ayoub999 | 2023-11-09T14:30:03Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"layoutlmv3",
"token-classification",
"generated_from_trainer",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2023-11-03T13:06:37Z | ---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
model-index:
- name: LayoutLMv3_5_entities_filtred_14
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# LayoutLMv3_5_entities_filtred_14
This model is a fine-tuned version of [microsoft/layoutlmv3-large](https://huggingface.co/microsoft/layoutlmv3-large) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 2000
### Framework versions
- Transformers 4.29.2
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.13.3
|
LazzeKappa/L06 | LazzeKappa | 2023-11-09T14:28:50Z | 0 | 0 | null | [
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:finetune:meta-llama/Llama-2-7b-hf",
"region:us"
]
| null | 2023-11-02T16:35:12Z | ---
base_model: meta-llama/Llama-2-7b-hf
tags:
- generated_from_trainer
model-index:
- name: L06
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# L06
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4009
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.4306 | 1.0 | 92 | 0.4169 |
| 0.3924 | 2.0 | 184 | 0.4043 |
| 0.3683 | 3.0 | 276 | 0.4009 |
| 0.3561 | 4.0 | 368 | 0.4009 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3
|
arieg/spec_cls_80_v2 | arieg | 2023-11-09T14:18:46Z | 5 | 0 | transformers | [
"transformers",
"tf",
"vit",
"image-classification",
"generated_from_keras_callback",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2023-11-09T14:02:08Z | ---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_keras_callback
model-index:
- name: arieg/spec_cls_80_v2
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# arieg/spec_cls_80_v2
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.0698
- Validation Loss: 1.0517
- Train Accuracy: 1.0
- Epoch: 9
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (an equivalent `create_optimizer` call is sketched after the list):
- optimizer: {'name': 'AdamWeightDecay', 'clipnorm': 1.0, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 14400, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
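These optimizer settings match what `transformers.create_optimizer` would produce; a sketch, assuming no warmup steps (none appear in the schedule above):
```python
from transformers import create_optimizer

# AdamWeightDecay with a linear (polynomial, power=1.0) decay to 0 over 14400 steps.
optimizer, lr_schedule = create_optimizer(
    init_lr=3e-5,
    num_train_steps=14_400,
    num_warmup_steps=0,
    weight_decay_rate=0.01,
)
```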
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 4.2243 | 4.0115 | 0.575 | 0 |
| 3.6964 | 3.4678 | 0.9125 | 1 |
| 3.1703 | 2.9932 | 0.9938 | 2 |
| 2.7155 | 2.5826 | 0.9938 | 3 |
| 2.3313 | 2.2229 | 1.0 | 4 |
| 2.0025 | 1.9208 | 1.0 | 5 |
| 1.7153 | 1.6639 | 1.0 | 6 |
| 1.4721 | 1.4462 | 1.0 | 7 |
| 1.2586 | 1.2279 | 1.0 | 8 |
| 1.0698 | 1.0517 | 1.0 | 9 |
### Framework versions
- Transformers 4.35.0
- TensorFlow 2.14.0
- Datasets 2.14.6
- Tokenizers 0.14.1
|
piecurus/detr-resnet-50_cppe5 | piecurus | 2023-11-09T14:06:46Z | 33 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"detr",
"object-detection",
"generated_from_trainer",
"dataset:cppe-5",
"base_model:facebook/detr-resnet-50",
"base_model:finetune:facebook/detr-resnet-50",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| object-detection | 2023-11-09T13:46:55Z | ---
license: apache-2.0
base_model: facebook/detr-resnet-50
tags:
- generated_from_trainer
datasets:
- cppe-5
model-index:
- name: detr-resnet-50_cppe5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr-resnet-50_cppe5
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on the cppe-5 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
dacorvo/llama-test-upload-file-by-file | dacorvo | 2023-11-09T13:58:41Z | 4 | 0 | transformers | [
"transformers",
"llama",
"text-generation",
"facebook",
"meta",
"pytorch",
"llama-2",
"inferentia2",
"neuron",
"en",
"autotrain_compatible",
"region:us"
]
| text-generation | 2023-11-09T13:56:27Z | ---
language:
- en
pipeline_tag: text-generation
inference: false
tags:
- facebook
- meta
- pytorch
- llama
- llama-2
- inferentia2
- neuron
---
# Neuronx model for [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf)
This repository contains [**AWS Inferentia2**](https://aws.amazon.com/ec2/instance-types/inf2/) and [`neuronx`](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/) compatible checkpoints for [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf).
You can find detailed information about the base model on its [Model Card](https://huggingface.co/meta-llama/Llama-2-7b-hf).
This model has been exported to the `neuron` format using specific `input_shapes` and `compiler` parameters detailed in the paragraphs below.
Please refer to the 🤗 `optimum-neuron` [documentation](https://huggingface.co/docs/optimum-neuron/main/en/guides/models#configuring-the-export-of-a-generative-model) for an explanation of these parameters.
## Usage on Amazon SageMaker
_coming soon_
## Usage with 🤗 `optimum-neuron`
```python
>>> from optimum.neuron import pipeline
>>> p = pipeline('text-generation', 'aws-neuron/Llama-2-7b-hf-neuron-latency')
>>> p("My favorite place on earth is", max_new_tokens=64, do_sample=True, top_k=50)
[{'generated_text': 'My favorite place on earth is the ocean. It is where I feel most
at peace. I love to travel and see new places. I have a'}]
```
This repository contains tags specific to versions of `neuronx`. When using it with 🤗 `optimum-neuron`, use the repo revision matching the version of `neuronx` you are running in order to load the right serialized checkpoints.
## Arguments passed during export
**input_shapes**
```json
{
"batch_size": 1,
"sequence_length": 2048,
}
```
**compiler_args**
```json
{
"auto_cast_type": "fp16",
"num_cores": 24,
}
```
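With 🤗 `optimum-neuron`, re-exporting the base model with these shapes would look roughly like the sketch below (argument names mirror the JSON above; consult the export guide linked earlier for the exact API of your version):
```python
from optimum.neuron import NeuronModelForCausalLM

# Compile the base checkpoint for Inferentia2 with the input shapes and compiler args above.
model = NeuronModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",
    export=True,
    batch_size=1,
    sequence_length=2048,
    num_cores=24,
    auto_cast_type="fp16",
)
model.save_pretrained("llama-2-7b-hf-neuron")  # output directory name is a placeholder
```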
|
1aurent/phikon-distil-vit-tiny-patch16-224-kather2016 | 1aurent | 2023-11-09T13:57:43Z | 6 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"feature-extraction",
"biology",
"cancer",
"owkin",
"histology",
"dataset:1aurent/Kather-texture-2016",
"base_model:1aurent/phikon-finetuned-lora-kather2016",
"base_model:finetune:1aurent/phikon-finetuned-lora-kather2016",
"license:other",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2023-11-09T11:00:55Z | ---
library_name: transformers
base_model: 1aurent/phikon-finetuned-lora-kather2016
tags:
- feature-extraction
- image-classification
- biology
- cancer
- owkin
- histology
model-index:
- name: owkin_pancancer
results:
- task:
type: image-classification
name: Image Classification
dataset:
name: 1aurent/Kather-texture-2016
type: image-classification
metrics:
- type: accuracy
value: 0.932
name: accuracy
verified: false
license: other
license_name: owkin-non-commercial
license_link: https://github.com/owkin/HistoSSLscaling/blob/main/LICENSE.txt
pipeline_tag: image-classification
datasets:
- 1aurent/Kather-texture-2016
metrics:
- accuracy
widget:
- src: >-
https://datasets-server.huggingface.co/assets/1aurent/Kather-texture-2016/--/default/train/0/image/image.jpg
example_title: adipose
---
# Model card for phikon-distil-vit-tiny-patch16-224-kather2016
This model is a distilled version of [owkin/phikon](https://huggingface.co/owkin/phikon) to a TinyViT on the [1aurent/Kather-texture-2016](https://huggingface.co/datasets/1aurent/Kather-texture-2016) dataset.
## Model Usage
### Image Classification
```python
from transformers import AutoModelForImageClassification, AutoImageProcessor
from urllib.request import urlopen
from PIL import Image
# get example histology image
img = Image.open(
urlopen(
"https://datasets-server.huggingface.co/assets/1aurent/Kather-texture-2016/--/default/train/0/image/image.jpg"
)
)
# load image_processor and model from the hub
model_name = "1aurent/phikon-distil-vit-tiny-patch16-224-kather2016"
image_processor = AutoImageProcessor.from_pretrained(model_name)
model = AutoModelForImageClassification.from_pretrained(model_name)
inputs = image_processor(img, return_tensors="pt")
outputs = model(**inputs)
```
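The returned `outputs.logits` can then be mapped back to a class name via the model's label mapping, e.g.:
```python
predicted_class = outputs.logits.argmax(-1).item()
print(model.config.id2label[predicted_class])
```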
## Citation
```bibtex
@article{Filiot2023.07.21.23292757,
author = {Alexandre Filiot and Ridouane Ghermi and Antoine Olivier and Paul Jacob and Lucas Fidon and Alice Mac Kain and Charlie Saillard and Jean-Baptiste Schiratti},
title = {Scaling Self-Supervised Learning for Histopathology with Masked Image Modeling},
elocation-id = {2023.07.21.23292757},
year = {2023},
doi = {10.1101/2023.07.21.23292757},
publisher = {Cold Spring Harbor Laboratory Press},
url = {https://www.medrxiv.org/content/early/2023/09/14/2023.07.21.23292757},
eprint = {https://www.medrxiv.org/content/early/2023/09/14/2023.07.21.23292757.full.pdf},
journal = {medRxiv}
}
``` |
abhishek/zephyr-beta-math | abhishek | 2023-11-09T13:56:26Z | 1,509 | 6 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mistral",
"text-generation",
"autotrain",
"conversational",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-10-27T08:53:16Z | ---
tags:
- autotrain
- text-generation
widget:
- text: "I love AutoTrain because "
license: other
---
# Model Trained Using AutoTrain
Hello, this is a long description now. How about it? |
1aurent/phikon-distil-mobilenet_v2-kather2016 | 1aurent | 2023-11-09T13:53:37Z | 34 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"mobilenet_v2",
"image-classification",
"feature-extraction",
"biology",
"cancer",
"owkin",
"histology",
"dataset:1aurent/Kather-texture-2016",
"base_model:1aurent/phikon-finetuned-lora-kather2016",
"base_model:finetune:1aurent/phikon-finetuned-lora-kather2016",
"license:other",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2023-11-09T10:16:17Z | ---
library_name: transformers
base_model: 1aurent/phikon-finetuned-lora-kather2016
tags:
- feature-extraction
- image-classification
- biology
- cancer
- owkin
- histology
model-index:
- name: owkin_pancancer
results:
- task:
type: image-classification
name: Image Classification
dataset:
name: 1aurent/Kather-texture-2016
type: image-classification
metrics:
- type: accuracy
value: 0.928
name: accuracy
verified: false
license: other
license_name: owkin-non-commercial
license_link: https://github.com/owkin/HistoSSLscaling/blob/main/LICENSE.txt
pipeline_tag: image-classification
datasets:
- 1aurent/Kather-texture-2016
metrics:
- accuracy
widget:
- src: >-
https://datasets-server.huggingface.co/assets/1aurent/Kather-texture-2016/--/default/train/0/image/image.jpg
example_title: adipose
---
# Model card for phikon-distil-mobilenet_v2-kather2016
This model is a version of [owkin/phikon](https://huggingface.co/owkin/phikon) distilled into a MobileNet-v2 on the [1aurent/Kather-texture-2016](https://huggingface.co/datasets/1aurent/Kather-texture-2016) dataset.
## Model Usage
### Image Classification
```python
from transformers import AutoModelForImageClassification, AutoImageProcessor
from urllib.request import urlopen
from PIL import Image
# get example histology image
img = Image.open(
urlopen(
"https://datasets-server.huggingface.co/assets/1aurent/Kather-texture-2016/--/default/train/0/image/image.jpg"
)
)
# load image_processor and model from the hub
model_name = "1aurent/phikon-distil-mobilenet_v2-kather2016"
image_processor = AutoImageProcessor.from_pretrained(model_name)
model = AutoModelForImageClassification.from_pretrained(model_name)
inputs = image_processor(img, return_tensors="pt")
outputs = model(**inputs)
```
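Alternatively, the checkpoint can be queried through the high-level `pipeline` API, which wraps preprocessing, inference and label decoding; a minimal sketch reusing the PIL image loaded above:
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="1aurent/phikon-distil-mobilenet_v2-kather2016")
print(classifier(img))  # top predicted labels with scores
```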
## Citation
```bibtex
@article{Filiot2023.07.21.23292757,
author = {Alexandre Filiot and Ridouane Ghermi and Antoine Olivier and Paul Jacob and Lucas Fidon and Alice Mac Kain and Charlie Saillard and Jean-Baptiste Schiratti},
title = {Scaling Self-Supervised Learning for Histopathology with Masked Image Modeling},
elocation-id = {2023.07.21.23292757},
year = {2023},
doi = {10.1101/2023.07.21.23292757},
publisher = {Cold Spring Harbor Laboratory Press},
url = {https://www.medrxiv.org/content/early/2023/09/14/2023.07.21.23292757},
eprint = {https://www.medrxiv.org/content/early/2023/09/14/2023.07.21.23292757.full.pdf},
journal = {medRxiv}
}
``` |
dacorvo/llama-test-upload-folder | dacorvo | 2023-11-09T13:52:53Z | 4 | 0 | transformers | [
"transformers",
"llama",
"text-generation",
"facebook",
"meta",
"pytorch",
"llama-2",
"inferentia2",
"neuron",
"en",
"autotrain_compatible",
"region:us"
]
| text-generation | 2023-11-09T13:50:33Z | ---
language:
- en
pipeline_tag: text-generation
inference: false
tags:
- facebook
- meta
- pytorch
- llama
- llama-2
- inferentia2
- neuron
---
# Neuronx model for [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf)
This repository contains [**AWS Inferentia2**](https://aws.amazon.com/ec2/instance-types/inf2/) and [`neuronx`](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/) compatible checkpoints for [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf).
You can find detailed information about the base model on its [Model Card](https://huggingface.co/meta-llama/Llama-2-7b-hf).
This model has been exported to the `neuron` format using specific `input_shapes` and `compiler` parameters detailed in the paragraphs below.
Please refer to the 🤗 `optimum-neuron` [documentation](https://huggingface.co/docs/optimum-neuron/main/en/guides/models#configuring-the-export-of-a-generative-model) for an explanation of these parameters.
## Usage on Amazon SageMaker
_coming soon_
## Usage with 🤗 `optimum-neuron`
```python
>>> from optimum.neuron import pipeline
>>> p = pipeline('text-generation', 'aws-neuron/Llama-2-7b-hf-neuron-latency')
>>> p("My favorite place on earth is", max_new_tokens=64, do_sample=True, top_k=50)
[{'generated_text': 'My favorite place on earth is the ocean. It is where I feel most
at peace. I love to travel and see new places. I have a'}]
```
This repository contains tags that are specific to `neuronx` versions. When using it with 🤗 `optimum-neuron`, load the repo revision that matches the `neuronx` version you are running so the correct serialized checkpoints are used.
## Arguments passed during export
**input_shapes**
```json
{
"batch_size": 1,
"sequence_length": 2048,
}
```
**compiler_args**
```json
{
"auto_cast_type": "fp16",
"num_cores": 24,
}
```
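For reference, a checkpoint with these shapes can be produced with the `optimum-neuron` Python API; the sketch below is illustrative (not the exact command used for this repository) and assumes access to the gated base model:
```python
from optimum.neuron import NeuronModelForCausalLM

# compile the base checkpoint to the neuron format with the input shapes / compiler args above
model = NeuronModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",
    export=True,
    batch_size=1,
    sequence_length=2048,
    auto_cast_type="fp16",
    num_cores=24,
)
model.save_pretrained("llama-2-7b-hf-neuron")  # hypothetical output directory
```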
|
openai/consistency-decoder | openai | 2023-11-09T13:51:12Z | 215 | 48 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"license:mit",
"region:us"
]
| null | 2023-11-09T10:50:49Z | ---
library_name: diffusers
tags:
- stable-diffusion
license: mit
---
## Consistency Decoder
This is a decoder that can be used to improve decoding for Stable Diffusion VAEs. To know more, refer to the [DALL-E 3 technical report](https://cdn.openai.com/papers/dall-e-3.pdf).
The original code repository can be found [here](https://github.com/openai/consistencydecoder).
## Usage in 🧨 diffusers
```python
import torch
from diffusers import StableDiffusionPipeline, ConsistencyDecoderVAE

# load the consistency decoder VAE and plug it into a Stable Diffusion pipeline
vae = ConsistencyDecoderVAE.from_pretrained("openai/consistency-decoder", torch_dtype=torch.float16)
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", vae=vae, torch_dtype=torch.float16
).to("cuda")

image = pipe("horse", generator=torch.manual_seed(0)).images[0]
```
## Results
_(Taken from the original [code repository](https://github.com/openai/consistencydecoder))_
## Examples
Original Image | GAN Decoder | Consistency Decoder |
:---:|:---:|:---:|
 |  |  |
 |  |  |
 |  |  |
|
ron5569/lamma2_7b_v1 | ron5569 | 2023-11-09T13:49:26Z | 0 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-11-09T10:26:47Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0
|
Ginger1704/q-Taxi-v3 | Ginger1704 | 2023-11-09T13:44:30Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-11-09T12:51:08Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.44 +/- 2.91
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="Ginger1704/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
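A greedy rollout with the downloaded table could then look like the sketch below; the `"qtable"` and `"max_steps"` keys, and the Gymnasium-style `reset`/`step` signatures, are assumptions based on the course notebook:
```python
import numpy as np

# play one greedy episode with the loaded Q-table
state, _ = env.reset()
for _ in range(model["max_steps"]):                   # assumed key
    action = int(np.argmax(model["qtable"][state]))   # assumed key
    state, reward, terminated, truncated, _ = env.step(action)
    if terminated or truncated:
        break
```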
|
1aurent/phikon-finetuned-lora-kather2016 | 1aurent | 2023-11-09T13:42:05Z | 2 | 1 | peft | [
"peft",
"safetensors",
"feature-extraction",
"image-classification",
"biology",
"cancer",
"owkin",
"histology",
"dataset:1aurent/Kather-texture-2016",
"base_model:owkin/phikon",
"base_model:adapter:owkin/phikon",
"license:other",
"model-index",
"region:us"
]
| image-classification | 2023-11-08T19:42:56Z | ---
library_name: peft
base_model: owkin/phikon
tags:
- feature-extraction
- image-classification
- biology
- cancer
- owkin
- histology
model-index:
- name: owkin_pancancer
results:
- task:
type: image-classification
name: Image Classification
dataset:
name: 1aurent/Kather-texture-2016
type: image-classification
metrics:
- type: accuracy
value: 0.99
name: accuracy
verified: false
license: other
license_name: owkin-non-commercial
license_link: https://github.com/owkin/HistoSSLscaling/blob/main/LICENSE.txt
pipeline_tag: image-classification
datasets:
- 1aurent/Kather-texture-2016
metrics:
- accuracy
widget:
- src: https://datasets-server.huggingface.co/assets/1aurent/Kather-texture-2016/--/default/train/0/image/image.jpg
example_title: adipose
---
# Model card for phikon-finetuned-lora-kather2016
This model is a fine-tuned version of [owkin/phikon](https://huggingface.co/owkin/phikon) on the [1aurent/Kather-texture-2016](https://huggingface.co/datasets/1aurent/Kather-texture-2016) dataset.
## Model Usage
### Image Classification
```python
from transformers import AutoModelForImageClassification, AutoImageProcessor
from peft import PeftConfig, PeftModel
from urllib.request import urlopen
from PIL import Image
# get example histology image
img = Image.open(
urlopen(
"https://datasets-server.huggingface.co/assets/1aurent/Kather-texture-2016/--/default/train/0/image/image.jpg"
)
)
# load config, image_processor, base_model and lora_model from the hub
model_name = "1aurent/phikon-finetuned-lora-kather2016"
config = PeftConfig.from_pretrained(
pretrained_model_name_or_path=model_name
)
image_processor = AutoImageProcessor.from_pretrained(
pretrained_model_name_or_path=config.base_model_name_or_path
)
model = AutoModelForImageClassification.from_pretrained(
pretrained_model_name_or_path=config.base_model_name_or_path,
num_labels=8,
)
lora_model = PeftModel.from_pretrained(
model=model,
model_id=model_name
)
inputs = image_processor(img, return_tensors="pt")
outputs = lora_model(**inputs)
```
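If a single standalone checkpoint is preferred over base model plus adapter, PEFT can fold the LoRA weights back into the backbone; a short sketch with a hypothetical output directory:
```python
# merge the LoRA weights into the base model and drop the adapter wrappers
merged_model = lora_model.merge_and_unload()
merged_model.save_pretrained("phikon-kather2016-merged")  # hypothetical path
```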
## Citation
```bibtex
@article{Filiot2023.07.21.23292757,
author = {Alexandre Filiot and Ridouane Ghermi and Antoine Olivier and Paul Jacob and Lucas Fidon and Alice Mac Kain and Charlie Saillard and Jean-Baptiste Schiratti},
title = {Scaling Self-Supervised Learning for Histopathology with Masked Image Modeling},
elocation-id = {2023.07.21.23292757},
year = {2023},
doi = {10.1101/2023.07.21.23292757},
publisher = {Cold Spring Harbor Laboratory Press},
url = {https://www.medrxiv.org/content/early/2023/09/14/2023.07.21.23292757},
eprint = {https://www.medrxiv.org/content/early/2023/09/14/2023.07.21.23292757.full.pdf},
journal = {medRxiv}
}
``` |
Txinplas/Reinforce-m1 | Txinplas | 2023-11-09T13:25:13Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-11-09T13:25:00Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-m1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Jasonaron/ppo-LunarLander-v2 | Jasonaron | 2023-11-09T13:18:14Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-11-09T13:17:22Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 257.92 +/- 20.46
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
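A minimal loading sketch with `huggingface_sb3` (the checkpoint filename is an assumption and should be checked against the repository files):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# download the zipped SB3 checkpoint from the Hub and restore the agent
checkpoint = load_from_hub(
    repo_id="Jasonaron/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",  # assumed filename
)
model = PPO.load(checkpoint)
```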
|
ankekat1000/deliberative-bert-german | ankekat1000 | 2023-11-09T13:13:12Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"de",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-11-09T12:41:49Z | ---
license: cc-by-nc-sa-4.0
language:
- de
---
## Model description
This model is a fine-tuned version of the [bert-base-german-cased model by deepset](https://huggingface.co/bert-base-german-cased) to classify German-language deliberative comments.
## How to use
You can use the model with the following code.
```python
#!pip install transformers
from transformers import AutoModelForSequenceClassification, AutoTokenizer, TextClassificationPipeline
model_path = "ankekat1000/deliberative-bert-german"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForSequenceClassification.from_pretrained(model_path)
pipeline = TextClassificationPipeline(model=model, tokenizer=tokenizer)
print(pipeline('Tolle Idee. Ich denke, dass dieses Projekt Teil des Stadtforums werden sollte, damit wir darüber weiter nachdenken können!'))
```
## Training
The pre-trained model [bert-base-german-cased model by deepset](https://huggingface.co/bert-base-german-cased) was fine-tuned on a crowd-annotated dataset of 14,000 user comments labeled for deliberation in a binary classification task.
As deliberative, we defined comments that are enriching and valuable to a deliberative discussion in whole or in part, such as comments that add arguments, suggestions, or new perspectives to the discussion, or that users otherwise find stimulating or appreciative.
**Language model:** bert-base-cased (~ 12GB)
**Language:** German
**Labels:** Engaging (binary classification)
**Training data:** User comments posted to websites and facebook pages of German news media, user comments posted to online participation platforms (~ 14,000)
**Labeling procedure:** Crowd annotation
**Batch size:** 32
**Epochs:** 4
**Max. tokens length:** 512
**Infrastructure**: 1x Quadro RTX 8000
**Published**: Oct 24th, 2023
## Evaluation results
**Accuracy:** 86%
**Macro avg. F1:** 86%
| Label | Precision | Recall | F1 | Nr. comments in test set |
| ----------- | ----------- | ----------- | ----------- | ----------- |
| not deliberative | 0.87 | 0.84 | 0.86 | 701 |
| deliberative | 0.84 | 0.87 | 0.85 | 667 |
|
zklee98/segformer-b1-solarModuleAnomaly-v0.1 | zklee98 | 2023-11-09T13:13:03Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"segformer",
"vision",
"image-segmentation",
"generated_from_trainer",
"license:other",
"endpoints_compatible",
"region:us"
]
| image-segmentation | 2023-04-14T08:24:58Z | ---
license: other
tags:
- vision
- image-segmentation
- generated_from_trainer
model-index:
- name: segformer-b1-solarModuleAnomaly-v0.1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-b1-solarModuleAnomaly-v0.1
This model is a fine-tuned version of [nvidia/mit-b1](https://huggingface.co/nvidia/mit-b1) on the zklee98/solarModuleAnomaly dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1547
- Mean Iou: 0.3822
- Mean Accuracy: 0.7643
- Overall Accuracy: 0.7643
- Accuracy Unlabelled: nan
- Accuracy Anomaly: 0.7643
- Iou Unlabelled: 0.0
- Iou Anomaly: 0.7643
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuracy Unlabelled | Accuracy Anomaly | Iou Unlabelled | Iou Anomaly |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:-------------------:|:----------------:|:--------------:|:-----------:|
| 0.4699 | 0.4 | 20 | 0.6337 | 0.4581 | 0.9162 | 0.9162 | nan | 0.9162 | 0.0 | 0.9162 |
| 0.3129 | 0.8 | 40 | 0.4636 | 0.3704 | 0.7407 | 0.7407 | nan | 0.7407 | 0.0 | 0.7407 |
| 0.2732 | 1.2 | 60 | 0.3164 | 0.3867 | 0.7734 | 0.7734 | nan | 0.7734 | 0.0 | 0.7734 |
| 0.2653 | 1.6 | 80 | 0.3769 | 0.4090 | 0.8180 | 0.8180 | nan | 0.8180 | 0.0 | 0.8180 |
| 0.2232 | 2.0 | 100 | 0.2976 | 0.2479 | 0.4958 | 0.4958 | nan | 0.4958 | 0.0 | 0.4958 |
| 0.5305 | 2.4 | 120 | 0.3151 | 0.3807 | 0.7613 | 0.7613 | nan | 0.7613 | 0.0 | 0.7613 |
| 0.2423 | 2.8 | 140 | 0.3189 | 0.4152 | 0.8305 | 0.8305 | nan | 0.8305 | 0.0 | 0.8305 |
| 0.3341 | 3.2 | 160 | 0.2384 | 0.3861 | 0.7723 | 0.7723 | nan | 0.7723 | 0.0 | 0.7723 |
| 0.2146 | 3.6 | 180 | 0.3200 | 0.4621 | 0.9243 | 0.9243 | nan | 0.9243 | 0.0 | 0.9243 |
| 0.1866 | 4.0 | 200 | 0.2510 | 0.3646 | 0.7291 | 0.7291 | nan | 0.7291 | 0.0 | 0.7291 |
| 0.2861 | 4.4 | 220 | 0.2736 | 0.4202 | 0.8404 | 0.8404 | nan | 0.8404 | 0.0 | 0.8404 |
| 0.2048 | 4.8 | 240 | 0.2410 | 0.3912 | 0.7823 | 0.7823 | nan | 0.7823 | 0.0 | 0.7823 |
| 0.1604 | 5.2 | 260 | 0.2233 | 0.3672 | 0.7344 | 0.7344 | nan | 0.7344 | 0.0 | 0.7344 |
| 0.2756 | 5.6 | 280 | 0.2705 | 0.4494 | 0.8987 | 0.8987 | nan | 0.8987 | 0.0 | 0.8987 |
| 0.1859 | 6.0 | 300 | 0.2211 | 0.4045 | 0.8089 | 0.8089 | nan | 0.8089 | 0.0 | 0.8089 |
| 0.1306 | 6.4 | 320 | 0.2140 | 0.3763 | 0.7525 | 0.7525 | nan | 0.7525 | 0.0 | 0.7525 |
| 0.5508 | 6.8 | 340 | 0.2231 | 0.4185 | 0.8371 | 0.8371 | nan | 0.8371 | 0.0 | 0.8371 |
| 0.1446 | 7.2 | 360 | 0.2139 | 0.3666 | 0.7332 | 0.7332 | nan | 0.7332 | 0.0 | 0.7332 |
| 0.3275 | 7.6 | 380 | 0.2470 | 0.3964 | 0.7928 | 0.7928 | nan | 0.7928 | 0.0 | 0.7928 |
| 0.164 | 8.0 | 400 | 0.2017 | 0.3910 | 0.7819 | 0.7819 | nan | 0.7819 | 0.0 | 0.7819 |
| 0.1864 | 8.4 | 420 | 0.2307 | 0.4408 | 0.8816 | 0.8816 | nan | 0.8816 | 0.0 | 0.8816 |
| 0.1578 | 8.8 | 440 | 0.1869 | 0.3707 | 0.7414 | 0.7414 | nan | 0.7414 | 0.0 | 0.7414 |
| 0.1201 | 9.2 | 460 | 0.2115 | 0.3834 | 0.7667 | 0.7667 | nan | 0.7667 | 0.0 | 0.7667 |
| 0.1783 | 9.6 | 480 | 0.2009 | 0.3747 | 0.7495 | 0.7495 | nan | 0.7495 | 0.0 | 0.7495 |
| 0.1232 | 10.0 | 500 | 0.1797 | 0.3865 | 0.7729 | 0.7729 | nan | 0.7729 | 0.0 | 0.7729 |
| 0.2572 | 10.4 | 520 | 0.1983 | 0.4057 | 0.8115 | 0.8115 | nan | 0.8115 | 0.0 | 0.8115 |
| 0.1209 | 10.8 | 540 | 0.1607 | 0.4274 | 0.8547 | 0.8547 | nan | 0.8547 | 0.0 | 0.8547 |
| 0.1234 | 11.2 | 560 | 0.2260 | 0.4066 | 0.8133 | 0.8133 | nan | 0.8133 | 0.0 | 0.8133 |
| 0.145 | 11.6 | 580 | 0.1963 | 0.3939 | 0.7878 | 0.7878 | nan | 0.7878 | 0.0 | 0.7878 |
| 0.0665 | 12.0 | 600 | 0.1912 | 0.3873 | 0.7747 | 0.7747 | nan | 0.7747 | 0.0 | 0.7747 |
| 0.0826 | 12.4 | 620 | 0.2095 | 0.4186 | 0.8373 | 0.8373 | nan | 0.8373 | 0.0 | 0.8373 |
| 0.1212 | 12.8 | 640 | 0.1732 | 0.4059 | 0.8118 | 0.8118 | nan | 0.8118 | 0.0 | 0.8118 |
| 0.142 | 13.2 | 660 | 0.2086 | 0.4007 | 0.8013 | 0.8013 | nan | 0.8013 | 0.0 | 0.8013 |
| 0.0899 | 13.6 | 680 | 0.1838 | 0.3928 | 0.7856 | 0.7856 | nan | 0.7856 | 0.0 | 0.7856 |
| 0.1857 | 14.0 | 700 | 0.1638 | 0.4157 | 0.8315 | 0.8315 | nan | 0.8315 | 0.0 | 0.8315 |
| 0.0788 | 14.4 | 720 | 0.1736 | 0.4112 | 0.8223 | 0.8223 | nan | 0.8223 | 0.0 | 0.8223 |
| 0.2543 | 14.8 | 740 | 0.1547 | 0.3822 | 0.7643 | 0.7643 | nan | 0.7643 | 0.0 | 0.7643 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
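## Inference example
A minimal inference sketch, assuming the image processor configuration stored with this checkpoint and a hypothetical local image `module.jpg`:
```python
from transformers import AutoImageProcessor, AutoModelForSemanticSegmentation
from PIL import Image

model_name = "zklee98/segformer-b1-solarModuleAnomaly-v0.1"
image_processor = AutoImageProcessor.from_pretrained(model_name)
model = AutoModelForSemanticSegmentation.from_pretrained(model_name)

image = Image.open("module.jpg")  # hypothetical example image
inputs = image_processor(image, return_tensors="pt")
logits = model(**inputs).logits   # (batch, num_labels, height/4, width/4)
mask = logits.argmax(dim=1)[0]    # per-pixel class ids
```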
|
martyyz/vit-base-patch16-224-finetuned-flower | martyyz | 2023-11-09T13:11:42Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2023-11-09T13:00:32Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: vit-base-patch16-224-finetuned-flower
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-finetuned-flower
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 2.1.0+cu118
- Datasets 2.7.1
- Tokenizers 0.13.3
|
livingbox/livingroom-02 | livingbox | 2023-11-09T13:02:48Z | 0 | 1 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-11-09T12:51:32Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### Livingroom_02 Dreambooth model trained by livingbox with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
Tanver13/ppo-LunarLander-v2 | Tanver13 | 2023-11-09T13:01:26Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-11-09T11:10:11Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 24.08 +/- 106.49
name: mean_reward
verified: false
---
# **DQN** Agent playing **LunarLander-v2**
This is a trained model of a **DQN** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
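A minimal loading sketch with `huggingface_sb3` (the checkpoint filename is an assumption and should be checked against the repository files):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN

# download the zipped SB3 checkpoint from the Hub and restore the agent
checkpoint = load_from_hub(
    repo_id="Tanver13/ppo-LunarLander-v2",
    filename="dqn-LunarLander-v2.zip",  # assumed filename
)
model = DQN.load(checkpoint)
```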
|
danielcfox/my_awesome_model | danielcfox | 2023-11-09T13:01:09Z | 2 | 0 | transformers | [
"transformers",
"tf",
"bert",
"multiple-choice",
"generated_from_keras_callback",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| multiple-choice | 2023-11-09T10:47:35Z | ---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: danielcfox/my_awesome_model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# danielcfox/my_awesome_model
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3450
- Validation Loss: 0.5650
- Train Accuracy: 0.7961
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 9192, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.8194 | 0.5742 | 0.7804 | 0 |
| 0.3450 | 0.5650 | 0.7961 | 1 |
### Framework versions
- Transformers 4.35.0
- TensorFlow 2.14.0
- Datasets 2.14.6
- Tokenizers 0.14.1
|
ryantaw/bert-small-finetuned | ryantaw | 2023-11-09T12:55:19Z | 7 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:prajjwal1/bert-small",
"base_model:finetune:prajjwal1/bert-small",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-11-09T03:00:58Z | ---
license: mit
base_model: prajjwal1/bert-small
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert-small-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-small-finetuned
This model is a fine-tuned version of [prajjwal1/bert-small](https://huggingface.co/prajjwal1/bert-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0048
- Accuracy: 0.6038
- F1 Score: 0.6018
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 86
- eval_batch_size: 86
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Score |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|
| 1.3167 | 1.0 | 18 | 1.2414 | 0.4151 | 0.3857 |
| 1.1845 | 2.0 | 36 | 1.1500 | 0.5148 | 0.5148 |
| 1.0823 | 3.0 | 54 | 1.0743 | 0.5499 | 0.5543 |
| 0.995 | 4.0 | 72 | 1.0359 | 0.5553 | 0.5529 |
| 0.9242 | 5.0 | 90 | 1.0195 | 0.5849 | 0.5781 |
| 0.8742 | 6.0 | 108 | 1.0028 | 0.5741 | 0.5758 |
| 0.8237 | 7.0 | 126 | 1.0033 | 0.5930 | 0.5901 |
| 0.7893 | 8.0 | 144 | 0.9967 | 0.5930 | 0.5922 |
| 0.7332 | 9.0 | 162 | 1.0088 | 0.5957 | 0.5924 |
| 0.6997 | 10.0 | 180 | 1.0048 | 0.6038 | 0.6018 |
| 0.6836 | 11.0 | 198 | 1.0120 | 0.6011 | 0.5981 |
| 0.6571 | 12.0 | 216 | 1.0084 | 0.5849 | 0.5864 |
| 0.6253 | 13.0 | 234 | 1.0167 | 0.5903 | 0.5938 |
| 0.5902 | 14.0 | 252 | 1.0184 | 0.5930 | 0.5965 |
| 0.5766 | 15.0 | 270 | 1.0340 | 0.5930 | 0.5925 |
| 0.5591 | 16.0 | 288 | 1.0399 | 0.5930 | 0.5931 |
| 0.5353 | 17.0 | 306 | 1.0364 | 0.5930 | 0.5944 |
| 0.5205 | 18.0 | 324 | 1.0412 | 0.5876 | 0.5889 |
| 0.5197 | 19.0 | 342 | 1.0410 | 0.5849 | 0.5867 |
| 0.5222 | 20.0 | 360 | 1.0418 | 0.5984 | 0.5990 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
linearch/bert-finetuned-ner | linearch | 2023-11-09T12:54:59Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2023-11-08T11:06:26Z | ---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: validation
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9299373557533795
- name: Recall
type: recall
value: 0.9493436553349041
- name: F1
type: f1
value: 0.9395403064623584
- name: Accuracy
type: accuracy
value: 0.9863130629304763
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: nan
- Precision: 0.9299
- Recall: 0.9493
- F1: 0.9395
- Accuracy: 0.9863
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2268 | 1.0 | 878 | nan | 0.9016 | 0.9362 | 0.9186 | 0.9820 |
| 0.0462 | 2.0 | 1756 | nan | 0.9283 | 0.9482 | 0.9381 | 0.9860 |
| 0.0248 | 3.0 | 2634 | nan | 0.9299 | 0.9493 | 0.9395 | 0.9863 |
### Framework versions
- Transformers 4.33.0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
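## Usage example
A quick way to try the model is the token-classification pipeline; this sketch assumes the label mapping saved with the checkpoint:
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="linearch/bert-finetuned-ner",
    aggregation_strategy="simple",  # merge sub-word tokens into whole entities
)
print(ner("Hugging Face is based in New York City."))
```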
|
TinyPixel/ds-guanaco | TinyPixel | 2023-11-09T12:43:05Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:deepseek-ai/deepseek-coder-1.3b-base",
"base_model:adapter:deepseek-ai/deepseek-coder-1.3b-base",
"region:us"
]
| null | 2023-11-09T11:09:51Z | ---
library_name: peft
base_model: deepseek-ai/deepseek-coder-1.3b-base
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.7.0.dev0
|
Jimi11/my_ner_model | Jimi11 | 2023-11-09T12:41:06Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"token-classification",
"generated_from_trainer",
"dataset:wnut_17",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2023-11-09T12:39:47Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- wnut_17
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: my_ner_model
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: wnut_17
type: wnut_17
config: wnut_17
split: test
args: wnut_17
metrics:
- name: Precision
type: precision
value: 0.4851657940663176
- name: Recall
type: recall
value: 0.25764596848934196
- name: F1
type: f1
value: 0.3365617433414043
- name: Accuracy
type: accuracy
value: 0.9386943696293446
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_ner_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the wnut_17 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2828
- Precision: 0.4852
- Recall: 0.2576
- F1: 0.3366
- Accuracy: 0.9387
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 213 | 0.3005 | 0.3972 | 0.1594 | 0.2275 | 0.9347 |
| No log | 2.0 | 426 | 0.2828 | 0.4852 | 0.2576 | 0.3366 | 0.9387 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
SuperDan/PPO-LunarLander-v2 | SuperDan | 2023-11-09T12:30:54Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-11-09T12:30:32Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 256.48 +/- 12.86
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
Tanor/sr_BERTicovo_sr_set_ud | Tanor | 2023-11-09T12:28:44Z | 0 | 1 | spacy | [
"spacy",
"token-classification",
"sr",
"license:cc-by-sa-3.0",
"model-index",
"region:us"
]
| token-classification | 2023-11-09T12:23:03Z | ---
tags:
- spacy
- token-classification
language:
- sr
license: cc-by-sa-3.0
model-index:
- name: sr_BERTicovo_sr_set_ud
results:
- task:
name: TAG
type: token-classification
metrics:
- name: TAG (XPOS) Accuracy
type: accuracy
value: 0.9579719813
- task:
name: POS
type: token-classification
metrics:
- name: POS (UPOS) Accuracy
type: accuracy
value: 0.9859072715
- task:
name: MORPH
type: token-classification
metrics:
- name: Morph (UFeats) Accuracy
type: accuracy
value: 0.9613476212
- task:
name: LEMMA
type: token-classification
metrics:
- name: Lemma Accuracy
type: accuracy
value: 0.9392688925
- task:
name: UNLABELED_DEPENDENCIES
type: token-classification
metrics:
- name: Unlabeled Attachment Score (UAS)
type: f_score
value: 0.9365132684
- task:
name: LABELED_DEPENDENCIES
type: token-classification
metrics:
- name: Labeled Attachment Score (LAS)
type: f_score
value: 0.9056712408
- task:
name: SENTS
type: token-classification
metrics:
- name: Sentences F-Score
type: f_score
value: 0.9739776952
---
spaCy model for the Serbian language, based on Universal Dependencies and trained on the sr_set-ud corpus.
| Feature | Description |
| --- | --- |
| **Name** | `sr_BERTicovo_sr_set_ud` |
| **Version** | `1.0.0` |
| **spaCy** | `>=3.7.2,<3.8.0` |
| **Default Pipeline** | `transformer`, `morphologizer`, `tagger`, `trainable_lemmatizer`, `parser` |
| **Components** | `transformer`, `morphologizer`, `tagger`, `trainable_lemmatizer`, `parser` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | n/a |
| **License** | `CC BY-SA 3.0` |
| **Author** | [Petalinkar Saša]() |
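A minimal usage sketch, assuming the packaged pipeline has been installed so that `spacy.load` can resolve it by name:
```python
import spacy

# load the Serbian pipeline and inspect tags, lemmas and dependencies
nlp = spacy.load("sr_BERTicovo_sr_set_ud")
doc = nlp("Ovo je rečenica na srpskom jeziku.")
for token in doc:
    print(token.text, token.pos_, token.tag_, token.lemma_, token.dep_)
```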
### Label Scheme
<details>
<summary>View label scheme (1282 labels for 3 components)</summary>
| Component | Labels |
| --- | --- |
| **`morphologizer`** | `Case=Nom\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Case=Dat\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Case=Loc\|POS=ADP`, `Case=Loc\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Ind`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `POS=SCONJ`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Case=Acc\|POS=ADP`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=NOUN`, `POS=PUNCT`, `POS=CCONJ`, `Case=Nom\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Case=Loc\|Gender=Neut\|Gender[psor]=Masc,Neut\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Loc\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Loc\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Degree=Pos\|POS=ADV`, `Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Case=Nom\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `NumType=Card\|POS=NUM`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Case=Gen\|POS=ADP`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Mood=Cnd\|Number=Sing\|POS=AUX\|Person=3\|Tense=Past\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Case=Loc\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Loc\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Loc\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Foreign=Yes\|POS=X`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Case=Gen\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Gen\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Case=Loc\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Loc\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Case=Loc\|Gender=Neut\|Number=Sing\|POS=PROPN`, `NumType=Ord\|POS=ADJ`, `Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Case=Loc\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Case=Nom\|Gender=Fem\|NumType=Card\|Number=Plur\|POS=NUM`, `Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Polarity=Neg\|Tense=Pres\|VerbForm=Fin`, `Gender=Masc\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Int,Rel`, 
`Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `POS=PART\|Polarity=Neg`, `Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Ins\|POS=ADP`, `Case=Ins\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ\|Poss=Yes`, `Degree=Pos\|POS=ADV\|PronType=Dem`, `Case=Gen\|Definite=Def\|Degree=Sup\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Case=Acc\|Gender=Fem\|Gender[psor]=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Fut\|VerbForm=Fin`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Tot`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Tot`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Nom\|Gender=Masc\|Gender[psor]=Masc,Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Acc\|POS=PRON\|PronType=Prs\|Reflex=Yes`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Dem`, `Gender=Masc\|Number=Sing\|POS=AUX\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Animacy=Anim\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `POS=PART`, `Case=Nom\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Nom\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Int,Rel`, `Case=Loc\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Tot`, `Case=Loc\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Int,Rel`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Ins\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Int,Rel`, `Degree=Cmp\|POS=ADV`, `Case=Acc\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin`, `POS=VERB\|VerbForm=Inf`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Case=Loc\|Gender=Fem\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Dat\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=NOUN`, `POS=ADV\|Tense=Past\|VerbForm=Conv`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Gen\|Definite=Def\|Degree=Sup\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Nom\|Gender=Masc\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Gen\|Definite=Def\|Degree=Sup\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Case=Loc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, 
`Case=Ins\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Tot`, `Case=Ins\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Polarity=Neg\|Tense=Pres\|VerbForm=Fin`, `Gender=Neut\|Number=Sing\|POS=AUX\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|Poss=Yes`, `Case=Gen\|Gender=Neut\|Gender[psor]=Masc,Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Ins\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Animacy=Anim\|Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Gender=Fem\|Number=Sing\|POS=AUX\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Dat\|Gender=Fem\|Gender[psor]=Masc,Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Animacy=Anim\|Case=Acc\|Gender=Masc\|NumType=Card\|Number=Sing\|POS=NUM`, `Animacy=Anim\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin`, `POS=AUX\|VerbForm=Inf`, `Case=Loc\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Case=Nom\|Definite=Def\|Degree=Sup\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Ins\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Nom\|Gender=Masc\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Gender=Neut\|Gender[psor]=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Nom\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Ins\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Ins\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Degree=Pos\|POS=DET\|PronType=Ind`, `Animacy=Inan\|Case=Acc\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ\|Poss=Yes`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Degree=Pos\|POS=ADV\|PronType=Neg`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Nom\|Gender=Masc\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=PROPN`, `Case=Acc\|Gender=Masc\|Gender[psor]=Masc,Neut\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Degree=Cmp\|POS=DET`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Nom\|Gender=Fem\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Gen\|Definite=Def\|Degree=Sup\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Gen\|Gender=Masc\|Gender[psor]=Masc,Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Fut\|VerbForm=Fin`, `Case=Loc\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, 
`Case=Ins\|Gender=Neut\|POS=PRON\|PronType=Int,Rel`, `Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Ins\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Mood=Cnd\|Number=Plur\|POS=AUX\|Person=3\|Tense=Past\|VerbForm=Fin`, `Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ\|Poss=Yes`, `Case=Nom\|Gender=Fem\|Gender[psor]=Masc,Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Tot`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Loc\|Gender=Neut\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Loc\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Case=Nom\|Gender=Masc\|POS=PRON\|PronType=Neg`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Tot`, `Mood=Cnd\|Number=Plur\|POS=AUX\|Person=1\|Tense=Past\|VerbForm=Fin`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Acc\|Definite=Def\|Degree=Cmp\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Tot`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Degree=Pos\|POS=ADV\|PronType=Int,Rel`, `Case=Loc\|Gender=Masc\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Case=Dat\|Definite=Def\|Degree=Cmp\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Voc\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Case=Ins\|POS=PRON\|PronType=Prs\|Reflex=Yes`, `Case=Ins\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Loc\|Gender=Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Case=Loc\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `POS=ADJ`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Case=Acc\|Gender=Masc\|Gender[psor]=Fem\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Ins\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Case=Acc\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Animacy=Inan\|Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Degree=Pos\|POS=ADV\|PronType=Tot`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Loc\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Gen\|Gender=Fem\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Gen\|Gender=Masc\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Tot`, 
`Case=Nom\|Gender=Masc\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Tot`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Acc\|Gender=Fem\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Gender[psor]=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Gender=Neut\|POS=PRON\|PronType=Int,Rel`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Loc\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Int,Rel`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Ins\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Case=Gen\|Gender=Masc\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Case=Dat\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Dat\|Gender=Fem\|Number=Plur\|POS=NOUN`, `POS=X`, `Case=Loc\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Animacy=Inan\|Case=Acc\|Definite=Def\|Degree=Cmp\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Nom\|Gender=Neut\|POS=PRON\|PronType=Ind`, `Case=Loc\|Gender=Neut\|POS=PRON\|PronType=Int,Rel`, `Mood=Cnd\|Number=Sing\|POS=AUX\|Person=1\|Tense=Past\|VerbForm=Fin`, `Case=Loc\|Gender=Masc\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Polarity=Neg\|Tense=Pres\|VerbForm=Fin`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Nom\|Definite=Def\|Degree=Cmp\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Gender=Fem\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Gender=Masc\|Number=Plur\|POS=AUX\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Nom\|Gender=Neut\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Case=Loc\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Nom\|Definite=Def\|Degree=Cmp\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Dem`, `Animacy=Inan\|Case=Acc\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Loc\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Loc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Dat\|Gender=Masc\|POS=PRON\|PronType=Neg`, `Animacy=Anim\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Ins\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Case=Acc\|Gender=Neut\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Dat\|POS=ADP`, `Degree=Sup\|POS=ADV`, `Case=Ins\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Int,Rel`, `Case=Gen\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Degree=Pos\|POS=DET`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Dem`, 
`Case=Gen\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Int,Rel`, `Case=Loc\|Gender=Fem\|Gender[psor]=Masc,Neut\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Gender=Neut\|NumType=Card\|Number=Plur\|POS=NUM`, `Degree=Pos\|POS=ADV\|PronType=Ind`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Tot`, `Case=Loc\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Tot`, `Case=Loc\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Gen\|Definite=Def\|Degree=Sup\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Loc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Ind`, `POS=PROPN`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=PROPN`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Dem`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Fut\|VerbForm=Fin`, `Case=Acc\|Definite=Def\|Degree=Cmp\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Gen\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ\|Poss=Yes`, `POS=ADV\|Tense=Pres\|VerbForm=Conv`, `Case=Acc\|Gender=Fem\|NumType=Card\|Number=Plur\|POS=NUM`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=PROPN`, `Case=Nom\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Acc\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=PROPN`, `Case=Loc\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Case=Nom\|Definite=Def\|Degree=Cmp\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Nom\|Gender=Fem\|NumType=Mult\|POS=NUM`, `Case=Ins\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Int,Rel`, `Case=Loc\|Gender=Neut\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Int,Rel`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Gender=Masc\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Ins\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Ins\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Tot`, `Case=Ins\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Animacy=Anim\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Definite=Def\|Degree=Cmp\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Dat\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Case=Loc\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Case=Gen\|Gender=Fem\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Nom\|Definite=Def\|Degree=Cmp\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Fut\|VerbForm=Fin`, `Case=Nom\|Gender=Masc\|POS=PRON\|PronType=Int,Rel`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Tot`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Fut\|VerbForm=Fin`, 
`Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Fut\|VerbForm=Fin`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ\|Poss=Yes`, `Case=Acc\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Loc\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Dat\|Gender=Fem\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Ins\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Ins\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Case=Gen\|Definite=Def\|Degree=Cmp\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Gen\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Gender=Fem\|Number=Plur\|POS=AUX\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Ins\|Gender=Masc\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Ins\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Loc\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Ins\|Gender=Masc\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Loc\|Definite=Def\|Degree=Cmp\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Gen\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ\|Poss=Yes`, `Case=Acc\|Definite=Def\|Degree=Cmp\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Ins\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Tot`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Ins\|Gender=Fem\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Tot`, `Case=Gen\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ\|Poss=Yes`, `Case=Nom\|Definite=Def\|Degree=Sup\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Mood=Imp\|Number=Plur\|POS=VERB\|Person=2\|VerbForm=Fin`, `Case=Ins\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Case=Gen\|Gender=Neut\|Gender[psor]=Masc,Neut\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Ins\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Acc\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Int,Rel`, `Gender=Neut\|Number=Plur\|POS=AUX\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Ins\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Loc\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Ins\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Loc\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=PROPN`, `Case=Dat\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Dat\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Nom\|Definite=Def\|Degree=Sup\|Gender=Neut\|Number=Sing\|POS=ADJ`, 
`Case=Nom\|Gender=Neut\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Ind`, `POS=NOUN`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Int,Rel`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Acc\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Int,Rel`, `Gender=Neut\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Gen\|Gender=Fem\|Gender[psor]=Fem\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Loc\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Gender=Fem\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `NumType=Mult\|POS=NUM`, `Case=Nom\|Gender=Masc\|Gender[psor]=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=PROPN`, `Case=Acc\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Masc\|NumType=Card\|Number=Plur\|POS=NUM`, `Case=Loc\|Gender=Masc\|Gender[psor]=Masc,Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Loc\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Acc\|Gender=Neut\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Ins\|Gender=Masc\|Number=Plur\|POS=PROPN`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Nom\|Gender=Neut\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=PROPN`, `Animacy=Inan\|Case=Acc\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Loc\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Acc\|Gender=Masc\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Definite=Def\|Degree=Sup\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=PROPN`, `Case=Dat\|Gender=Fem\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Gen\|Definite=Def\|Degree=Cmp\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Fem\|Gender[psor]=Masc,Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Gender=Neut\|POS=PRON\|PronType=Neg`, `Case=Dat\|Gender=Masc\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Gender=Fem\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Dat\|Definite=Def\|Degree=Cmp\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Gender=Masc\|Gender[psor]=Masc,Neut\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Gender=Fem\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Gender=Fem\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Loc\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Int,Rel`, 
`Case=Dat\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ\|Poss=Yes`, `Case=Gen\|Gender=Fem\|Gender[psor]=Masc,Neut\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Animacy=Anim\|Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Gender=Fem\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Dat\|Gender=Masc\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Loc\|Gender=Fem\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Acc\|Gender=Neut\|POS=PRON\|PronType=Int,Rel`, `Case=Dat\|Gender=Masc\|POS=PRON\|PronType=Int,Rel`, `Case=Nom\|Gender=Fem\|Gender[psor]=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Ins\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Int,Rel`, `Case=Acc\|Gender=Neut\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Gender=Masc\|Gender[psor]=Fem\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=PROPN`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Tot`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=PROPN`, `Case=Loc\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Nom\|Gender=Neut\|Number=Plur\|POS=PROPN`, `Case=Loc\|Gender=Fem\|Gender[psor]=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Tot`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Acc\|Gender=Fem\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Polarity=Neg\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Gen\|Gender=Masc\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Gender=Masc\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Gender=Masc\|Gender[psor]=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Case=Gen\|Gender=Neut\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Loc\|Definite=Def\|Degree=Cmp\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Gen\|Gender=Masc\|Gender[psor]=Masc,Neut\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Tot`, `Degree=Pos\|POS=DET\|PronType=Int,Rel`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Int,Rel`, `Case=Dat\|Gender=Fem\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Gen\|Gender=Neut\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Acc\|Gender=Neut\|Gender[psor]=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Ins\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Degree=Sup\|POS=DET`, `Case=Loc\|Gender=Masc\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Acc\|Gender=Fem\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, 
`Animacy=Inan\|Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Loc\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Animacy=Inan\|Case=Acc\|Definite=Def\|Degree=Sup\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Animacy=Anim\|Case=Acc\|Definite=Def\|Degree=Sup\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Ins\|Gender=Fem\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Int,Rel`, `Case=Loc\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Int,Rel`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Tot`, `Animacy=Anim\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Tot`, `Case=Acc\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Tot`, `Case=Nom\|Gender=Masc\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Ins\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Loc\|POS=PRON\|PronType=Prs\|Reflex=Yes`, `Case=Loc\|Definite=Def\|Degree=Sup\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Ins\|Definite=Def\|Degree=Cmp\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Loc\|Definite=Def\|Degree=Cmp\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Ind`, `Case=Loc\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Nom\|Gender=Neut\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Ins\|Gender=Neut\|Number=Sing\|POS=PROPN`, `Case=Loc\|Definite=Def\|Degree=Sup\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Loc\|Gender=Masc\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Nom\|Gender=Masc\|POS=PRON\|PronType=Tot`, `Case=Nom\|Gender=Masc\|NumType=Card\|Number=Plur\|POS=NUM`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Loc\|Gender=Neut\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Gender=Fem\|NumType=Mult\|POS=NUM`, `Case=Nom\|Definite=Def\|Degree=Sup\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Loc\|Gender=Fem\|NumType=Mult\|POS=NUM`, `Case=Gen\|Gender=Fem\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Animacy=Inan\|Case=Acc\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|Poss=Yes`, `Case=Ins\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Nom\|Gender=Masc\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Case=Nom\|Gender=Neut\|POS=PRON\|PronType=Neg`, `Case=Dat\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Tot`, `Case=Loc\|Gender=Fem\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Animacy=Anim\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Neg`, `Case=Nom\|Gender=Fem\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Ins\|Gender=Masc\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Dat\|Gender=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Ins\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Loc\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Loc\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Loc\|Definite=Def\|Degree=Cmp\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Nom\|Definite=Def\|Degree=Sup\|Gender=Neut\|Number=Plur\|POS=ADJ`, 
`Case=Dat\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Gen\|Definite=Def\|Degree=Cmp\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Acc\|Definite=Def\|Degree=Sup\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Acc\|Gender=Neut\|POS=PRON\|PronType=Ind`, `Case=Dat\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ\|Poss=Yes`, `Case=Gen\|Definite=Def\|Degree=Sup\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Masc\|POS=PRON\|PronType=Int,Rel`, `Case=Loc\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Ins\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Ins\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Loc\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Tot`, `Case=Ins\|Definite=Def\|Degree=Sup\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Dat\|Definite=Def\|Degree=Sup\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin`, `Case=Dat\|Definite=Def\|Degree=Cmp\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Acc\|Definite=Def\|Degree=Sup\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Gen\|Gender=Fem\|NumType=Card\|Number=Plur\|POS=NUM`, `Case=Loc\|Gender=Fem\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Gender=Masc\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Ins\|Gender=Fem\|NumType=Mult\|POS=NUM`, `Case=Nom\|Gender=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Gender=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Gender=Masc\|POS=PRON\|PronType=Ind`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Case=Loc\|Gender=Masc\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ\|Poss=Yes`, `Case=Loc\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Loc\|Gender=Masc\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ\|Poss=Yes`, `Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Ins\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Case=Dat\|POS=PRON\|PronType=Prs\|Reflex=Yes`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Ins\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Tot`, `Case=Ins\|Definite=Def\|Degree=Sup\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Ins\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Acc\|Gender=Neut\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Loc\|Gender=Fem\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Loc\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|Poss=Yes`, `Case=Dat\|Gender=Masc\|Gender[psor]=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ\|Poss=Yes`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Int,Rel`, `Case=Ins\|Definite=Def\|Degree=Sup\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Mood=Imp\|Number=Plur\|POS=VERB\|Person=1\|VerbForm=Fin`, `Case=Nom\|Gender=Neut\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, 
`Case=Loc\|Gender=Fem\|Gender[psor]=Masc,Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Gender=Fem\|Gender[psor]=Masc,Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Ins\|Gender=Fem\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Dat\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Dat\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Int,Rel`, `Case=Gen\|Gender=Neut\|POS=PRON\|PronType=Int,Rel`, `Case=Dat\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Gen\|Definite=Def\|Degree=Cmp\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Ins\|Gender=Fem\|Gender[psor]=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Loc\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ\|Poss=Yes`, `Case=Dat\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Ins\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Ins\|Definite=Def\|Degree=Cmp\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Loc\|Gender=Masc\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Tot`, `Case=Nom\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Tot`, `Case=Loc\|Gender=Fem\|Gender[psor]=Fem\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Ins\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Dat\|Gender=Masc\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Gender=Neut\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Loc\|Gender=Masc\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Gender=Fem\|Gender[psor]=Fem\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Ins\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|Poss=Yes`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Ins\|Definite=Def\|Degree=Cmp\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Acc\|Definite=Def\|Degree=Cmp\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Nom\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Int,Rel`, `Case=Dat\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Gender=Masc\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Case=Loc\|Gender=Neut\|POS=PRON\|PronType=Ind`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Gen\|Definite=Ind\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Neg`, `Case=Ins\|NumType=Card\|Number=Plur\|POS=NUM`, `Case=Nom\|Definite=Def\|Degree=Cmp\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Dat\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|Poss=Yes`, `Case=Ins\|Gender=Fem\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Neg`, `Case=Nom\|Definite=Def\|Degree=Cmp\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Dat\|Gender=Fem\|Number=Plur\|POS=PROPN`, 
`Case=Ins\|Definite=Def\|Degree=Sup\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Loc\|Gender=Neut\|Gender[psor]=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Gender=Fem\|Gender[psor]=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Neg`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Nom\|Gender=Fem\|Gender[psor]=Masc,Neut\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Acc\|Gender=Masc\|POS=DET\|PronType=Ind`, `Case=Nom\|Gender=Masc\|POS=DET\|PronType=Ind`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Animacy=Anim\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Gen\|Gender=Neut\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Loc\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Fut\|VerbForm=Fin`, `Case=Acc\|Gender=Masc\|NumType=Mult\|POS=NUM`, `Case=Loc\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ\|Poss=Yes`, `Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ\|Poss=Yes`, `Case=Gen\|Gender=Neut\|Gender[psor]=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Ins\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Ind`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Gender[psor]=Masc,Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Ins\|Definite=Def\|Degree=Cmp\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Neg`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Int,Rel`, `Case=Nom\|Definite=Def\|Degree=Sup\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Acc\|Gender=Neut\|Gender[psor]=Fem\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Acc\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Acc\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Degree=Pos\|POS=DET\|PronType=Dem`, `Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=ADJ\|Poss=Yes`, `Case=Loc\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Gen\|Gender=Fem\|NumType=Mult\|POS=NUM`, `Animacy=Anim\|Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|Poss=Yes`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Neg`, `Case=Loc\|Gender=Neut\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Loc\|Gender=Fem\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Loc\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Tot`, `Animacy=Anim\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Case=Acc\|Definite=Def\|Degree=Sup\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Loc\|Definite=Def\|Degree=Cmp\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Ins\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Ins\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|Poss=Yes`, `POS=SYM`, 
`Case=Loc\|Gender=Neut\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Int,Rel`, `POS=INTJ`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Int,Rel`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Neg`, `Case=Nom\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Nom\|Gender=Neut\|Gender[psor]=Masc,Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Ins\|Definite=Def\|Degree=Cmp\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Int,Rel`, `Case=Loc\|Gender=Neut\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Loc\|Gender=Fem\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Gender=Masc\|NumType=Mult\|POS=NUM`, `Case=Acc\|Gender=Masc\|POS=PRON\|PronType=Neg`, `Case=Gen\|POS=PRON\|PronType=Prs\|Reflex=Yes`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Nom\|Gender=Fem\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Loc\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Tot`, `Case=Gen\|Gender=Masc\|POS=PRON\|PronType=Int,Rel`, `Animacy=Anim\|Case=Acc\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Tot`, `Case=Ins\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Tot`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Fut\|VerbForm=Fin`, `Case=Ins\|Gender=Fem\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Ind`, `Case=Ins\|Gender=Fem\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Int,Rel`, `Case=Ins\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Loc\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ\|Poss=Yes`, `Case=Loc\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Ins\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ\|Poss=Yes`, `Case=Loc\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Acc\|Gender=Neut\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Loc\|Gender=Neut\|Number=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Loc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Ins\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Ins\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Tot`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Acc\|Gender=Fem\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Ind`, `Case=Ins\|Gender=Masc\|Gender[psor]=Masc,Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Neg`, `POS=ADV\|VerbForm=Part`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Gender=Fem\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs` |
| **`tagger`** | `Agcfpay`, `Agcfpgy`, `Agcfpiy`, `Agcfply`, `Agcfpny`, `Agcfsay`, `Agcfsdy`, `Agcfsgy`, `Agcfsiy`, `Agcfsly`, `Agcfsny`, `Agcmpay`, `Agcmpgy`, `Agcmpiy`, `Agcmpny`, `Agcmsayn`, `Agcmsdy`, `Agcmsgy`, `Agcmsiy`, `Agcmsly`, `Agcmsny`, `Agcnply`, `Agcnpny`, `Agcnsay`, `Agcnsdy`, `Agcnsgy`, `Agcnsiy`, `Agcnsly`, `Agcnsny`, `Agpfpay`, `Agpfpdy`, `Agpfpgy`, `Agpfpiy`, `Agpfply`, `Agpfpny`, `Agpfsay`, `Agpfsdy`, `Agpfsgy`, `Agpfsiy`, `Agpfsly`, `Agpfsny`, `Agpmpay`, `Agpmpdy`, `Agpmpgy`, `Agpmpiy`, `Agpmply`, `Agpmpny`, `Agpmsann`, `Agpmsayn`, `Agpmsayy`, `Agpmsdy`, `Agpmsgn`, `Agpmsgy`, `Agpmsiy`, `Agpmsly`, `Agpmsnn`, `Agpmsny`, `Agpnpay`, `Agpnpdy`, `Agpnpgy`, `Agpnpiy`, `Agpnply`, `Agpnpny`, `Agpnsay`, `Agpnsdy`, `Agpnsgn`, `Agpnsgy`, `Agpnsiy`, `Agpnsly`, `Agpnsny`, `Agsfpay`, `Agsfpgy`, `Agsfpiy`, `Agsfpny`, `Agsfsay`, `Agsfsgy`, `Agsfsiy`, `Agsfsny`, `Agsmpay`, `Agsmpdy`, `Agsmpgy`, `Agsmply`, `Agsmpny`, `Agsmsayn`, `Agsmsayy`, `Agsmsgy`, `Agsmsiy`, `Agsmsly`, `Agsmsny`, `Agsnpay`, `Agsnpgy`, `Agsnpny`, `Agsnsgy`, `Agsnsiy`, `Agsnsny`, `Appfpay`, `Appfpdy`, `Appfpgy`, `Appfpiy`, `Appfply`, `Appfpny`, `Appfsay`, `Appfsgy`, `Appfsiy`, `Appfsly`, `Appfsny`, `Appmpay`, `Appmpgy`, `Appmpiy`, `Appmply`, `Appmpny`, `Appmsann`, `Appmsayn`, `Appmsayy`, `Appmsdy`, `Appmsgy`, `Appmsiy`, `Appmsly`, `Appmsnn`, `Appmsny`, `Appnpay`, `Appnpgy`, `Appnpiy`, `Appnpny`, `Appnsay`, `Appnsgy`, `Appnsiy`, `Appnsly`, `Appnsny`, `Aspfpay`, `Aspfpgy`, `Aspfply`, `Aspfpny`, `Aspfsay`, `Aspfsdy`, `Aspfsgy`, `Aspfsiy`, `Aspfsly`, `Aspfsny`, `Aspmpay`, `Aspmpgy`, `Aspmpny`, `Aspmsann`, `Aspmsayy`, `Aspmsdy`, `Aspmsgy`, `Aspmsiy`, `Aspmsly`, `Aspmsnn`, `Aspnpay`, `Aspnsay`, `Aspnsgy`, `Aspnsly`, `Aspnsny`, `Cc`, `Cs`, `I`, `Mdc`, `Mdm`, `Mdo`, `Mds`, `Mlc`, `Mlc--i`, `Mlcf-a`, `Mlcf-g`, `Mlcf-n`, `Mlcfpa`, `Mlcfpg`, `Mlcfsa`, `Mlcfsd`, `Mlcfsg`, `Mlcfsi`, `Mlcfsl`, `Mlcfsn`, `Mlcm-a`, `Mlcm-n`, `Mlcmpn`, `Mlcmsan`, `Mlcmsay`, `Mlcmsd`, `Mlcmsg`, `Mlcmsi`, `Mlcmsl`, `Mlcmsn`, `Mlcn-n`, `Mlcnsa`, `Mlcnsl`, `Mlcnsn`, `Mlofpa`, `Mlofpd`, `Mlofpg`, `Mlofpi`, `Mlofpl`, `Mlofpn`, `Mlofsa`, `Mlofsd`, `Mlofsg`, `Mlofsi`, `Mlofsl`, `Mlofsn`, `Mlompa`, `Mlompd`, `Mlompg`, `Mlompi`, `Mlompl`, `Mlompn`, `Mlomsan`, `Mlomsay`, `Mlomsd`, `Mlomsg`, `Mlomsi`, `Mlomsl`, `Mlomsn`, `Mlonpa`, `Mlonpg`, `Mlonpl`, `Mlonsa`, `Mlonsg`, `Mlonsi`, `Mlonsl`, `Mlonsn`, `Mls`, `Mlsf-a`, `Mlsf-g`, `Mlsf-i`, `Mlsf-l`, `Mlsf-n`, `Mlsm-a`, `Mlsm-n`, `Ncfpa`, `Ncfpd`, `Ncfpg`, `Ncfpi`, `Ncfpl`, `Ncfpn`, `Ncfsa`, `Ncfsd`, `Ncfsg`, `Ncfsi`, `Ncfsl`, `Ncfsn`, `Ncmpa`, `Ncmpd`, `Ncmpg`, `Ncmpi`, `Ncmpl`, `Ncmpn`, `Ncmsan`, `Ncmsay`, `Ncmsd`, `Ncmsg`, `Ncmsi`, `Ncmsl`, `Ncmsn`, `Ncmsv`, `Ncnpa`, `Ncnpd`, `Ncnpg`, `Ncnpi`, `Ncnpl`, `Ncnpn`, `Ncnsa`, `Ncnsd`, `Ncnsg`, `Ncnsi`, `Ncnsl`, `Ncnsn`, `Npfpd`, `Npfpg`, `Npfpn`, `Npfsa`, `Npfsd`, `Npfsg`, `Npfsi`, `Npfsl`, `Npfsn`, `Npmpa`, `Npmpd`, `Npmpg`, `Npmpi`, `Npmpn`, `Npmsan`, `Npmsay`, `Npmsd`, `Npmsg`, `Npmsi`, `Npmsl`, `Npmsn`, `Npnpn`, `Npnsa`, `Npnsd`, `Npnsg`, `Npnsi`, `Npnsl`, `Npnsn`, `Pd-fpa`, `Pd-fpd`, `Pd-fpg`, `Pd-fpi`, `Pd-fpl`, `Pd-fpn`, `Pd-fsa`, `Pd-fsd`, `Pd-fsg`, `Pd-fsi`, `Pd-fsl`, `Pd-fsn`, `Pd-mpa`, `Pd-mpd`, `Pd-mpg`, `Pd-mpl`, `Pd-mpn`, `Pd-msan`, `Pd-msay`, `Pd-msd`, `Pd-msg`, `Pd-msi`, `Pd-msl`, `Pd-msn`, `Pd-npa`, `Pd-npd`, `Pd-npg`, `Pd-npl`, `Pd-npn`, `Pd-nsa`, `Pd-nsd`, `Pd-nsg`, `Pd-nsi`, `Pd-nsl`, `Pd-nsn`, `Pi--sn`, `Pi-fpa`, `Pi-fpd`, `Pi-fpg`, `Pi-fpi`, `Pi-fpl`, `Pi-fpn`, `Pi-fsa`, `Pi-fsd`, `Pi-fsg`, `Pi-fsi`, `Pi-fsl`, `Pi-fsn`, `Pi-mpa`, `Pi-mpd`, `Pi-mpg`, 
`Pi-mpi`, `Pi-mpl`, `Pi-mpn`, `Pi-msan`, `Pi-msay`, `Pi-msd`, `Pi-msg`, `Pi-msi`, `Pi-msl`, `Pi-msn`, `Pi-npa`, `Pi-npd`, `Pi-npg`, `Pi-npi`, `Pi-npl`, `Pi-npn`, `Pi-nsa`, `Pi-nsg`, `Pi-nsi`, `Pi-nsl`, `Pi-nsn`, `Pi3m-a`, `Pi3m-d`, `Pi3m-g`, `Pi3m-n`, `Pi3n-a`, `Pi3n-g`, `Pi3n-i`, `Pi3n-l`, `Pi3n-n`, `Pp1-pa`, `Pp1-pd`, `Pp1-pg`, `Pp1-pi`, `Pp1-pl`, `Pp1-pn`, `Pp1-sa`, `Pp1-sd`, `Pp1-sn`, `Pp2-pa`, `Pp2-pd`, `Pp2-pl`, `Pp2-pn`, `Pp3-pa`, `Pp3-pd`, `Pp3-pg`, `Pp3-pi`, `Pp3-pl`, `Pp3fpn`, `Pp3fsa`, `Pp3fsd`, `Pp3fsg`, `Pp3fsi`, `Pp3fsl`, `Pp3fsn`, `Pp3mpn`, `Pp3msa`, `Pp3msd`, `Pp3msg`, `Pp3msi`, `Pp3msl`, `Pp3msn`, `Pp3npn`, `Pp3nsa`, `Pp3nsn`, `Pq-fpa`, `Pq-fsl`, `Pq-fsn`, `Pq-mpn`, `Pq-msn`, `Pq-nsn`, `Pq3n-n`, `Ps1fpa`, `Ps1fpd`, `Ps1fpg`, `Ps1fpn`, `Ps1fsa`, `Ps1fsd`, `Ps1fsg`, `Ps1fsl`, `Ps1fsn`, `Ps1mpa`, `Ps1mpd`, `Ps1mpg`, `Ps1mpl`, `Ps1mpn`, `Ps1msan`, `Ps1msd`, `Ps1msg`, `Ps1msl`, `Ps1msn`, `Ps1nsa`, `Ps1nsg`, `Ps1nsl`, `Ps1nsn`, `Ps2fpl`, `Ps2fpn`, `Ps2msan`, `Ps2nsl`, `Ps2nsn`, `Ps3fpa`, `Ps3fpg`, `Ps3fpl`, `Ps3fpn`, `Ps3fsa`, `Ps3fsd`, `Ps3fsg`, `Ps3fsi`, `Ps3fsl`, `Ps3fsn`, `Ps3mpa`, `Ps3mpd`, `Ps3mpg`, `Ps3mpl`, `Ps3mpn`, `Ps3msan`, `Ps3msd`, `Ps3msg`, `Ps3msi`, `Ps3msl`, `Ps3msn`, `Ps3npa`, `Ps3npg`, `Ps3npl`, `Ps3nsa`, `Ps3nsg`, `Ps3nsl`, `Ps3nsn`, `Px--sa`, `Px--sd`, `Px--sg`, `Px--si`, `Px--sl`, `Px-fpa`, `Px-fpg`, `Px-fpi`, `Px-fpl`, `Px-fsa`, `Px-fsd`, `Px-fsg`, `Px-fsi`, `Px-fsl`, `Px-mpa`, `Px-mpd`, `Px-mpg`, `Px-mpi`, `Px-mpl`, `Px-msan`, `Px-msay`, `Px-msd`, `Px-msg`, `Px-msi`, `Px-msl`, `Px-npa`, `Px-npg`, `Px-npl`, `Px-nsa`, `Px-nsg`, `Qo`, `Qq`, `Qz`, `Rgc`, `Rgp`, `Rgs`, `Rr`, `Sa`, `Sd`, `Sg`, `Si`, `Sl`, `Vaa1p`, `Vaa1s`, `Vaa3p`, `Vaa3s`, `Vaf1p`, `Vaf3p`, `Vaf3s`, `Van`, `Vap-pf`, `Vap-pm`, `Vap-pn`, `Vap-sf`, `Vap-sm`, `Vap-sn`, `Var1p`, `Var1s`, `Var2p`, `Var3p`, `Var3s`, `Vma3s`, `Vmf1p`, `Vmf1s`, `Vmf2p`, `Vmf3p`, `Vmf3s`, `Vmm1p`, `Vmm2p`, `Vmn`, `Vmp-pf`, `Vmp-pm`, `Vmp-pn`, `Vmp-sf`, `Vmp-sm`, `Vmp-sn`, `Vmr1p`, `Vmr1s`, `Vmr2p`, `Vmr3p`, `Vmr3s`, `X`, `Xf`, `Y`, `Z` |
| **`parser`** | `ROOT`, `acl`, `advcl`, `advmod`, `amod`, `appos`, `aux`, `case`, `cc`, `ccomp`, `conj`, `cop`, `csubj`, `dep`, `det`, `det:numgov`, `discourse`, `expl`, `fixed`, `flat`, `mark`, `nmod`, `nsubj`, `nummod`, `nummod:gov`, `obj`, `obl`, `parataxis`, `punct`, `xcomp` |
</details>
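A minimal sketch of how these tag, morphological, and dependency labels surface at runtime, assuming the pipeline is installed as a Python package; the package name and example text below are placeholders rather than values taken from this card:

```python
import spacy

# Placeholder pipeline name: replace with the package name from this card
nlp = spacy.load("xx_pipeline_name")
doc = nlp("Example text in the pipeline's language.")
for token in doc:
    # coarse POS, fine-grained tag, morphological features, dependency label
    print(token.text, token.pos_, token.tag_, token.morph, token.dep_)
```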
### Accuracy
| Type | Score |
| --- | --- |
| `POS_ACC` | 98.59 |
| `MORPH_ACC` | 96.13 |
| `TAG_ACC` | 95.80 |
| `LEMMA_ACC` | 93.93 |
| `DEP_UAS` | 93.65 |
| `DEP_LAS` | 90.57 |
| `SENTS_P` | 97.04 |
| `SENTS_R` | 97.76 |
| `SENTS_F` | 97.40 |
| `TRANSFORMER_LOSS` | 560364.46 |
| `MORPHOLOGIZER_LOSS` | 103638.68 |
| `TAGGER_LOSS` | 102948.12 |
| `TRAINABLE_LEMMATIZER_LOSS` | 91087.14 |
| `PARSER_LOSS` | 1420016.00 | |
LeKyks1/q-FrozenLake-v1-4x4-noSlippery | LeKyks1 | 2023-11-09T12:15:59Z | 0 | 0 | null | [
"FrozenLake-v1-4x4",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-11-09T12:15:56Z | ---
tags:
- FrozenLake-v1-4x4
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4
type: FrozenLake-v1-4x4
metrics:
- type: mean_reward
value: 0.47 +/- 0.50
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# `load_from_hub` is the pickle-downloading helper defined in the Deep RL course notebook
model = load_from_hub(repo_id="LeKyks1/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
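Once loaded, the Q-table can drive a greedy rollout. A minimal sketch, assuming the pickled dictionary stores the table under a `qtable` key (as in the course notebooks) and the classic four-value Gym step API:

```python
import numpy as np

qtable = model["qtable"]  # assumed key; inspect the pickled dict if it differs
state = env.reset()
done = False
episode_return = 0.0
while not done:
    action = int(np.argmax(qtable[state]))  # act greedily w.r.t. the learned Q-values
    state, reward, done, info = env.step(action)
    episode_return += reward
print(f"Episode return: {episode_return}")
```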
|
acrenn/ppo-LunarLander-v2 | acrenn | 2023-11-09T12:14:43Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-11-09T12:14:24Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 262.51 +/- 18.60
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repository's file list):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub; the filename is assumed, verify it in the repo files
checkpoint = load_from_hub(repo_id="acrenn/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
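The mean reward listed in the metadata can be sanity-checked with SB3's built-in evaluator. This is a sketch rather than the author's evaluation script; use `import gym` instead of `gymnasium` with older SB3 releases:

```python
import gymnasium as gym
from stable_baselines3.common.evaluation import evaluate_policy

eval_env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```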
|
sam-babayev/zzz | sam-babayev | 2023-11-09T12:08:20Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"feature-extraction",
"sentence-similarity",
"mteb",
"en",
"dataset:s2orc",
"dataset:flax-sentence-embeddings/stackexchange_title_body_jsonl",
"dataset:flax-sentence-embeddings/stackexchange_titlebody_best_voted_answer_jsonl",
"dataset:flax-sentence-embeddings/stackexchange_title_best_voted_answer_jsonl",
"dataset:flax-sentence-embeddings/stackexchange_titlebody_best_and_down_voted_answer_jsonl",
"dataset:sentence-transformers/reddit-title-body",
"dataset:msmarco",
"dataset:gooaq",
"dataset:yahoo_answers_topics",
"dataset:code_search_net",
"dataset:search_qa",
"dataset:eli5",
"dataset:snli",
"dataset:multi_nli",
"dataset:wikihow",
"dataset:natural_questions",
"dataset:trivia_qa",
"dataset:embedding-data/sentence-compression",
"dataset:embedding-data/flickr30k-captions",
"dataset:embedding-data/altlex",
"dataset:embedding-data/simple-wiki",
"dataset:embedding-data/QQP",
"dataset:embedding-data/SPECTER",
"dataset:embedding-data/PAQ_pairs",
"dataset:embedding-data/WikiAnswers",
"dataset:sentence-transformers/embedding-training-data",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"region:us"
]
| sentence-similarity | 2023-11-09T12:07:54Z | ---
license: apache-2.0
pipeline_tag: sentence-similarity
inference: false
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- mteb
language: en
datasets:
- s2orc
- flax-sentence-embeddings/stackexchange_title_body_jsonl
- flax-sentence-embeddings/stackexchange_titlebody_best_voted_answer_jsonl
- flax-sentence-embeddings/stackexchange_title_best_voted_answer_jsonl
- flax-sentence-embeddings/stackexchange_titlebody_best_and_down_voted_answer_jsonl
- sentence-transformers/reddit-title-body
- msmarco
- gooaq
- yahoo_answers_topics
- code_search_net
- search_qa
- eli5
- snli
- multi_nli
- wikihow
- natural_questions
- trivia_qa
- embedding-data/sentence-compression
- embedding-data/flickr30k-captions
- embedding-data/altlex
- embedding-data/simple-wiki
- embedding-data/QQP
- embedding-data/SPECTER
- embedding-data/PAQ_pairs
- embedding-data/WikiAnswers
- sentence-transformers/embedding-training-data
model-index:
- name: lodestone-base-4096-v1
results:
- task:
type: Classification
dataset:
type: mteb/amazon_counterfactual
name: MTEB AmazonCounterfactualClassification (en)
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 69.7313432835821
- type: ap
value: 31.618259511417733
- type: f1
value: 63.30313825394228
- task:
type: Classification
dataset:
type: mteb/amazon_polarity
name: MTEB AmazonPolarityClassification
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 86.89837499999999
- type: ap
value: 82.39500885672128
- type: f1
value: 86.87317947399657
- task:
type: Classification
dataset:
type: mteb/amazon_reviews_multi
name: MTEB AmazonReviewsClassification (en)
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 44.05
- type: f1
value: 42.67624383248947
- task:
type: Retrieval
dataset:
type: arguana
name: MTEB ArguAna
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 26.173999999999996
- type: map_at_10
value: 40.976
- type: map_at_100
value: 42.067
- type: map_at_1000
value: 42.075
- type: map_at_3
value: 35.917
- type: map_at_5
value: 38.656
- type: mrr_at_1
value: 26.814
- type: mrr_at_10
value: 41.252
- type: mrr_at_100
value: 42.337
- type: mrr_at_1000
value: 42.345
- type: mrr_at_3
value: 36.226
- type: mrr_at_5
value: 38.914
- type: ndcg_at_1
value: 26.173999999999996
- type: ndcg_at_10
value: 49.819
- type: ndcg_at_100
value: 54.403999999999996
- type: ndcg_at_1000
value: 54.59
- type: ndcg_at_3
value: 39.231
- type: ndcg_at_5
value: 44.189
- type: precision_at_1
value: 26.173999999999996
- type: precision_at_10
value: 7.838000000000001
- type: precision_at_100
value: 0.9820000000000001
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 16.287
- type: precision_at_5
value: 12.191
- type: recall_at_1
value: 26.173999999999996
- type: recall_at_10
value: 78.378
- type: recall_at_100
value: 98.222
- type: recall_at_1000
value: 99.644
- type: recall_at_3
value: 48.862
- type: recall_at_5
value: 60.953
- task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-p2p
name: MTEB ArxivClusteringP2P
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 42.31689035788179
- task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-s2s
name: MTEB ArxivClusteringS2S
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 31.280245136660984
- task:
type: Reranking
dataset:
type: mteb/askubuntudupquestions-reranking
name: MTEB AskUbuntuDupQuestions
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 58.79109720839415
- type: mrr
value: 71.79615705931495
- task:
type: STS
dataset:
type: mteb/biosses-sts
name: MTEB BIOSSES
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 76.44918756608115
- type: cos_sim_spearman
value: 70.86607256286257
- type: euclidean_pearson
value: 74.12154678100815
- type: euclidean_spearman
value: 70.86607256286257
- type: manhattan_pearson
value: 74.0078626964417
- type: manhattan_spearman
value: 70.68353828321327
- task:
type: Classification
dataset:
type: mteb/banking77
name: MTEB Banking77Classification
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 75.40584415584415
- type: f1
value: 74.29514617572676
- task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-p2p
name: MTEB BiorxivClusteringP2P
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 37.41860080664014
- task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-s2s
name: MTEB BiorxivClusteringS2S
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 29.319217023090705
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackAndroidRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 26.595000000000002
- type: map_at_10
value: 36.556
- type: map_at_100
value: 37.984
- type: map_at_1000
value: 38.134
- type: map_at_3
value: 33.417
- type: map_at_5
value: 35.160000000000004
- type: mrr_at_1
value: 32.761
- type: mrr_at_10
value: 41.799
- type: mrr_at_100
value: 42.526
- type: mrr_at_1000
value: 42.582
- type: mrr_at_3
value: 39.39
- type: mrr_at_5
value: 40.727000000000004
- type: ndcg_at_1
value: 32.761
- type: ndcg_at_10
value: 42.549
- type: ndcg_at_100
value: 47.915
- type: ndcg_at_1000
value: 50.475
- type: ndcg_at_3
value: 37.93
- type: ndcg_at_5
value: 39.939
- type: precision_at_1
value: 32.761
- type: precision_at_10
value: 8.312
- type: precision_at_100
value: 1.403
- type: precision_at_1000
value: 0.197
- type: precision_at_3
value: 18.741
- type: precision_at_5
value: 13.447999999999999
- type: recall_at_1
value: 26.595000000000002
- type: recall_at_10
value: 54.332
- type: recall_at_100
value: 76.936
- type: recall_at_1000
value: 93.914
- type: recall_at_3
value: 40.666000000000004
- type: recall_at_5
value: 46.513
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackEnglishRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 22.528000000000002
- type: map_at_10
value: 30.751
- type: map_at_100
value: 31.855
- type: map_at_1000
value: 31.972
- type: map_at_3
value: 28.465
- type: map_at_5
value: 29.738
- type: mrr_at_1
value: 28.662
- type: mrr_at_10
value: 35.912
- type: mrr_at_100
value: 36.726
- type: mrr_at_1000
value: 36.777
- type: mrr_at_3
value: 34.013
- type: mrr_at_5
value: 35.156
- type: ndcg_at_1
value: 28.662
- type: ndcg_at_10
value: 35.452
- type: ndcg_at_100
value: 40.1
- type: ndcg_at_1000
value: 42.323
- type: ndcg_at_3
value: 32.112
- type: ndcg_at_5
value: 33.638
- type: precision_at_1
value: 28.662
- type: precision_at_10
value: 6.688
- type: precision_at_100
value: 1.13
- type: precision_at_1000
value: 0.16
- type: precision_at_3
value: 15.562999999999999
- type: precision_at_5
value: 11.019
- type: recall_at_1
value: 22.528000000000002
- type: recall_at_10
value: 43.748
- type: recall_at_100
value: 64.235
- type: recall_at_1000
value: 78.609
- type: recall_at_3
value: 33.937
- type: recall_at_5
value: 38.234
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackGamingRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 33.117999999999995
- type: map_at_10
value: 44.339
- type: map_at_100
value: 45.367000000000004
- type: map_at_1000
value: 45.437
- type: map_at_3
value: 41.195
- type: map_at_5
value: 42.922
- type: mrr_at_1
value: 38.37
- type: mrr_at_10
value: 47.786
- type: mrr_at_100
value: 48.522
- type: mrr_at_1000
value: 48.567
- type: mrr_at_3
value: 45.371
- type: mrr_at_5
value: 46.857
- type: ndcg_at_1
value: 38.37
- type: ndcg_at_10
value: 50.019999999999996
- type: ndcg_at_100
value: 54.36299999999999
- type: ndcg_at_1000
value: 55.897
- type: ndcg_at_3
value: 44.733000000000004
- type: ndcg_at_5
value: 47.292
- type: precision_at_1
value: 38.37
- type: precision_at_10
value: 8.288
- type: precision_at_100
value: 1.139
- type: precision_at_1000
value: 0.132
- type: precision_at_3
value: 20.293
- type: precision_at_5
value: 14.107
- type: recall_at_1
value: 33.117999999999995
- type: recall_at_10
value: 63.451
- type: recall_at_100
value: 82.767
- type: recall_at_1000
value: 93.786
- type: recall_at_3
value: 48.964999999999996
- type: recall_at_5
value: 55.358
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackGisRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 16.028000000000002
- type: map_at_10
value: 23.186999999999998
- type: map_at_100
value: 24.236
- type: map_at_1000
value: 24.337
- type: map_at_3
value: 20.816000000000003
- type: map_at_5
value: 22.311
- type: mrr_at_1
value: 17.514
- type: mrr_at_10
value: 24.84
- type: mrr_at_100
value: 25.838
- type: mrr_at_1000
value: 25.924999999999997
- type: mrr_at_3
value: 22.542
- type: mrr_at_5
value: 24.04
- type: ndcg_at_1
value: 17.514
- type: ndcg_at_10
value: 27.391
- type: ndcg_at_100
value: 32.684999999999995
- type: ndcg_at_1000
value: 35.367
- type: ndcg_at_3
value: 22.820999999999998
- type: ndcg_at_5
value: 25.380999999999997
- type: precision_at_1
value: 17.514
- type: precision_at_10
value: 4.463
- type: precision_at_100
value: 0.745
- type: precision_at_1000
value: 0.101
- type: precision_at_3
value: 10.019
- type: precision_at_5
value: 7.457999999999999
- type: recall_at_1
value: 16.028000000000002
- type: recall_at_10
value: 38.81
- type: recall_at_100
value: 63.295
- type: recall_at_1000
value: 83.762
- type: recall_at_3
value: 26.604
- type: recall_at_5
value: 32.727000000000004
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackMathematicaRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 11.962
- type: map_at_10
value: 17.218
- type: map_at_100
value: 18.321
- type: map_at_1000
value: 18.455
- type: map_at_3
value: 15.287999999999998
- type: map_at_5
value: 16.417
- type: mrr_at_1
value: 14.677000000000001
- type: mrr_at_10
value: 20.381
- type: mrr_at_100
value: 21.471999999999998
- type: mrr_at_1000
value: 21.566
- type: mrr_at_3
value: 18.448999999999998
- type: mrr_at_5
value: 19.587
- type: ndcg_at_1
value: 14.677000000000001
- type: ndcg_at_10
value: 20.86
- type: ndcg_at_100
value: 26.519
- type: ndcg_at_1000
value: 30.020000000000003
- type: ndcg_at_3
value: 17.208000000000002
- type: ndcg_at_5
value: 19.037000000000003
- type: precision_at_1
value: 14.677000000000001
- type: precision_at_10
value: 3.856
- type: precision_at_100
value: 0.7889999999999999
- type: precision_at_1000
value: 0.124
- type: precision_at_3
value: 8.043
- type: precision_at_5
value: 6.069999999999999
- type: recall_at_1
value: 11.962
- type: recall_at_10
value: 28.994999999999997
- type: recall_at_100
value: 54.071999999999996
- type: recall_at_1000
value: 79.309
- type: recall_at_3
value: 19.134999999999998
- type: recall_at_5
value: 23.727999999999998
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackPhysicsRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 22.764
- type: map_at_10
value: 31.744
- type: map_at_100
value: 33.037
- type: map_at_1000
value: 33.156
- type: map_at_3
value: 29.015
- type: map_at_5
value: 30.434
- type: mrr_at_1
value: 28.296
- type: mrr_at_10
value: 37.03
- type: mrr_at_100
value: 37.902
- type: mrr_at_1000
value: 37.966
- type: mrr_at_3
value: 34.568
- type: mrr_at_5
value: 35.786
- type: ndcg_at_1
value: 28.296
- type: ndcg_at_10
value: 37.289
- type: ndcg_at_100
value: 42.787
- type: ndcg_at_1000
value: 45.382
- type: ndcg_at_3
value: 32.598
- type: ndcg_at_5
value: 34.521
- type: precision_at_1
value: 28.296
- type: precision_at_10
value: 6.901
- type: precision_at_100
value: 1.135
- type: precision_at_1000
value: 0.152
- type: precision_at_3
value: 15.367
- type: precision_at_5
value: 11.03
- type: recall_at_1
value: 22.764
- type: recall_at_10
value: 48.807
- type: recall_at_100
value: 71.859
- type: recall_at_1000
value: 89.606
- type: recall_at_3
value: 35.594
- type: recall_at_5
value: 40.541
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackProgrammersRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 19.742
- type: map_at_10
value: 27.741
- type: map_at_100
value: 29.323
- type: map_at_1000
value: 29.438
- type: map_at_3
value: 25.217
- type: map_at_5
value: 26.583000000000002
- type: mrr_at_1
value: 24.657999999999998
- type: mrr_at_10
value: 32.407000000000004
- type: mrr_at_100
value: 33.631
- type: mrr_at_1000
value: 33.686
- type: mrr_at_3
value: 30.194
- type: mrr_at_5
value: 31.444
- type: ndcg_at_1
value: 24.657999999999998
- type: ndcg_at_10
value: 32.614
- type: ndcg_at_100
value: 39.61
- type: ndcg_at_1000
value: 42.114000000000004
- type: ndcg_at_3
value: 28.516000000000002
- type: ndcg_at_5
value: 30.274
- type: precision_at_1
value: 24.657999999999998
- type: precision_at_10
value: 6.176
- type: precision_at_100
value: 1.1400000000000001
- type: precision_at_1000
value: 0.155
- type: precision_at_3
value: 13.927
- type: precision_at_5
value: 9.954
- type: recall_at_1
value: 19.742
- type: recall_at_10
value: 42.427
- type: recall_at_100
value: 72.687
- type: recall_at_1000
value: 89.89
- type: recall_at_3
value: 30.781
- type: recall_at_5
value: 35.606
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 19.72608333333333
- type: map_at_10
value: 27.165333333333336
- type: map_at_100
value: 28.292499999999997
- type: map_at_1000
value: 28.416333333333327
- type: map_at_3
value: 24.783833333333334
- type: map_at_5
value: 26.101750000000003
- type: mrr_at_1
value: 23.721500000000002
- type: mrr_at_10
value: 30.853333333333328
- type: mrr_at_100
value: 31.741750000000003
- type: mrr_at_1000
value: 31.812999999999995
- type: mrr_at_3
value: 28.732249999999997
- type: mrr_at_5
value: 29.945166666666665
- type: ndcg_at_1
value: 23.721500000000002
- type: ndcg_at_10
value: 31.74883333333333
- type: ndcg_at_100
value: 36.883583333333334
- type: ndcg_at_1000
value: 39.6145
- type: ndcg_at_3
value: 27.639583333333334
- type: ndcg_at_5
value: 29.543666666666667
- type: precision_at_1
value: 23.721500000000002
- type: precision_at_10
value: 5.709083333333333
- type: precision_at_100
value: 0.9859166666666666
- type: precision_at_1000
value: 0.1413333333333333
- type: precision_at_3
value: 12.85683333333333
- type: precision_at_5
value: 9.258166666666668
- type: recall_at_1
value: 19.72608333333333
- type: recall_at_10
value: 41.73583333333334
- type: recall_at_100
value: 64.66566666666668
- type: recall_at_1000
value: 84.09833333333336
- type: recall_at_3
value: 30.223083333333328
- type: recall_at_5
value: 35.153083333333335
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackStatsRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 17.582
- type: map_at_10
value: 22.803
- type: map_at_100
value: 23.503
- type: map_at_1000
value: 23.599999999999998
- type: map_at_3
value: 21.375
- type: map_at_5
value: 22.052
- type: mrr_at_1
value: 20.399
- type: mrr_at_10
value: 25.369999999999997
- type: mrr_at_100
value: 26.016000000000002
- type: mrr_at_1000
value: 26.090999999999998
- type: mrr_at_3
value: 23.952
- type: mrr_at_5
value: 24.619
- type: ndcg_at_1
value: 20.399
- type: ndcg_at_10
value: 25.964
- type: ndcg_at_100
value: 29.607
- type: ndcg_at_1000
value: 32.349
- type: ndcg_at_3
value: 23.177
- type: ndcg_at_5
value: 24.276
- type: precision_at_1
value: 20.399
- type: precision_at_10
value: 4.018
- type: precision_at_100
value: 0.629
- type: precision_at_1000
value: 0.093
- type: precision_at_3
value: 9.969
- type: precision_at_5
value: 6.748
- type: recall_at_1
value: 17.582
- type: recall_at_10
value: 33.35
- type: recall_at_100
value: 50.219
- type: recall_at_1000
value: 71.06099999999999
- type: recall_at_3
value: 25.619999999999997
- type: recall_at_5
value: 28.291
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackTexRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 11.071
- type: map_at_10
value: 16.201999999999998
- type: map_at_100
value: 17.112
- type: map_at_1000
value: 17.238
- type: map_at_3
value: 14.508
- type: map_at_5
value: 15.440999999999999
- type: mrr_at_1
value: 13.833
- type: mrr_at_10
value: 19.235
- type: mrr_at_100
value: 20.108999999999998
- type: mrr_at_1000
value: 20.196
- type: mrr_at_3
value: 17.515
- type: mrr_at_5
value: 18.505
- type: ndcg_at_1
value: 13.833
- type: ndcg_at_10
value: 19.643
- type: ndcg_at_100
value: 24.298000000000002
- type: ndcg_at_1000
value: 27.614
- type: ndcg_at_3
value: 16.528000000000002
- type: ndcg_at_5
value: 17.991
- type: precision_at_1
value: 13.833
- type: precision_at_10
value: 3.6990000000000003
- type: precision_at_100
value: 0.713
- type: precision_at_1000
value: 0.116
- type: precision_at_3
value: 7.9030000000000005
- type: precision_at_5
value: 5.891
- type: recall_at_1
value: 11.071
- type: recall_at_10
value: 27.019
- type: recall_at_100
value: 48.404
- type: recall_at_1000
value: 72.641
- type: recall_at_3
value: 18.336
- type: recall_at_5
value: 21.991
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackUnixRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 18.573
- type: map_at_10
value: 25.008999999999997
- type: map_at_100
value: 26.015
- type: map_at_1000
value: 26.137
- type: map_at_3
value: 22.798
- type: map_at_5
value: 24.092
- type: mrr_at_1
value: 22.108
- type: mrr_at_10
value: 28.646
- type: mrr_at_100
value: 29.477999999999998
- type: mrr_at_1000
value: 29.57
- type: mrr_at_3
value: 26.415
- type: mrr_at_5
value: 27.693
- type: ndcg_at_1
value: 22.108
- type: ndcg_at_10
value: 29.42
- type: ndcg_at_100
value: 34.385
- type: ndcg_at_1000
value: 37.572
- type: ndcg_at_3
value: 25.274
- type: ndcg_at_5
value: 27.315
- type: precision_at_1
value: 22.108
- type: precision_at_10
value: 5.093
- type: precision_at_100
value: 0.859
- type: precision_at_1000
value: 0.124
- type: precision_at_3
value: 11.474
- type: precision_at_5
value: 8.321000000000002
- type: recall_at_1
value: 18.573
- type: recall_at_10
value: 39.433
- type: recall_at_100
value: 61.597
- type: recall_at_1000
value: 84.69
- type: recall_at_3
value: 27.849
- type: recall_at_5
value: 33.202999999999996
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackWebmastersRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 22.807
- type: map_at_10
value: 30.014000000000003
- type: map_at_100
value: 31.422
- type: map_at_1000
value: 31.652
- type: map_at_3
value: 27.447
- type: map_at_5
value: 28.711
- type: mrr_at_1
value: 27.668
- type: mrr_at_10
value: 34.489
- type: mrr_at_100
value: 35.453
- type: mrr_at_1000
value: 35.526
- type: mrr_at_3
value: 32.477000000000004
- type: mrr_at_5
value: 33.603
- type: ndcg_at_1
value: 27.668
- type: ndcg_at_10
value: 34.983
- type: ndcg_at_100
value: 40.535
- type: ndcg_at_1000
value: 43.747
- type: ndcg_at_3
value: 31.026999999999997
- type: ndcg_at_5
value: 32.608
- type: precision_at_1
value: 27.668
- type: precision_at_10
value: 6.837999999999999
- type: precision_at_100
value: 1.411
- type: precision_at_1000
value: 0.23600000000000002
- type: precision_at_3
value: 14.295
- type: precision_at_5
value: 10.435
- type: recall_at_1
value: 22.807
- type: recall_at_10
value: 43.545
- type: recall_at_100
value: 69.39800000000001
- type: recall_at_1000
value: 90.706
- type: recall_at_3
value: 32.183
- type: recall_at_5
value: 36.563
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackWordpressRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 13.943
- type: map_at_10
value: 20.419999999999998
- type: map_at_100
value: 21.335
- type: map_at_1000
value: 21.44
- type: map_at_3
value: 17.865000000000002
- type: map_at_5
value: 19.36
- type: mrr_at_1
value: 15.712000000000002
- type: mrr_at_10
value: 22.345000000000002
- type: mrr_at_100
value: 23.227999999999998
- type: mrr_at_1000
value: 23.304
- type: mrr_at_3
value: 19.901
- type: mrr_at_5
value: 21.325
- type: ndcg_at_1
value: 15.712000000000002
- type: ndcg_at_10
value: 24.801000000000002
- type: ndcg_at_100
value: 29.799
- type: ndcg_at_1000
value: 32.513999999999996
- type: ndcg_at_3
value: 19.750999999999998
- type: ndcg_at_5
value: 22.252
- type: precision_at_1
value: 15.712000000000002
- type: precision_at_10
value: 4.1770000000000005
- type: precision_at_100
value: 0.738
- type: precision_at_1000
value: 0.106
- type: precision_at_3
value: 8.688
- type: precision_at_5
value: 6.617000000000001
- type: recall_at_1
value: 13.943
- type: recall_at_10
value: 36.913000000000004
- type: recall_at_100
value: 60.519
- type: recall_at_1000
value: 81.206
- type: recall_at_3
value: 23.006999999999998
- type: recall_at_5
value: 29.082
- task:
type: Retrieval
dataset:
type: climate-fever
name: MTEB ClimateFEVER
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 9.468
- type: map_at_10
value: 16.029
- type: map_at_100
value: 17.693
- type: map_at_1000
value: 17.886
- type: map_at_3
value: 13.15
- type: map_at_5
value: 14.568
- type: mrr_at_1
value: 21.173000000000002
- type: mrr_at_10
value: 31.028
- type: mrr_at_100
value: 32.061
- type: mrr_at_1000
value: 32.119
- type: mrr_at_3
value: 27.534999999999997
- type: mrr_at_5
value: 29.431
- type: ndcg_at_1
value: 21.173000000000002
- type: ndcg_at_10
value: 23.224
- type: ndcg_at_100
value: 30.225
- type: ndcg_at_1000
value: 33.961000000000006
- type: ndcg_at_3
value: 18.174
- type: ndcg_at_5
value: 19.897000000000002
- type: precision_at_1
value: 21.173000000000002
- type: precision_at_10
value: 7.4719999999999995
- type: precision_at_100
value: 1.5010000000000001
- type: precision_at_1000
value: 0.219
- type: precision_at_3
value: 13.312
- type: precision_at_5
value: 10.619
- type: recall_at_1
value: 9.468
- type: recall_at_10
value: 28.823
- type: recall_at_100
value: 53.26499999999999
- type: recall_at_1000
value: 74.536
- type: recall_at_3
value: 16.672
- type: recall_at_5
value: 21.302
- task:
type: Retrieval
dataset:
type: dbpedia-entity
name: MTEB DBPedia
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 6.343
- type: map_at_10
value: 12.717
- type: map_at_100
value: 16.48
- type: map_at_1000
value: 17.381
- type: map_at_3
value: 9.568999999999999
- type: map_at_5
value: 11.125
- type: mrr_at_1
value: 48.75
- type: mrr_at_10
value: 58.425000000000004
- type: mrr_at_100
value: 59.075
- type: mrr_at_1000
value: 59.095
- type: mrr_at_3
value: 56.291999999999994
- type: mrr_at_5
value: 57.679
- type: ndcg_at_1
value: 37.875
- type: ndcg_at_10
value: 27.77
- type: ndcg_at_100
value: 30.288999999999998
- type: ndcg_at_1000
value: 36.187999999999995
- type: ndcg_at_3
value: 31.385999999999996
- type: ndcg_at_5
value: 29.923
- type: precision_at_1
value: 48.75
- type: precision_at_10
value: 22.375
- type: precision_at_100
value: 6.3420000000000005
- type: precision_at_1000
value: 1.4489999999999998
- type: precision_at_3
value: 35.5
- type: precision_at_5
value: 30.55
- type: recall_at_1
value: 6.343
- type: recall_at_10
value: 16.936
- type: recall_at_100
value: 35.955999999999996
- type: recall_at_1000
value: 55.787
- type: recall_at_3
value: 10.771
- type: recall_at_5
value: 13.669999999999998
- task:
type: Classification
dataset:
type: mteb/emotion
name: MTEB EmotionClassification
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 41.99
- type: f1
value: 36.823402174564954
- task:
type: Retrieval
dataset:
type: fever
name: MTEB FEVER
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 40.088
- type: map_at_10
value: 52.69200000000001
- type: map_at_100
value: 53.296
- type: map_at_1000
value: 53.325
- type: map_at_3
value: 49.905
- type: map_at_5
value: 51.617000000000004
- type: mrr_at_1
value: 43.009
- type: mrr_at_10
value: 56.203
- type: mrr_at_100
value: 56.75
- type: mrr_at_1000
value: 56.769000000000005
- type: mrr_at_3
value: 53.400000000000006
- type: mrr_at_5
value: 55.163
- type: ndcg_at_1
value: 43.009
- type: ndcg_at_10
value: 59.39
- type: ndcg_at_100
value: 62.129999999999995
- type: ndcg_at_1000
value: 62.793
- type: ndcg_at_3
value: 53.878
- type: ndcg_at_5
value: 56.887
- type: precision_at_1
value: 43.009
- type: precision_at_10
value: 8.366
- type: precision_at_100
value: 0.983
- type: precision_at_1000
value: 0.105
- type: precision_at_3
value: 22.377
- type: precision_at_5
value: 15.035000000000002
- type: recall_at_1
value: 40.088
- type: recall_at_10
value: 76.68700000000001
- type: recall_at_100
value: 88.91
- type: recall_at_1000
value: 93.782
- type: recall_at_3
value: 61.809999999999995
- type: recall_at_5
value: 69.131
- task:
type: Retrieval
dataset:
type: fiqa
name: MTEB FiQA2018
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 10.817
- type: map_at_10
value: 18.9
- type: map_at_100
value: 20.448
- type: map_at_1000
value: 20.660999999999998
- type: map_at_3
value: 15.979
- type: map_at_5
value: 17.415
- type: mrr_at_1
value: 23.148
- type: mrr_at_10
value: 31.208000000000002
- type: mrr_at_100
value: 32.167
- type: mrr_at_1000
value: 32.242
- type: mrr_at_3
value: 28.498
- type: mrr_at_5
value: 29.964000000000002
- type: ndcg_at_1
value: 23.148
- type: ndcg_at_10
value: 25.325999999999997
- type: ndcg_at_100
value: 31.927
- type: ndcg_at_1000
value: 36.081
- type: ndcg_at_3
value: 21.647
- type: ndcg_at_5
value: 22.762999999999998
- type: precision_at_1
value: 23.148
- type: precision_at_10
value: 7.546
- type: precision_at_100
value: 1.415
- type: precision_at_1000
value: 0.216
- type: precision_at_3
value: 14.969
- type: precision_at_5
value: 11.327
- type: recall_at_1
value: 10.817
- type: recall_at_10
value: 32.164
- type: recall_at_100
value: 57.655
- type: recall_at_1000
value: 82.797
- type: recall_at_3
value: 19.709
- type: recall_at_5
value: 24.333
- task:
type: Retrieval
dataset:
type: hotpotqa
name: MTEB HotpotQA
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 25.380999999999997
- type: map_at_10
value: 33.14
- type: map_at_100
value: 33.948
- type: map_at_1000
value: 34.028000000000006
- type: map_at_3
value: 31.019999999999996
- type: map_at_5
value: 32.23
- type: mrr_at_1
value: 50.763000000000005
- type: mrr_at_10
value: 57.899
- type: mrr_at_100
value: 58.426
- type: mrr_at_1000
value: 58.457
- type: mrr_at_3
value: 56.093
- type: mrr_at_5
value: 57.116
- type: ndcg_at_1
value: 50.763000000000005
- type: ndcg_at_10
value: 41.656
- type: ndcg_at_100
value: 45.079
- type: ndcg_at_1000
value: 46.916999999999994
- type: ndcg_at_3
value: 37.834
- type: ndcg_at_5
value: 39.732
- type: precision_at_1
value: 50.763000000000005
- type: precision_at_10
value: 8.648
- type: precision_at_100
value: 1.135
- type: precision_at_1000
value: 0.13799999999999998
- type: precision_at_3
value: 23.105999999999998
- type: precision_at_5
value: 15.363
- type: recall_at_1
value: 25.380999999999997
- type: recall_at_10
value: 43.241
- type: recall_at_100
value: 56.745000000000005
- type: recall_at_1000
value: 69.048
- type: recall_at_3
value: 34.659
- type: recall_at_5
value: 38.406
- task:
type: Classification
dataset:
type: mteb/imdb
name: MTEB ImdbClassification
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 79.544
- type: ap
value: 73.82920133396664
- type: f1
value: 79.51048124883265
- task:
type: Retrieval
dataset:
type: msmarco
name: MTEB MSMARCO
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 11.174000000000001
- type: map_at_10
value: 19.451999999999998
- type: map_at_100
value: 20.612
- type: map_at_1000
value: 20.703
- type: map_at_3
value: 16.444
- type: map_at_5
value: 18.083
- type: mrr_at_1
value: 11.447000000000001
- type: mrr_at_10
value: 19.808
- type: mrr_at_100
value: 20.958
- type: mrr_at_1000
value: 21.041999999999998
- type: mrr_at_3
value: 16.791
- type: mrr_at_5
value: 18.459
- type: ndcg_at_1
value: 11.447000000000001
- type: ndcg_at_10
value: 24.556
- type: ndcg_at_100
value: 30.637999999999998
- type: ndcg_at_1000
value: 33.14
- type: ndcg_at_3
value: 18.325
- type: ndcg_at_5
value: 21.278
- type: precision_at_1
value: 11.447000000000001
- type: precision_at_10
value: 4.215
- type: precision_at_100
value: 0.732
- type: precision_at_1000
value: 0.095
- type: precision_at_3
value: 8.052
- type: precision_at_5
value: 6.318
- type: recall_at_1
value: 11.174000000000001
- type: recall_at_10
value: 40.543
- type: recall_at_100
value: 69.699
- type: recall_at_1000
value: 89.403
- type: recall_at_3
value: 23.442
- type: recall_at_5
value: 30.536
- task:
type: Classification
dataset:
type: mteb/mtop_domain
name: MTEB MTOPDomainClassification (en)
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 89.6671226630187
- type: f1
value: 89.57660424361246
- task:
type: Classification
dataset:
type: mteb/mtop_intent
name: MTEB MTOPIntentClassification (en)
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 60.284997720018254
- type: f1
value: 40.30637400152823
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (en)
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 63.33557498318763
- type: f1
value: 60.24039910680179
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (en)
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 72.37390719569603
- type: f1
value: 72.33097333477316
- task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-p2p
name: MTEB MedrxivClusteringP2P
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 34.68158939060552
- task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-s2s
name: MTEB MedrxivClusteringS2S
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 30.340061711905236
- task:
type: Reranking
dataset:
type: mteb/mind_small
name: MTEB MindSmallReranking
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 32.01814326295803
- type: mrr
value: 33.20555240055367
- task:
type: Retrieval
dataset:
type: nfcorpus
name: MTEB NFCorpus
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 3.3910000000000005
- type: map_at_10
value: 7.7219999999999995
- type: map_at_100
value: 10.286
- type: map_at_1000
value: 11.668000000000001
- type: map_at_3
value: 5.552
- type: map_at_5
value: 6.468
- type: mrr_at_1
value: 34.365
- type: mrr_at_10
value: 42.555
- type: mrr_at_100
value: 43.295
- type: mrr_at_1000
value: 43.357
- type: mrr_at_3
value: 40.299
- type: mrr_at_5
value: 41.182
- type: ndcg_at_1
value: 31.424000000000003
- type: ndcg_at_10
value: 24.758
- type: ndcg_at_100
value: 23.677999999999997
- type: ndcg_at_1000
value: 33.377
- type: ndcg_at_3
value: 28.302
- type: ndcg_at_5
value: 26.342
- type: precision_at_1
value: 33.437
- type: precision_at_10
value: 19.256999999999998
- type: precision_at_100
value: 6.662999999999999
- type: precision_at_1000
value: 1.9900000000000002
- type: precision_at_3
value: 27.761000000000003
- type: precision_at_5
value: 23.715
- type: recall_at_1
value: 3.3910000000000005
- type: recall_at_10
value: 11.068
- type: recall_at_100
value: 25.878
- type: recall_at_1000
value: 60.19
- type: recall_at_3
value: 6.1690000000000005
- type: recall_at_5
value: 7.767
- task:
type: Retrieval
dataset:
type: nq
name: MTEB NQ
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 15.168000000000001
- type: map_at_10
value: 26.177
- type: map_at_100
value: 27.564
- type: map_at_1000
value: 27.628999999999998
- type: map_at_3
value: 22.03
- type: map_at_5
value: 24.276
- type: mrr_at_1
value: 17.439
- type: mrr_at_10
value: 28.205000000000002
- type: mrr_at_100
value: 29.357
- type: mrr_at_1000
value: 29.408
- type: mrr_at_3
value: 24.377
- type: mrr_at_5
value: 26.540000000000003
- type: ndcg_at_1
value: 17.41
- type: ndcg_at_10
value: 32.936
- type: ndcg_at_100
value: 39.196999999999996
- type: ndcg_at_1000
value: 40.892
- type: ndcg_at_3
value: 24.721
- type: ndcg_at_5
value: 28.615000000000002
- type: precision_at_1
value: 17.41
- type: precision_at_10
value: 6.199000000000001
- type: precision_at_100
value: 0.9690000000000001
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 11.790000000000001
- type: precision_at_5
value: 9.264
- type: recall_at_1
value: 15.168000000000001
- type: recall_at_10
value: 51.914
- type: recall_at_100
value: 79.804
- type: recall_at_1000
value: 92.75999999999999
- type: recall_at_3
value: 30.212
- type: recall_at_5
value: 39.204
- task:
type: Retrieval
dataset:
type: quora
name: MTEB QuoraRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 67.306
- type: map_at_10
value: 80.634
- type: map_at_100
value: 81.349
- type: map_at_1000
value: 81.37299999999999
- type: map_at_3
value: 77.691
- type: map_at_5
value: 79.512
- type: mrr_at_1
value: 77.56
- type: mrr_at_10
value: 84.177
- type: mrr_at_100
value: 84.35000000000001
- type: mrr_at_1000
value: 84.353
- type: mrr_at_3
value: 83.003
- type: mrr_at_5
value: 83.799
- type: ndcg_at_1
value: 77.58
- type: ndcg_at_10
value: 84.782
- type: ndcg_at_100
value: 86.443
- type: ndcg_at_1000
value: 86.654
- type: ndcg_at_3
value: 81.67
- type: ndcg_at_5
value: 83.356
- type: precision_at_1
value: 77.58
- type: precision_at_10
value: 12.875
- type: precision_at_100
value: 1.503
- type: precision_at_1000
value: 0.156
- type: precision_at_3
value: 35.63
- type: precision_at_5
value: 23.483999999999998
- type: recall_at_1
value: 67.306
- type: recall_at_10
value: 92.64
- type: recall_at_100
value: 98.681
- type: recall_at_1000
value: 99.79
- type: recall_at_3
value: 83.682
- type: recall_at_5
value: 88.424
- task:
type: Clustering
dataset:
type: mteb/reddit-clustering
name: MTEB RedditClustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 50.76319866126382
- task:
type: Clustering
dataset:
type: mteb/reddit-clustering-p2p
name: MTEB RedditClusteringP2P
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 55.024711941648995
- task:
type: Retrieval
dataset:
type: scidocs
name: MTEB SCIDOCS
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 3.9379999999999997
- type: map_at_10
value: 8.817
- type: map_at_100
value: 10.546999999999999
- type: map_at_1000
value: 10.852
- type: map_at_3
value: 6.351999999999999
- type: map_at_5
value: 7.453
- type: mrr_at_1
value: 19.400000000000002
- type: mrr_at_10
value: 27.371000000000002
- type: mrr_at_100
value: 28.671999999999997
- type: mrr_at_1000
value: 28.747
- type: mrr_at_3
value: 24.583
- type: mrr_at_5
value: 26.143
- type: ndcg_at_1
value: 19.400000000000002
- type: ndcg_at_10
value: 15.264
- type: ndcg_at_100
value: 22.63
- type: ndcg_at_1000
value: 28.559
- type: ndcg_at_3
value: 14.424999999999999
- type: ndcg_at_5
value: 12.520000000000001
- type: precision_at_1
value: 19.400000000000002
- type: precision_at_10
value: 7.8100000000000005
- type: precision_at_100
value: 1.854
- type: precision_at_1000
value: 0.329
- type: precision_at_3
value: 13.100000000000001
- type: precision_at_5
value: 10.68
- type: recall_at_1
value: 3.9379999999999997
- type: recall_at_10
value: 15.903
- type: recall_at_100
value: 37.645
- type: recall_at_1000
value: 66.86
- type: recall_at_3
value: 7.993
- type: recall_at_5
value: 10.885
- task:
type: STS
dataset:
type: mteb/sickr-sts
name: MTEB SICK-R
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 80.12689060151425
- type: cos_sim_spearman
value: 70.46515535094771
- type: euclidean_pearson
value: 77.17160003557223
- type: euclidean_spearman
value: 70.4651757047438
- type: manhattan_pearson
value: 77.18129609281937
- type: manhattan_spearman
value: 70.46610403752913
- task:
type: STS
dataset:
type: mteb/sts12-sts
name: MTEB STS12
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 70.451157033355
- type: cos_sim_spearman
value: 63.99899601697852
- type: euclidean_pearson
value: 67.46985359967678
- type: euclidean_spearman
value: 64.00001637764805
- type: manhattan_pearson
value: 67.56534741780037
- type: manhattan_spearman
value: 64.06533893575366
- task:
type: STS
dataset:
type: mteb/sts13-sts
name: MTEB STS13
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 77.65086614464292
- type: cos_sim_spearman
value: 78.20169706921848
- type: euclidean_pearson
value: 77.77758172155283
- type: euclidean_spearman
value: 78.20169706921848
- type: manhattan_pearson
value: 77.75077884860052
- type: manhattan_spearman
value: 78.16875216484164
- task:
type: STS
dataset:
type: mteb/sts14-sts
name: MTEB STS14
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 76.26381598259717
- type: cos_sim_spearman
value: 70.78377709313477
- type: euclidean_pearson
value: 74.82646556532096
- type: euclidean_spearman
value: 70.78377658155212
- type: manhattan_pearson
value: 74.81784766108225
- type: manhattan_spearman
value: 70.79351454692176
- task:
type: STS
dataset:
type: mteb/sts15-sts
name: MTEB STS15
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 79.00532026789739
- type: cos_sim_spearman
value: 80.02708383244838
- type: euclidean_pearson
value: 79.48345422610525
- type: euclidean_spearman
value: 80.02708383244838
- type: manhattan_pearson
value: 79.44519739854803
- type: manhattan_spearman
value: 79.98344094559687
- task:
type: STS
dataset:
type: mteb/sts16-sts
name: MTEB STS16
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 77.32783048164805
- type: cos_sim_spearman
value: 78.79729961288045
- type: euclidean_pearson
value: 78.72111945793154
- type: euclidean_spearman
value: 78.79729904606872
- type: manhattan_pearson
value: 78.72464311117116
- type: manhattan_spearman
value: 78.822591248334
- task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (en-en)
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 82.04318630630854
- type: cos_sim_spearman
value: 83.87886389259836
- type: euclidean_pearson
value: 83.40385877895086
- type: euclidean_spearman
value: 83.87886389259836
- type: manhattan_pearson
value: 83.46337128901547
- type: manhattan_spearman
value: 83.9723106941644
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (en)
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 63.003511169944595
- type: cos_sim_spearman
value: 64.39318805580227
- type: euclidean_pearson
value: 65.4797990735967
- type: euclidean_spearman
value: 64.39318805580227
- type: manhattan_pearson
value: 65.44604544280844
- type: manhattan_spearman
value: 64.38742899984233
- task:
type: STS
dataset:
type: mteb/stsbenchmark-sts
name: MTEB STSBenchmark
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 76.63101237585029
- type: cos_sim_spearman
value: 75.57446967644269
- type: euclidean_pearson
value: 76.93491768734478
- type: euclidean_spearman
value: 75.57446967644269
- type: manhattan_pearson
value: 76.92187567800636
- type: manhattan_spearman
value: 75.57239337194585
- task:
type: Reranking
dataset:
type: mteb/scidocs-reranking
name: MTEB SciDocsRR
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 78.5376604868993
- type: mrr
value: 92.94422897364073
- task:
type: Retrieval
dataset:
type: scifact
name: MTEB SciFact
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 38.872
- type: map_at_10
value: 50.417
- type: map_at_100
value: 51.202000000000005
- type: map_at_1000
value: 51.25999999999999
- type: map_at_3
value: 47.02
- type: map_at_5
value: 49.326
- type: mrr_at_1
value: 41.0
- type: mrr_at_10
value: 51.674
- type: mrr_at_100
value: 52.32599999999999
- type: mrr_at_1000
value: 52.376999999999995
- type: mrr_at_3
value: 48.778
- type: mrr_at_5
value: 50.744
- type: ndcg_at_1
value: 41.0
- type: ndcg_at_10
value: 56.027
- type: ndcg_at_100
value: 59.362
- type: ndcg_at_1000
value: 60.839
- type: ndcg_at_3
value: 50.019999999999996
- type: ndcg_at_5
value: 53.644999999999996
- type: precision_at_1
value: 41.0
- type: precision_at_10
value: 8.1
- type: precision_at_100
value: 0.987
- type: precision_at_1000
value: 0.11100000000000002
- type: precision_at_3
value: 20.444000000000003
- type: precision_at_5
value: 14.466999999999999
- type: recall_at_1
value: 38.872
- type: recall_at_10
value: 71.906
- type: recall_at_100
value: 86.367
- type: recall_at_1000
value: 98.0
- type: recall_at_3
value: 56.206
- type: recall_at_5
value: 65.05
- task:
type: PairClassification
dataset:
type: mteb/sprintduplicatequestions-pairclassification
name: MTEB SprintDuplicateQuestions
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.7039603960396
- type: cos_sim_ap
value: 90.40809844250262
- type: cos_sim_f1
value: 84.53181583031557
- type: cos_sim_precision
value: 87.56698821007502
- type: cos_sim_recall
value: 81.69999999999999
- type: dot_accuracy
value: 99.7039603960396
- type: dot_ap
value: 90.40809844250262
- type: dot_f1
value: 84.53181583031557
- type: dot_precision
value: 87.56698821007502
- type: dot_recall
value: 81.69999999999999
- type: euclidean_accuracy
value: 99.7039603960396
- type: euclidean_ap
value: 90.4080982863383
- type: euclidean_f1
value: 84.53181583031557
- type: euclidean_precision
value: 87.56698821007502
- type: euclidean_recall
value: 81.69999999999999
- type: manhattan_accuracy
value: 99.7
- type: manhattan_ap
value: 90.39771161966652
- type: manhattan_f1
value: 84.32989690721648
- type: manhattan_precision
value: 87.02127659574468
- type: manhattan_recall
value: 81.8
- type: max_accuracy
value: 99.7039603960396
- type: max_ap
value: 90.40809844250262
- type: max_f1
value: 84.53181583031557
- task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering
name: MTEB StackExchangeClustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 59.663210666678715
- task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering-p2p
name: MTEB StackExchangeClusteringP2P
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 32.107791216468776
- task:
type: Reranking
dataset:
type: mteb/stackoverflowdupquestions-reranking
name: MTEB StackOverflowDupQuestions
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 46.440691925067604
- type: mrr
value: 47.03390257618199
- task:
type: Summarization
dataset:
type: mteb/summeval
name: MTEB SummEval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 31.067177519784074
- type: cos_sim_spearman
value: 31.234728424648967
- type: dot_pearson
value: 31.06717083018107
- type: dot_spearman
value: 31.234728424648967
- task:
type: Retrieval
dataset:
type: trec-covid
name: MTEB TRECCOVID
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.136
- type: map_at_10
value: 0.767
- type: map_at_100
value: 3.3689999999999998
- type: map_at_1000
value: 8.613999999999999
- type: map_at_3
value: 0.369
- type: map_at_5
value: 0.514
- type: mrr_at_1
value: 48.0
- type: mrr_at_10
value: 63.908
- type: mrr_at_100
value: 64.615
- type: mrr_at_1000
value: 64.615
- type: mrr_at_3
value: 62.0
- type: mrr_at_5
value: 63.4
- type: ndcg_at_1
value: 44.0
- type: ndcg_at_10
value: 38.579
- type: ndcg_at_100
value: 26.409
- type: ndcg_at_1000
value: 26.858999999999998
- type: ndcg_at_3
value: 47.134
- type: ndcg_at_5
value: 43.287
- type: precision_at_1
value: 48.0
- type: precision_at_10
value: 40.400000000000006
- type: precision_at_100
value: 26.640000000000004
- type: precision_at_1000
value: 12.04
- type: precision_at_3
value: 52.666999999999994
- type: precision_at_5
value: 46.800000000000004
- type: recall_at_1
value: 0.136
- type: recall_at_10
value: 1.0070000000000001
- type: recall_at_100
value: 6.318
- type: recall_at_1000
value: 26.522000000000002
- type: recall_at_3
value: 0.41700000000000004
- type: recall_at_5
value: 0.606
- task:
type: Retrieval
dataset:
type: webis-touche2020
name: MTEB Touche2020
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 1.9949999999999999
- type: map_at_10
value: 8.304
- type: map_at_100
value: 13.644
- type: map_at_1000
value: 15.43
- type: map_at_3
value: 4.788
- type: map_at_5
value: 6.22
- type: mrr_at_1
value: 22.448999999999998
- type: mrr_at_10
value: 37.658
- type: mrr_at_100
value: 38.491
- type: mrr_at_1000
value: 38.503
- type: mrr_at_3
value: 32.312999999999995
- type: mrr_at_5
value: 35.68
- type: ndcg_at_1
value: 21.429000000000002
- type: ndcg_at_10
value: 18.995
- type: ndcg_at_100
value: 32.029999999999994
- type: ndcg_at_1000
value: 44.852
- type: ndcg_at_3
value: 19.464000000000002
- type: ndcg_at_5
value: 19.172
- type: precision_at_1
value: 22.448999999999998
- type: precision_at_10
value: 17.143
- type: precision_at_100
value: 6.877999999999999
- type: precision_at_1000
value: 1.524
- type: precision_at_3
value: 21.769
- type: precision_at_5
value: 20.0
- type: recall_at_1
value: 1.9949999999999999
- type: recall_at_10
value: 13.395999999999999
- type: recall_at_100
value: 44.348
- type: recall_at_1000
value: 82.622
- type: recall_at_3
value: 5.896
- type: recall_at_5
value: 8.554
- task:
type: Classification
dataset:
type: mteb/toxic_conversations_50k
name: MTEB ToxicConversationsClassification
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 67.9394
- type: ap
value: 12.943337263423334
- type: f1
value: 52.28243093094156
- task:
type: Classification
dataset:
type: mteb/tweet_sentiment_extraction
name: MTEB TweetSentimentExtractionClassification
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 56.414827391058296
- type: f1
value: 56.666412409573105
- task:
type: Clustering
dataset:
type: mteb/twentynewsgroups-clustering
name: MTEB TwentyNewsgroupsClustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 47.009746255495465
- task:
type: PairClassification
dataset:
type: mteb/twittersemeval2015-pairclassification
name: MTEB TwitterSemEval2015
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 84.02574953805807
- type: cos_sim_ap
value: 67.66599910763128
- type: cos_sim_f1
value: 63.491277990844985
- type: cos_sim_precision
value: 59.77172140694154
- type: cos_sim_recall
value: 67.70448548812665
- type: dot_accuracy
value: 84.02574953805807
- type: dot_ap
value: 67.66600090945406
- type: dot_f1
value: 63.491277990844985
- type: dot_precision
value: 59.77172140694154
- type: dot_recall
value: 67.70448548812665
- type: euclidean_accuracy
value: 84.02574953805807
- type: euclidean_ap
value: 67.6659842364448
- type: euclidean_f1
value: 63.491277990844985
- type: euclidean_precision
value: 59.77172140694154
- type: euclidean_recall
value: 67.70448548812665
- type: manhattan_accuracy
value: 84.0317100792752
- type: manhattan_ap
value: 67.66351692448987
- type: manhattan_f1
value: 63.48610948306178
- type: manhattan_precision
value: 57.11875131828729
- type: manhattan_recall
value: 71.45118733509234
- type: max_accuracy
value: 84.0317100792752
- type: max_ap
value: 67.66600090945406
- type: max_f1
value: 63.491277990844985
- task:
type: PairClassification
dataset:
type: mteb/twitterurlcorpus-pairclassification
name: MTEB TwitterURLCorpus
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 87.53832421314084
- type: cos_sim_ap
value: 83.11416594316626
- type: cos_sim_f1
value: 75.41118114347518
- type: cos_sim_precision
value: 73.12839059674504
- type: cos_sim_recall
value: 77.8410840776101
- type: dot_accuracy
value: 87.53832421314084
- type: dot_ap
value: 83.11416226342155
- type: dot_f1
value: 75.41118114347518
- type: dot_precision
value: 73.12839059674504
- type: dot_recall
value: 77.8410840776101
- type: euclidean_accuracy
value: 87.53832421314084
- type: euclidean_ap
value: 83.11416284455395
- type: euclidean_f1
value: 75.41118114347518
- type: euclidean_precision
value: 73.12839059674504
- type: euclidean_recall
value: 77.8410840776101
- type: manhattan_accuracy
value: 87.49369348391353
- type: manhattan_ap
value: 83.08066812574694
- type: manhattan_f1
value: 75.36561228603892
- type: manhattan_precision
value: 71.9202518363064
- type: manhattan_recall
value: 79.15768401601478
- type: max_accuracy
value: 87.53832421314084
- type: max_ap
value: 83.11416594316626
- type: max_f1
value: 75.41118114347518
---
|
jcolab5/Llama-2-7b-chat-hf-fine-tuned-adapters | jcolab5 | 2023-11-09T12:08:01Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"base_model:adapter:meta-llama/Llama-2-7b-chat-hf",
"region:us"
]
| null | 2023-11-09T12:07:57Z | ---
library_name: peft
base_model: meta-llama/Llama-2-7b-chat-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training (see the sketch after this list):
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
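For reference, the sketch below recreates this config with `transformers`/`bitsandbytes` and attaches the adapters from this repo with `peft`. It is an illustration under the assumptions noted in the comments, not the exact training setup.

```python
# Sketch only: recreate the quantization config listed above and attach the
# LoRA adapters from this repo to the base model. Assumes access to the gated
# meta-llama/Llama-2-7b-chat-hf weights and a CUDA GPU; the prompt and
# generation settings are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # load_in_4bit: True
    bnb_4bit_quant_type="nf4",              # bnb_4bit_quant_type: nf4
    bnb_4bit_use_double_quant=True,         # bnb_4bit_use_double_quant: True
    bnb_4bit_compute_dtype=torch.bfloat16,  # bnb_4bit_compute_dtype: bfloat16
)

base_id = "meta-llama/Llama-2-7b-chat-hf"
adapter_id = "jcolab5/Llama-2-7b-chat-hf-fine-tuned-adapters"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id, quantization_config=bnb_config, device_map="auto"
)
model = PeftModel.from_pretrained(base_model, adapter_id)

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```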
### Framework versions
- PEFT 0.7.0.dev0
|
VanoInvestigations/bertin-gpt-j-6B-es-finetuned-BOE-summary-LoRA | VanoInvestigations | 2023-11-09T12:06:39Z | 4 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:bertin-project/bertin-gpt-j-6B",
"base_model:adapter:bertin-project/bertin-gpt-j-6B",
"region:us"
]
| null | 2023-11-09T12:05:31Z | ---
library_name: peft
base_model: bertin-project/bertin-gpt-j-6B
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.7.0.dev0
|
retmago/modelperson2 | retmago | 2023-11-09T12:03:56Z | 0 | 1 | diffusers | [
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:finetune:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-11-09T11:26:40Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: a photo of retmago people
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - retmago/modelperson2
This is a DreamBooth model derived from runwayml/stable-diffusion-v1-5. The weights were trained on the instance prompt "a photo of retmago people" using [DreamBooth](https://dreambooth.github.io/).
You can find some example images below.
DreamBooth for the text encoder was enabled: False.
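A minimal inference sketch with `diffusers` follows; the step count and guidance scale are illustrative assumptions, and the prompt is the instance prompt this model was trained on.

```python
# Sketch only: load the DreamBooth-tuned pipeline and sample one image using the
# instance prompt from training ("a photo of retmago people").
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "retmago/modelperson2", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    "a photo of retmago people",
    num_inference_steps=50,  # illustrative
    guidance_scale=7.5,      # illustrative
).images[0]
image.save("retmago_sample.png")
```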
|
GuysTrans/bart-base-vn-ehealth-vn-tokenizer | GuysTrans | 2023-11-09T12:02:13Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"generated_from_trainer",
"base_model:GuysTrans/bart-base-vn-ehealth-vn-tokenizer",
"base_model:finetune:GuysTrans/bart-base-vn-ehealth-vn-tokenizer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-11-06T04:22:17Z | ---
license: apache-2.0
base_model: GuysTrans/bart-base-vn-ehealth-vn-tokenizer
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-base-vn-ehealth-vn-tokenizer
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-base-vn-ehealth-vn-tokenizer
This model is a fine-tuned version of [GuysTrans/bart-base-vn-ehealth-vn-tokenizer](https://huggingface.co/GuysTrans/bart-base-vn-ehealth-vn-tokenizer) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8861
- Rouge1: 18.3743
- Rouge2: 8.2872
- Rougel: 15.0815
- Rougelsum: 16.566
- Bleu-1: 0.0006
- Bleu-2: 0.0004
- Bleu-3: 0.0003
- Bleu-4: 0.0002
- Gen Len: 20.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Bleu-1 | Bleu-2 | Bleu-3 | Bleu-4 | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|:------:|:------:|:------:|:------:|:-------:|
| 2.0519 | 1.0 | 21772 | 1.8861 | 18.3743 | 8.2872 | 15.0815 | 16.566 | 0.0006 | 0.0004 | 0.0003 | 0.0002 | 20.0 |
### Framework versions
- Transformers 4.33.0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
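For reference, a minimal sketch of loading this checkpoint for generation with `transformers`; the example input and generation settings are illustrative assumptions rather than values from this training run.

```python
# Sketch only: seq2seq generation with the fine-tuned BART checkpoint.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

repo_id = "GuysTrans/bart-base-vn-ehealth-vn-tokenizer"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSeq2SeqLM.from_pretrained(repo_id)

# Illustrative Vietnamese health question; any input text works here.
text = "Tôi bị đau đầu và sốt nhẹ, tôi nên làm gì?"
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024)
output_ids = model.generate(**inputs, max_new_tokens=64, num_beams=4)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```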
|
OpenBuddy/openbuddy-llemma-34b-v13.2 | OpenBuddy | 2023-11-09T11:52:24Z | 2,263 | 4 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"zh",
"en",
"fr",
"de",
"ja",
"ko",
"it",
"ru",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2023-11-04T05:20:00Z | ---
language:
- zh
- en
- fr
- de
- ja
- ko
- it
- ru
pipeline_tag: text-generation
inference: false
library_name: transformers
license: llama2
---
# OpenBuddy - Open Multilingual Chatbot
GitHub and Usage Guide: [https://github.com/OpenBuddy/OpenBuddy](https://github.com/OpenBuddy/OpenBuddy)
Website and Demo: [https://openbuddy.ai](https://openbuddy.ai)
Evaluation result of this model: [Evaluation.txt](Evaluation.txt)

# Copyright Notice
Base model: https://huggingface.co/EleutherAI/llemma_34b
License: llama2
This model is built upon Meta's LLaMA series of models and is subject to Meta's licensing agreement.
This model is intended for use only by individuals who have obtained approval from Meta and are eligible to download LLaMA.
If you have not obtained approval from Meta, you must visit the https://ai.meta.com/llama/ page, read and agree to the model's licensing agreement, submit an application, and wait for approval from Meta before downloading the model from this page.
## Disclaimer
All OpenBuddy models have inherent limitations and may potentially produce outputs that are erroneous, harmful, offensive, or otherwise undesirable. Users should not use these models in critical or high-stakes situations that may lead to personal injury, property damage, or significant losses. Examples of such scenarios include, but are not limited to, the medical field, controlling software and hardware systems that may cause harm, and making important financial or legal decisions.
OpenBuddy is provided "as-is" without any warranty of any kind, either express or implied, including, but not limited to, the implied warranties of merchantability, fitness for a particular purpose, and non-infringement. In no event shall the authors, contributors, or copyright holders be liable for any claim, damages, or other liabilities, whether in an action of contract, tort, or otherwise, arising from, out of, or in connection with the software or the use or other dealings in the software.
By using OpenBuddy, you agree to these terms and conditions, and acknowledge that you understand the potential risks associated with its use. You also agree to indemnify and hold harmless the authors, contributors, and copyright holders from any claims, damages, or liabilities arising from your use of OpenBuddy.
## Disclaimer (免责声明)
All OpenBuddy models have inherent limitations and may produce erroneous, harmful, offensive, or otherwise undesirable outputs. Users should exercise caution and must not use these models in critical or high-risk scenarios, so as to avoid personal injury, property damage, or major losses. Examples of such scenarios include, but are not limited to, the medical field, control of software and hardware systems that may cause harm, and making important financial or legal decisions.
OpenBuddy is provided "as is" without any express or implied warranty of any kind, including but not limited to the implied warranties of merchantability, fitness for a particular purpose, and non-infringement. In no event shall the authors, contributors, or copyright holders be liable for any claim, damages, or other liability, whether in contract, tort, or otherwise, arising from the software or from the use of or other dealings in the software.
By using OpenBuddy, you agree to these terms and conditions and acknowledge that you understand the potential risks its use may entail. You also agree to indemnify and hold harmless the authors, contributors, and copyright holders from any claims, damages, or liabilities arising from your use of OpenBuddy. |
rathi2023/zephy_finetuned_nvidiaQA_chatbot | rathi2023 | 2023-11-09T11:49:52Z | 0 | 0 | null | [
"generated_from_trainer",
"base_model:TheBloke/zephyr-7B-alpha-GPTQ",
"base_model:finetune:TheBloke/zephyr-7B-alpha-GPTQ",
"license:mit",
"region:us"
]
| null | 2023-11-08T21:31:27Z | ---
license: mit
base_model: TheBloke/zephyr-7B-alpha-GPTQ
tags:
- generated_from_trainer
model-index:
- name: zephy_finetuned_nvidiaQA_chatbot
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# zephy_finetuned_nvidiaQA_chatbot
This model is a fine-tuned version of [TheBloke/zephyr-7B-alpha-GPTQ](https://huggingface.co/TheBloke/zephyr-7B-alpha-GPTQ) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the sketch after this list for the equivalent `TrainingArguments`):
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 5000
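A sketch of the same settings expressed as `transformers.TrainingArguments`; the output directory name is an illustrative assumption, and the Adam betas/epsilon listed above match the Trainer's defaults.

```python
# Sketch only: the hyperparameters above expressed as TrainingArguments.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="zephyr-nvidiaQA-finetune",  # illustrative name
    learning_rate=2e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="cosine",
    max_steps=5000,
    # The Trainer's default optimizer already uses betas=(0.9, 0.999)
    # and epsilon=1e-08, matching the values listed above.
)
```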
### Training results
### Framework versions
- Transformers 4.34.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.6
- Tokenizers 0.14.1
|
TheBloke/Yi-34B-GiftedConvo-merged-GPTQ | TheBloke | 2023-11-09T11:46:18Z | 22 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"dataset:NobodyExistsOnTheInternet/GiftedConvoBeforeEcons",
"base_model:NobodyExistsOnTheInternet/Yi-34B-GiftedConvo-merged",
"base_model:quantized:NobodyExistsOnTheInternet/Yi-34B-GiftedConvo-merged",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"4-bit",
"gptq",
"region:us"
]
| text-generation | 2023-11-08T23:49:35Z | ---
base_model: NobodyExistsOnTheInternet/Yi-34B-GiftedConvo-merged
datasets:
- NobodyExistsOnTheInternet/GiftedConvoBeforeEcons
inference: false
license: mit
model_creator: Nobody.png
model_name: Yi 34B GiftedConvo Llama
model_type: llama
prompt_template: 'USER: {prompt}
ASSISTANT:
'
quantized_by: TheBloke
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Yi 34B GiftedConvo Llama - GPTQ
- Model creator: [Nobody.png](https://huggingface.co/NobodyExistsOnTheInternet)
- Original model: [Yi 34B GiftedConvo Llama](https://huggingface.co/NobodyExistsOnTheInternet/Yi-34B-GiftedConvo-merged)
<!-- description start -->
## Description
This repo contains GPTQ model files for [Nobody.png's Yi 34B GiftedConvo Llama](https://huggingface.co/NobodyExistsOnTheInternet/Yi-34B-GiftedConvo-merged).
Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Yi-34B-GiftedConvo-merged-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Yi-34B-GiftedConvo-merged-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Yi-34B-GiftedConvo-merged-GGUF)
* [Nobody.png's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/NobodyExistsOnTheInternet/Yi-34B-GiftedConvo-merged)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: User-Assistant
```
USER: {prompt}
ASSISTANT:
```
<!-- prompt-template end -->
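A minimal sketch of applying this template and generating from the quantised weights with `transformers` (requires a recent version with GPTQ support via `optimum` and `auto-gptq`); the sampling settings are illustrative.

```python
# Sketch only: format the User-Assistant template and generate from the GPTQ weights.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheBloke/Yi-34B-GiftedConvo-merged-GPTQ"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", revision="main")

prompt = "Tell me about AI"
prompt_template = f"USER: {prompt}\nASSISTANT:"

inputs = tokenizer(prompt_template, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```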
<!-- licensing start -->
## Licensing
The creator of the source model has listed its license as `mit`, and this quantization has therefore used that same license.
As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly.
In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [Nobody.png's Yi 34B GiftedConvo Llama](https://huggingface.co/NobodyExistsOnTheInternet/Yi-34B-GiftedConvo-merged).
<!-- licensing end -->
<!-- README_GPTQ.md-compatible clients start -->
## Known compatible clients / servers
These GPTQ models are known to work in the following inference servers/webuis.
- [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
- [KoboldAI United](https://github.com/henk717/koboldai)
- [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui)
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
This may not be a complete list; if you know of others, please let me know!
<!-- README_GPTQ.md-compatible clients end -->
<!-- README_GPTQ.md-provided-files start -->
## Provided files, and GPTQ parameters
Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements.
Each separate quant is in a different branch. See below for instructions on fetching from different branches.
Most GPTQ files are made with AutoGPTQ. Mistral models are currently made with Transformers.
<details>
<summary>Explanation of GPTQ parameters</summary>
- Bits: The bit size of the quantised model.
- GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value.
- Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now.
- Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy.
- GPTQ dataset: The calibration dataset used during quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ calibration dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s).
- Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences.
- ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama and Mistral models in 4-bit.
</details>
| Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc |
| ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/Yi-34B-GiftedConvo-merged-GPTQ/tree/main) | 4 | None | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-raw-v1) | 4096 | 18.60 GB | Yes | 4-bit, with Act Order. No group size, to lower VRAM requirements. |
| [gptq-4bit-128g-actorder_True](https://huggingface.co/TheBloke/Yi-34B-GiftedConvo-merged-GPTQ/tree/gptq-4bit-128g-actorder_True) | 4 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-raw-v1) | 4096 | 19.25 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. |
| [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/Yi-34B-GiftedConvo-merged-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-raw-v1) | 4096 | 21.21 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. |
| [gptq-3bit-128g-actorder_True](https://huggingface.co/TheBloke/Yi-34B-GiftedConvo-merged-GPTQ/tree/gptq-3bit-128g-actorder_True) | 3 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-raw-v1) | 4096 | 15.03 GB | No | 3-bit, with group size 128g and act-order. Higher quality than 128g-False. |
| [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/Yi-34B-GiftedConvo-merged-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-raw-v1) | 4096 | 35.34 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. |
| [gptq-3bit-32g-actorder_True](https://huggingface.co/TheBloke/Yi-34B-GiftedConvo-merged-GPTQ/tree/gptq-3bit-32g-actorder_True) | 3 | 32 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-raw-v1) | 4096 | 16.90 GB | No | 3-bit, with group size 32g and act-order. Highest quality 3-bit option. |
| [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/Yi-34B-GiftedConvo-merged-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-raw-v1) | 4096 | 36.11 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. |
<!-- README_GPTQ.md-provided-files end -->
<!-- README_GPTQ.md-download-from-branches start -->
## How to download, including from branches
### In text-generation-webui
To download from the `main` branch, enter `TheBloke/Yi-34B-GiftedConvo-merged-GPTQ` in the "Download model" box.
To download from another branch, add `:branchname` to the end of the download name, eg `TheBloke/Yi-34B-GiftedConvo-merged-GPTQ:gptq-4bit-128g-actorder_True`
### From the command line
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
To download the `main` branch to a folder called `Yi-34B-GiftedConvo-merged-GPTQ`:
```shell
mkdir Yi-34B-GiftedConvo-merged-GPTQ
huggingface-cli download TheBloke/Yi-34B-GiftedConvo-merged-GPTQ --local-dir Yi-34B-GiftedConvo-merged-GPTQ --local-dir-use-symlinks False
```
To download from a different branch, add the `--revision` parameter:
```shell
mkdir Yi-34B-GiftedConvo-merged-GPTQ
huggingface-cli download TheBloke/Yi-34B-GiftedConvo-merged-GPTQ --revision gptq-4bit-128g-actorder_True --local-dir Yi-34B-GiftedConvo-merged-GPTQ --local-dir-use-symlinks False
```
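If you prefer Python, the same branch download can be scripted with `snapshot_download` from the `huggingface_hub` library (a minimal sketch; the local folder name is illustrative):
```python
from huggingface_hub import snapshot_download

# Download the chosen branch into a local folder
snapshot_download(
    repo_id="TheBloke/Yi-34B-GiftedConvo-merged-GPTQ",
    revision="gptq-4bit-128g-actorder_True",
    local_dir="Yi-34B-GiftedConvo-merged-GPTQ",
    local_dir_use_symlinks=False,
)
```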
<details>
<summary>More advanced huggingface-cli download usage</summary>
If you remove the `--local-dir-use-symlinks False` parameter, the files will instead be stored in the central Hugging Face cache directory (default location on Linux is: `~/.cache/huggingface`), and symlinks will be added to the specified `--local-dir`, pointing to their real location in the cache. This allows for interrupted downloads to be resumed, and allows you to quickly clone the repo to multiple places on disk without triggering a download again. The downside, and the reason why I don't list that as the default option, is that the files are then hidden away in a cache folder and it's harder to know where your disk space is being used, and to clear it up if/when you want to remove a download model.
The cache location can be changed with the `HF_HOME` environment variable, and/or the `--cache-dir` parameter to `huggingface-cli`.
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
mkdir Yi-34B-GiftedConvo-merged-GPTQ
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Yi-34B-GiftedConvo-merged-GPTQ --local-dir Yi-34B-GiftedConvo-merged-GPTQ --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
### With `git` (**not** recommended)
To clone a specific branch with `git`, use a command like this:
```shell
git clone --single-branch --branch gptq-4bit-128g-actorder_True https://huggingface.co/TheBloke/Yi-34B-GiftedConvo-merged-GPTQ
```
Note that using Git with HF repos is strongly discouraged. It will be much slower than using `huggingface-hub`, and will use twice as much disk space as it has to store the model files twice (it stores every byte both in the intended target folder, and again in the `.git` folder as a blob.)
<!-- README_GPTQ.md-download-from-branches end -->
<!-- README_GPTQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/Yi-34B-GiftedConvo-merged-GPTQ`.
- To download from a specific branch, enter for example `TheBloke/Yi-34B-GiftedConvo-merged-GPTQ:gptq-4bit-128g-actorder_True`
- see Provided Files above for the list of branches for each option.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `Yi-34B-GiftedConvo-merged-GPTQ`
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
- Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
9. Once you're ready, click the **Text Generation** tab and enter a prompt to get started!
<!-- README_GPTQ.md-text-generation-webui end -->
<!-- README_GPTQ.md-use-from-tgi start -->
## Serving this model from Text Generation Inference (TGI)
It's recommended to use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`
Example Docker parameters:
```shell
--model-id TheBloke/Yi-34B-GiftedConvo-merged-GPTQ --port 3000 --quantize gptq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```
Example Python code for interfacing with TGI (requires huggingface-hub 0.17.0 or later):
```shell
pip3 install huggingface-hub
```
```python
from huggingface_hub import InferenceClient
endpoint_url = "https://your-endpoint-url-here"
prompt = "Tell me about AI"
prompt_template=f'''USER: {prompt}
ASSISTANT:
'''
client = InferenceClient(endpoint_url)
response = client.text_generation(prompt_template,
max_new_tokens=128,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1)
print(f"Model output: {response}")
```
<!-- README_GPTQ.md-use-from-tgi end -->
<!-- README_GPTQ.md-use-from-python start -->
## How to use this GPTQ model from Python code
### Install the necessary packages
Requires: Transformers 4.33.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later.
```shell
pip3 install transformers optimum
pip3 install auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/ # Use cu117 if on CUDA 11.7
```
If you have problems installing AutoGPTQ using the pre-built wheels, install it from source instead:
```shell
pip3 uninstall -y auto-gptq
git clone https://github.com/PanQiWei/AutoGPTQ
cd AutoGPTQ
git checkout v0.4.2
pip3 install .
```
### You can then use the following code
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
model_name_or_path = "TheBloke/Yi-34B-GiftedConvo-merged-GPTQ"
# To use a different branch, change revision
# For example: revision="gptq-4bit-128g-actorder_True"
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
device_map="auto",
trust_remote_code=True,
revision="main")
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
prompt = "Tell me about AI"
prompt_template=f'''USER: {prompt}
ASSISTANT:
'''
print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512)
print(tokenizer.decode(output[0]))
# Inference can also be done using transformers' pipeline
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1
)
print(pipe(prompt_template)[0]['generated_text'])
```
<!-- README_GPTQ.md-use-from-python end -->
<!-- README_GPTQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with Transformers. For non-Mistral models, AutoGPTQ can also be used directly.
[ExLlama](https://github.com/turboderp/exllama) is compatible with Llama and Mistral models in 4-bit. Please see the Provided Files table above for per-file compatibility.
For a list of clients/servers, please see "Known compatible clients / servers", above.
<!-- README_GPTQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Brandon Frisco, LangChain4j, Spiking Neurons AB, transmissions 11, Joseph William Delisle, Nitin Borwankar, Willem Michiel, Michael Dempsey, vamX, Jeffrey Morgan, zynix, jjj, Omer Bin Jawed, Sean Connelly, jinyuan sun, Jeromy Smith, Shadi, Pawan Osman, Chadd, Elijah Stavena, Illia Dulskyi, Sebastain Graf, Stephen Murray, terasurfer, Edmond Seymore, Celu Ramasamy, Mandus, Alex, biorpg, Ajan Kanaga, Clay Pascal, Raven Klaugh, 阿明, K, ya boyyy, usrbinkat, Alicia Loh, John Villwock, ReadyPlayerEmma, Chris Smitley, Cap'n Zoog, fincy, GodLy, S_X, sidney chen, Cory Kujawski, OG, Mano Prime, AzureBlack, Pieter, Kalila, Spencer Kim, Tom X Nguyen, Stanislav Ovsiannikov, Michael Levine, Andrey, Trailburnt, Vadim, Enrico Ros, Talal Aujan, Brandon Phillips, Jack West, Eugene Pentland, Michael Davis, Will Dee, webtim, Jonathan Leane, Alps Aficionado, Rooh Singh, Tiffany J. Kim, theTransient, Luke @flexchar, Elle, Caitlyn Gatomon, Ari Malik, subjectnull, Johann-Peter Hartmann, Trenton Dambrowitz, Imad Khwaja, Asp the Wyvern, Emad Mostaque, Rainer Wilmers, Alexandros Triantafyllidis, Nicholas, Pedro Madruga, SuperWojo, Harry Royden McLaughlin, James Bentley, Olakabola, David Ziegler, Ai Maven, Jeff Scroggin, Nikolai Manek, Deo Leter, Matthew Berman, Fen Risland, Ken Nordquist, Manuel Alberto Morcote, Luke Pendergrass, TL, Fred von Graf, Randy H, Dan Guido, NimbleBox.ai, Vitor Caleffi, Gabriel Tamborski, knownsqashed, Lone Striker, Erik Bjäreholt, John Detwiler, Leonard Tan, Iucharbius
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: Nobody.png's Yi 34B GiftedConvo Llama
Trained on over 20k instruct examples, all generated by GPT-4 or by humans.
Dataset features:
- 1,000 long evolved conversations based off LIMA
- A subsection of correct PRM800k data
- A subsection of CamelAI's Physics and Chemistry data
The model was trained with QLoRA, using Axolotl.
|
TheBloke/Yi-34B-GiftedConvo-merged-GGUF | TheBloke | 2023-11-09T11:46:06Z | 97 | 8 | transformers | [
"transformers",
"gguf",
"llama",
"dataset:NobodyExistsOnTheInternet/GiftedConvoBeforeEcons",
"base_model:NobodyExistsOnTheInternet/Yi-34B-GiftedConvo-merged",
"base_model:quantized:NobodyExistsOnTheInternet/Yi-34B-GiftedConvo-merged",
"license:mit",
"region:us"
]
| null | 2023-11-08T23:49:35Z | ---
base_model: NobodyExistsOnTheInternet/Yi-34B-GiftedConvo-merged
datasets:
- NobodyExistsOnTheInternet/GiftedConvoBeforeEcons
inference: false
license: mit
model_creator: Nobody.png
model_name: Yi 34B GiftedConvo Llama
model_type: llama
prompt_template: 'USER: {prompt}
ASSISTANT:
'
quantized_by: TheBloke
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Yi 34B GiftedConvo Llama - GGUF
- Model creator: [Nobody.png](https://huggingface.co/NobodyExistsOnTheInternet)
- Original model: [Yi 34B GiftedConvo Llama](https://huggingface.co/NobodyExistsOnTheInternet/Yi-34B-GiftedConvo-merged)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Nobody.png's Yi 34B GiftedConvo Llama](https://huggingface.co/NobodyExistsOnTheInternet/Yi-34B-GiftedConvo-merged).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Yi-34B-GiftedConvo-merged-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Yi-34B-GiftedConvo-merged-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Yi-34B-GiftedConvo-merged-GGUF)
* [Nobody.png's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/NobodyExistsOnTheInternet/Yi-34B-GiftedConvo-merged)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: User-Assistant
```
USER: {prompt}
ASSISTANT:
```
<!-- prompt-template end -->
<!-- licensing start -->
## Licensing
The creator of the source model has listed its license as `mit`, and this quantization has therefore used that same license.
As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly.
In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [Nobody.png's Yi 34B GiftedConvo Llama](https://huggingface.co/NobodyExistsOnTheInternet/Yi-34B-GiftedConvo-merged).
<!-- licensing end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [yi-34b-giftedconvo-merged.Q2_K.gguf](https://huggingface.co/TheBloke/Yi-34B-GiftedConvo-merged-GGUF/blob/main/yi-34b-giftedconvo-merged.Q2_K.gguf) | Q2_K | 2 | 14.56 GB| 17.06 GB | smallest, significant quality loss - not recommended for most purposes |
| [yi-34b-giftedconvo-merged.Q3_K_S.gguf](https://huggingface.co/TheBloke/Yi-34B-GiftedConvo-merged-GGUF/blob/main/yi-34b-giftedconvo-merged.Q3_K_S.gguf) | Q3_K_S | 3 | 14.96 GB| 17.46 GB | very small, high quality loss |
| [yi-34b-giftedconvo-merged.Q3_K_M.gguf](https://huggingface.co/TheBloke/Yi-34B-GiftedConvo-merged-GGUF/blob/main/yi-34b-giftedconvo-merged.Q3_K_M.gguf) | Q3_K_M | 3 | 16.64 GB| 19.14 GB | very small, high quality loss |
| [yi-34b-giftedconvo-merged.Q3_K_L.gguf](https://huggingface.co/TheBloke/Yi-34B-GiftedConvo-merged-GGUF/blob/main/yi-34b-giftedconvo-merged.Q3_K_L.gguf) | Q3_K_L | 3 | 18.14 GB| 20.64 GB | small, substantial quality loss |
| [yi-34b-giftedconvo-merged.Q4_0.gguf](https://huggingface.co/TheBloke/Yi-34B-GiftedConvo-merged-GGUF/blob/main/yi-34b-giftedconvo-merged.Q4_0.gguf) | Q4_0 | 4 | 19.47 GB| 21.97 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [yi-34b-giftedconvo-merged.Q4_K_S.gguf](https://huggingface.co/TheBloke/Yi-34B-GiftedConvo-merged-GGUF/blob/main/yi-34b-giftedconvo-merged.Q4_K_S.gguf) | Q4_K_S | 4 | 19.54 GB| 22.04 GB | small, greater quality loss |
| [yi-34b-giftedconvo-merged.Q4_K_M.gguf](https://huggingface.co/TheBloke/Yi-34B-GiftedConvo-merged-GGUF/blob/main/yi-34b-giftedconvo-merged.Q4_K_M.gguf) | Q4_K_M | 4 | 20.66 GB| 23.16 GB | medium, balanced quality - recommended |
| [yi-34b-giftedconvo-merged.Q5_0.gguf](https://huggingface.co/TheBloke/Yi-34B-GiftedConvo-merged-GGUF/blob/main/yi-34b-giftedconvo-merged.Q5_0.gguf) | Q5_0 | 5 | 23.71 GB| 26.21 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [yi-34b-giftedconvo-merged.Q5_K_S.gguf](https://huggingface.co/TheBloke/Yi-34B-GiftedConvo-merged-GGUF/blob/main/yi-34b-giftedconvo-merged.Q5_K_S.gguf) | Q5_K_S | 5 | 23.71 GB| 26.21 GB | large, low quality loss - recommended |
| [yi-34b-giftedconvo-merged.Q5_K_M.gguf](https://huggingface.co/TheBloke/Yi-34B-GiftedConvo-merged-GGUF/blob/main/yi-34b-giftedconvo-merged.Q5_K_M.gguf) | Q5_K_M | 5 | 24.32 GB| 26.82 GB | large, very low quality loss - recommended |
| [yi-34b-giftedconvo-merged.Q6_K.gguf](https://huggingface.co/TheBloke/Yi-34B-GiftedConvo-merged-GGUF/blob/main/yi-34b-giftedconvo-merged.Q6_K.gguf) | Q6_K | 6 | 28.21 GB| 30.71 GB | very large, extremely low quality loss |
| [yi-34b-giftedconvo-merged.Q8_0.gguf](https://huggingface.co/TheBloke/Yi-34B-GiftedConvo-merged-GGUF/blob/main/yi-34b-giftedconvo-merged.Q8_0.gguf) | Q8_0 | 8 | 36.54 GB| 39.04 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/Yi-34B-GiftedConvo-merged-GGUF and below it, a specific filename to download, such as: yi-34b-giftedconvo-merged.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/Yi-34B-GiftedConvo-merged-GGUF yi-34b-giftedconvo-merged.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
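The same single-file download can also be done from Python with `hf_hub_download` (a minimal sketch):
```python
from huggingface_hub import hf_hub_download

# Returns the local path of the downloaded GGUF file
model_path = hf_hub_download(
    repo_id="TheBloke/Yi-34B-GiftedConvo-merged-GGUF",
    filename="yi-34b-giftedconvo-merged.Q4_K_M.gguf",
    local_dir=".",
    local_dir_use_symlinks=False,
)
print(model_path)
```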
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/Yi-34B-GiftedConvo-merged-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Yi-34B-GiftedConvo-merged-GGUF yi-34b-giftedconvo-merged.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m yi-34b-giftedconvo-merged.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "USER: {prompt}\nASSISTANT:"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/Yi-34B-GiftedConvo-merged-GGUF", model_file="yi-34b-giftedconvo-merged.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
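### How to load this model in Python code, using llama-cpp-python
llama-cpp-python, mentioned above, can load the same GGUF file. The following is a minimal sketch: the prompt follows the USER/ASSISTANT template, and the parameter values are illustrative rather than tuned.
```python
from llama_cpp import Llama

# Set n_gpu_layers to 0 for CPU-only inference; raise it to offload layers to the GPU
llm = Llama(
    model_path="./yi-34b-giftedconvo-merged.Q4_K_M.gguf",
    n_ctx=4096,
    n_gpu_layers=32,
)
output = llm("USER: Tell me about AI\nASSISTANT:", max_tokens=256, temperature=0.7)
print(output["choices"][0]["text"])
```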
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Brandon Frisco, LangChain4j, Spiking Neurons AB, transmissions 11, Joseph William Delisle, Nitin Borwankar, Willem Michiel, Michael Dempsey, vamX, Jeffrey Morgan, zynix, jjj, Omer Bin Jawed, Sean Connelly, jinyuan sun, Jeromy Smith, Shadi, Pawan Osman, Chadd, Elijah Stavena, Illia Dulskyi, Sebastain Graf, Stephen Murray, terasurfer, Edmond Seymore, Celu Ramasamy, Mandus, Alex, biorpg, Ajan Kanaga, Clay Pascal, Raven Klaugh, 阿明, K, ya boyyy, usrbinkat, Alicia Loh, John Villwock, ReadyPlayerEmma, Chris Smitley, Cap'n Zoog, fincy, GodLy, S_X, sidney chen, Cory Kujawski, OG, Mano Prime, AzureBlack, Pieter, Kalila, Spencer Kim, Tom X Nguyen, Stanislav Ovsiannikov, Michael Levine, Andrey, Trailburnt, Vadim, Enrico Ros, Talal Aujan, Brandon Phillips, Jack West, Eugene Pentland, Michael Davis, Will Dee, webtim, Jonathan Leane, Alps Aficionado, Rooh Singh, Tiffany J. Kim, theTransient, Luke @flexchar, Elle, Caitlyn Gatomon, Ari Malik, subjectnull, Johann-Peter Hartmann, Trenton Dambrowitz, Imad Khwaja, Asp the Wyvern, Emad Mostaque, Rainer Wilmers, Alexandros Triantafyllidis, Nicholas, Pedro Madruga, SuperWojo, Harry Royden McLaughlin, James Bentley, Olakabola, David Ziegler, Ai Maven, Jeff Scroggin, Nikolai Manek, Deo Leter, Matthew Berman, Fen Risland, Ken Nordquist, Manuel Alberto Morcote, Luke Pendergrass, TL, Fred von Graf, Randy H, Dan Guido, NimbleBox.ai, Vitor Caleffi, Gabriel Tamborski, knownsqashed, Lone Striker, Erik Bjäreholt, John Detwiler, Leonard Tan, Iucharbius
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Nobody.png's Yi 34B GiftedConvo Llama
Trained on over 20k instruct examples, all generated by GPT-4 or by humans.
Dataset features:
- 1,000 long evolved conversations based off LIMA
- A subsection of correct PRM800k data
- A subsection of CamelAI's Physics and Chemistry data
The model was trained with QLoRA, using Axolotl.
<!-- original-model-card end -->
|
Lollitor/PEFTFineTuned | Lollitor | 2023-11-09T11:42:31Z | 1 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Lollitor/ColabFinished",
"base_model:adapter:Lollitor/ColabFinished",
"region:us"
]
| null | 2023-11-09T11:42:25Z | ---
library_name: peft
base_model: Lollitor/ColabFinished
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
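A minimal loading sketch, assuming the base model `Lollitor/ColabFinished` is a causal language model (an assumption; the card does not state the model type):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model, then attach the adapter weights from this repository
base = AutoModelForCausalLM.from_pretrained("Lollitor/ColabFinished", device_map="auto")
model = PeftModel.from_pretrained(base, "Lollitor/PEFTFineTuned")
tokenizer = AutoTokenizer.from_pretrained("Lollitor/ColabFinished")
```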
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
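For reference, the same settings correspond to the following transformers `BitsAndBytesConfig` (a sketch that simply restates the values listed above in code):
```python
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
    llm_int8_threshold=6.0,
)
```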
### Framework versions
- PEFT 0.7.0.dev0
|
AdnanRiaz107/huggingfacecodebert-base-mlm-finetuned-the-stack-bash | AdnanRiaz107 | 2023-11-09T11:37:30Z | 17 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"roberta",
"text-generation",
"generated_from_trainer",
"base_model:microsoft/codebert-base-mlm",
"base_model:finetune:microsoft/codebert-base-mlm",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-11-07T17:10:10Z | ---
base_model: microsoft/codebert-base-mlm
tags:
- generated_from_trainer
model-index:
- name: huggingfacecodebert-base-mlm-finetuned-the-stack-bash
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# huggingfacecodebert-base-mlm-finetuned-the-stack-bash
This model is a fine-tuned version of [microsoft/codebert-base-mlm](https://huggingface.co/microsoft/codebert-base-mlm) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8719
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.8761 | 0.05 | 500 | 3.0629 |
| 2.3622 | 0.1 | 1000 | 2.5288 |
| 2.5797 | 0.15 | 1500 | 2.3437 |
| 2.7985 | 0.2 | 2000 | 2.1884 |
| 2.6333 | 0.25 | 2500 | 2.1099 |
| 2.2955 | 0.3 | 3000 | 2.0732 |
| 2.4228 | 0.35 | 3500 | 2.0343 |
| 2.3224 | 0.4 | 4000 | 2.0015 |
| 2.1669 | 0.45 | 4500 | 1.9659 |
| 1.98 | 0.5 | 5000 | 1.9458 |
| 2.1847 | 0.55 | 5500 | 1.9258 |
| 2.1145 | 0.6 | 6000 | 1.9235 |
| 2.2392 | 0.65 | 6500 | 1.9019 |
| 2.1206 | 0.7 | 7000 | 1.9106 |
| 2.1796 | 0.75 | 7500 | 1.8852 |
| 2.5239 | 0.8 | 8000 | 1.8781 |
| 1.4346 | 0.85 | 8500 | 1.8754 |
| 2.3741 | 0.9 | 9000 | 1.8704 |
| 1.904 | 0.95 | 9500 | 1.8679 |
| 2.4298 | 1.0 | 10000 | 1.8719 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
gginterlude/ggv4 | gginterlude | 2023-11-09T11:36:25Z | 0 | 0 | null | [
"license:pddl",
"region:us"
]
| null | 2023-11-09T11:32:51Z | ---
license: pddl
license_name: interlude
license_link: LICENSE
---
|
MehdiHosseiniMoghadam/all-MiniLM-L6-v2-finetuned-marc-en | MehdiHosseiniMoghadam | 2023-11-09T11:20:18Z | 17 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:sentence-transformers/all-MiniLM-L6-v2",
"base_model:finetune:sentence-transformers/all-MiniLM-L6-v2",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-11-09T10:30:57Z | ---
license: apache-2.0
base_model: sentence-transformers/all-MiniLM-L6-v2
tags:
- generated_from_trainer
model-index:
- name: all-MiniLM-L6-v2-finetuned-marc-en
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# all-MiniLM-L6-v2-finetuned-marc-en
This model is a fine-tuned version of [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8379
- Mae: 2.1394
## Model description
More information needed
## Intended uses & limitations
More information needed
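A minimal inference sketch using the standard text-classification pipeline (the label names returned depend on this model's config):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="MehdiHosseiniMoghadam/all-MiniLM-L6-v2-finetuned-marc-en",
)
print(classifier("The product arrived quickly and works great."))
```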
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mae |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.9203 | 1.0 | 250 | 1.8916 | 2.2446 |
| 1.8419 | 2.0 | 500 | 1.8713 | 2.2371 |
| 1.8385 | 3.0 | 750 | 1.8523 | 2.1957 |
| 1.8256 | 4.0 | 1000 | 1.8422 | 2.1584 |
| 1.7618 | 5.0 | 1250 | 1.8339 | 2.1625 |
| 1.8025 | 6.0 | 1500 | 1.8326 | 2.1564 |
| 1.7414 | 7.0 | 1750 | 1.8329 | 2.1629 |
| 1.7539 | 8.0 | 2000 | 1.8322 | 2.173 |
| 1.7886 | 9.0 | 2250 | 1.8290 | 2.16 |
| 1.7611 | 10.0 | 2500 | 1.8292 | 2.1456 |
| 1.7339 | 11.0 | 2750 | 1.8324 | 2.1566 |
| 1.7093 | 12.0 | 3000 | 1.8366 | 2.1406 |
| 1.7164 | 13.0 | 3250 | 1.8371 | 2.1391 |
| 1.6847 | 14.0 | 3500 | 1.8389 | 2.139 |
| 1.7202 | 15.0 | 3750 | 1.8379 | 2.1394 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
nlaine/ppoLunarLander-v2 | nlaine | 2023-11-09T11:11:36Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-11-09T11:11:14Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 255.42 +/- 21.08
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repository's file list for the actual name):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Filename is assumed; adjust it to the actual .zip stored in this repository
checkpoint = load_from_hub(repo_id="nlaine/ppoLunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
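A short evaluation sketch, continuing from the loading code above and assuming a local Gymnasium installation with Box2D support:
```python
import gymnasium as gym
from stable_baselines3.common.evaluation import evaluate_policy
from stable_baselines3.common.monitor import Monitor

# Evaluate the loaded agent over a handful of episodes
env = Monitor(gym.make("LunarLander-v2"))
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```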
|
partypress/partypress-monolingual-germany | partypress | 2023-11-09T11:10:19Z | 21 | 3 | transformers | [
"transformers",
"pytorch",
"tf",
"bert",
"text-classification",
"partypress",
"political science",
"parties",
"press releases",
"de",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-05-29T11:23:48Z | ---
license: cc-by-sa-4.0
language:
- de
metrics:
- accuracy
pipeline_tag: text-classification
tags:
- partypress
- political science
- parties
- press releases
widget:
- text: 'Zur Forderung des DGB-Chefs Hoffmann nach einer Debatte über Soziale Marktwirtschaft, erklärt der Sozialpolitische Sprecher der AfD-Bundestagsfraktion, Uwe Witt: „Die Soziale Marktwirtschaft steht vor der größten Herausforderung seit Bestehen der Bundesrepublik. Eine Beschäftigung damit, wie es in Zukunft weitergehen soll, ist dringend geboten. Wir haben in Deutschland noch immer den besten Sozialstaat Europas. Wenn wir diesen erhalten wollen, müssen wir aufhören, ihm die ökonomische Grundlage zu entziehen. Soziale Marktwirtschaft braucht zwingend einen funktionierenden und konkurrenzfähigen Mittelstand.'
---
# PARTYPRESS monolingual Germany
Fine-tuned model, based on [dbmdz/bert-base-german-cased](https://huggingface.co/dbmdz/bert-base-german-cased). Used in [Erfort et al. (2023)](https://doi.org/10.1177/20531680231183512), building on the PARTYPRESS database. For the downstream task of classifying press releases from political parties into 23 unique policy areas, we achieve performance comparable to that of expert human coders.
## Model description
The PARTYPRESS monolingual model builds on [dbmdz/bert-base-german-cased](https://huggingface.co/dbmdz/bert-base-german-cased) but has a supervised component. This means, it was fine-tuned using texts labeled by humans. The labels indicate 23 different political issue categories derived from the Comparative Agendas Project (CAP):
| Code | Issue |
|--|-------|
| 1 | Macroeconomics |
| 2 | Civil Rights |
| 3 | Health |
| 4 | Agriculture |
| 5 | Labor |
| 6 | Education |
| 7 | Environment |
| 8 | Energy |
| 9 | Immigration |
| 10 | Transportation |
| 12 | Law and Crime |
| 13 | Social Welfare |
| 14 | Housing |
| 15 | Domestic Commerce |
| 16 | Defense |
| 17 | Technology |
| 18 | Foreign Trade |
| 19.1 | International Affairs |
| 19.2 | European Union |
| 20 | Government Operations |
| 23 | Culture |
| 98 | Non-thematic |
| 99 | Other |
## Model variations
There are several monolingual models for different countries, and a multilingual model. The multilingual model can be easily extended to other languages, country contexts, or time periods by fine-tuning it with minimal additional labeled texts.
## Intended uses & limitations
The main use of the model is for text classification of press releases from political parties. It may also be useful for other political texts.
The classification can then be used to measure which issues parties are discussing in their communication.
### How to use
This model can be used directly with a pipeline for text classification:
```python
>>> from transformers import pipeline
>>> tokenizer_kwargs = {'padding':True,'truncation':True,'max_length':512}
>>> partypress = pipeline("text-classification", model = "cornelius/partypress-monolingual-germany", tokenizer = "cornelius/partypress-monolingual-germany", **tokenizer_kwargs)
>>> partypress("Your text here.")
```
### Limitations and bias
The model was trained with data from parties in Germany. For use in other countries, the model may be further fine-tuned. Without further fine-tuning, the performance of the model may be lower.
The model may have biased predictions. We discuss some biases by country, party, and over time in the release paper for the PARTYPRESS database. For example, the performance is highest for press releases from Ireland (75%) and lowest for Poland (55%).
## Training data
The PARTYPRESS monolingual model was fine-tuned with about 3,000 press releases from parties in Germany. The press releases were labeled by two expert human coders.
For the training data of the underlying model, please refer to [dbmdz/bert-base-german-cased](https://huggingface.co/dbmdz/bert-base-german-cased)
## Training procedure
### Preprocessing
For the preprocessing, please refer to [dbmdz/bert-base-german-cased](https://huggingface.co/dbmdz/bert-base-german-cased)
### Pretraining
For the pretraining, please refer to [dbmdz/bert-base-german-cased](https://huggingface.co/dbmdz/bert-base-german-cased)
### Fine-tuning
We fine-tuned the model using about 3,000 labeled press releases from political parties in Germany.
#### Training Hyperparameters
The batch size for training was 12, for testing 2, with four epochs. All other hyperparameters were the standard from the transformers library.
#### Framework versions
- Transformers 4.28.0
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
## Evaluation results
Fine-tuned on our downstream task, this model achieves results in a five-fold cross-validation that are comparable to the performance of our expert human coders. Please refer to Erfort et al. (2023) for details.
### BibTeX entry and citation info
```bibtex
@article{erfort_partypress_2023,
author = {Cornelius Erfort and
Lukas F. Stoetzer and
Heike Klüver},
title = {The PARTYPRESS Database: A new comparative database of parties’ press releases},
journal = {Research and Politics},
volume = {10},
number = {3},
year = {2023},
doi = {10.1177/20531680231183512},
URL = {https://doi.org/10.1177/20531680231183512}
}
```
Erfort, C., Stoetzer, L. F., & Klüver, H. (2023). The PARTYPRESS Database: A new comparative database of parties’ press releases. Research & Politics, 10(3). [https://doi.org/10.1177/20531680231183512](https://doi.org/10.1177/20531680231183512)
### Further resources
Github: [cornelius-erfort/partypress](https://github.com/cornelius-erfort/partypress)
Research and Politics Dataverse: [Replication Data for: The PARTYPRESS Database: A New Comparative Database of Parties’ Press Releases](https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi%3A10.7910%2FDVN%2FOINX7Q)
## Acknowledgements
Research for this contribution is part of the Cluster of Excellence "Contestations of the Liberal Script" (EXC 2055, Project-ID: 390715649), funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy. Cornelius Erfort is moreover grateful for generous funding provided by the DFG through the Research Training Group DYNAMICS (GRK 2458/1).
## Contact
Cornelius Erfort
Humboldt-Universität zu Berlin
[corneliuserfort.de](https://corneliuserfort.de)
|
partypress/partypress-monolingual-sweden | partypress | 2023-11-09T11:09:28Z | 13 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"bert",
"text-classification",
"partypress",
"political science",
"parties",
"press releases",
"sv",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-05-29T14:26:00Z | ---
license: cc-by-sa-4.0
language:
- sv
metrics:
- accuracy
pipeline_tag: text-classification
tags:
- partypress
- political science
- parties
- press releases
widget:
- text: 'Idag har Liberalerna presenterat en budgetsatsning ur partiets vårbudgetmotion där man avsätter 150 miljoner kronor för en ordningskommission.– En skola som präglas av stök och oordning leder till sämre skolresultat. Ordningsproblemen i svensk skola är stora och allvarliga. Studieron minskar, anmälningarna till Skolinspektionen ökar och det allvarliga våldet eskalerar. En ordningskommission ska dels komma med förslag på lagändringar och sprida goda exempel, men även dela ut pengar till skolor som vill jobba förebyggande mot våld och stök, säger Christer Nylander, LiberalernasSource: Pressmeddelanden'
---
# PARTYPRESS monolingual Sweden
Fine-tuned model, based on [KB/bert-base-swedish-cased](https://huggingface.co/KB/bert-base-swedish-cased). Used in [Erfort et al. (2023)](https://doi.org/10.1177/20531680231183512), building on the PARTYPRESS database. For the downstream task of classifying press releases from political parties into 23 unique policy areas, we achieve performance comparable to that of expert human coders.
## Model description
The PARTYPRESS monolingual model builds on [KB/bert-base-swedish-cased](https://huggingface.co/KB/bert-base-swedish-cased) but has a supervised component. This means, it was fine-tuned using texts labeled by humans. The labels indicate 23 different political issue categories derived from the Comparative Agendas Project (CAP):
| Code | Issue |
|--|-------|
| 1 | Macroeconomics |
| 2 | Civil Rights |
| 3 | Health |
| 4 | Agriculture |
| 5 | Labor |
| 6 | Education |
| 7 | Environment |
| 8 | Energy |
| 9 | Immigration |
| 10 | Transportation |
| 12 | Law and Crime |
| 13 | Social Welfare |
| 14 | Housing |
| 15 | Domestic Commerce |
| 16 | Defense |
| 17 | Technology |
| 18 | Foreign Trade |
| 19.1 | International Affairs |
| 19.2 | European Union |
| 20 | Government Operations |
| 23 | Culture |
| 98 | Non-thematic |
| 99 | Other |
## Model variations
There are several monolingual models for different countries, and a multilingual model. The multilingual model can be easily extended to other languages, country contexts, or time periods by fine-tuning it with minimal additional labeled texts.
## Intended uses & limitations
The main use of the model is for text classification of press releases from political parties. It may also be useful for other political texts.
The classification can then be used to measure which issues parties are discussing in their communication.
### How to use
This model can be used directly with a pipeline for text classification:
```python
>>> from transformers import pipeline
>>> tokenizer_kwargs = {'padding':True,'truncation':True,'max_length':512}
>>> partypress = pipeline("text-classification", model = "cornelius/partypress-monolingual-sweden", tokenizer = "cornelius/partypress-monolingual-sweden", **tokenizer_kwargs)
>>> partypress("Your text here.")
```
### Limitations and bias
The model was trained with data from parties in Sweden. For use in other countries, the model may be further fine-tuned. Without further fine-tuning, the performance of the model may be lower.
The model may have biased predictions. We discuss some biases by country, party, and over time in the release paper for the PARTYPRESS database. For example, the performance is highest for press releases from Ireland (75%) and lowest for Poland (55%).
## Training data
The PARTYPRESS multilingual model was fine-tuned with about 3,000 press releases from parties in Sweden. The press releases were labeled by two expert human coders.
For the training data of the underlying model, please refer to [KB/bert-base-swedish-cased](https://huggingface.co/KB/bert-base-swedish-cased)
## Training procedure
### Preprocessing
For the preprocessing, please refer to [KB/bert-base-swedish-cased](https://huggingface.co/KB/bert-base-swedish-cased)
### Pretraining
For the pretraining, please refer to [KB/bert-base-swedish-cased](https://huggingface.co/KB/bert-base-swedish-cased)
### Fine-tuning
We fine-tuned the model using about 3,000 labeled press releases from political parties in Sweden.
#### Training Hyperparameters
The batch size for training was 12, for testing 2, with four epochs. All other hyperparameters were the standard from the transformers library.
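For orientation only, the reported setup corresponds roughly to the sketch below using the standard `transformers` Trainer API. This is a reconstruction, not the authors' script: the output directory and everything else not stated in the card are assumptions, and the labelled press-release data is not included.

```python
# Rough reconstruction of the reported hyperparameters; values not stated in the
# card (output_dir, the datasets themselves, etc.) are assumptions.
from transformers import AutoModelForSequenceClassification, TrainingArguments

model = AutoModelForSequenceClassification.from_pretrained(
    "KB/bert-base-swedish-cased",
    num_labels=23,  # the 23 policy areas listed in the table above
)

args = TrainingArguments(
    output_dir="partypress-monolingual-sweden",  # assumed
    per_device_train_batch_size=12,              # "batch size for training was 12"
    per_device_eval_batch_size=2,                # "for testing 2"
    num_train_epochs=4,                          # "with four epochs"
)
# A Trainer would then be constructed with the labelled press releases (not shown here).
```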
#### Framework versions
- Transformers 4.28.0
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
## Evaluation results
Fine-tuned on our downstream task, this model achieves results in a five-fold cross-validation that are comparable to the performance of our expert human coders. Please refer to Erfort et al. (2023) for details.
### BibTeX entry and citation info
```bibtex
@article{erfort_partypress_2023,
author = {Cornelius Erfort and
Lukas F. Stoetzer and
Heike Klüver},
title = {The PARTYPRESS Database: A new comparative database of parties’ press releases},
journal = {Research and Politics},
volume = {10},
number = {3},
year = {2023},
doi = {10.1177/20531680231183512},
URL = {https://doi.org/10.1177/20531680231183512}
}
```
Erfort, C., Stoetzer, L. F., & Klüver, H. (2023). The PARTYPRESS Database: A new comparative database of parties’ press releases. Research & Politics, 10(3). [https://doi.org/10.1177/20531680231183512](https://doi.org/10.1177/20531680231183512)
### Further resources
Github: [cornelius-erfort/partypress](https://github.com/cornelius-erfort/partypress)
Research and Politics Dataverse: [Replication Data for: The PARTYPRESS Database: A New Comparative Database of Parties’ Press Releases](https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi%3A10.7910%2FDVN%2FOINX7Q)
## Acknowledgements
Research for this contribution is part of the Cluster of Excellence "Contestations of the Liberal Script" (EXC 2055, Project-ID: 390715649), funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy. Cornelius Erfort is moreover grateful for generous funding provided by the DFG through the Research Training Group DYNAMICS (GRK 2458/1).
## Contact
Cornelius Erfort
Humboldt-Universität zu Berlin
[corneliuserfort.de](corneliuserfort.de)
|
rizkyjun/bloom-1b-finetuned-aings-adapters-delimiter-2 | rizkyjun | 2023-11-09T11:08:28Z | 2 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"arxiv:1910.09700",
"base_model:bigscience/bloom-1b1",
"base_model:adapter:bigscience/bloom-1b1",
"region:us"
]
| null | 2023-11-08T22:56:21Z | ---
library_name: peft
base_model: bigscience/bloom-1b1
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
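In the absence of official instructions, the snippet below is a minimal sketch (not the author's own usage code) showing how a PEFT adapter from this repository can be attached to the `bigscience/bloom-1b1` base model. The prompt is a made-up placeholder.

```python
# Sketch: load the base model and attach this LoRA adapter with PEFT.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "bigscience/bloom-1b1"
adapter_id = "rizkyjun/bloom-1b-finetuned-aings-adapters-delimiter-2"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float32)
model = PeftModel.from_pretrained(base_model, adapter_id)

inputs = tokenizer("Halo, apa kabar?", return_tensors="pt")  # placeholder prompt
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```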
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
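For reference, the list above corresponds roughly to the following `BitsAndBytesConfig`; this is a sketch of the quantization setup, not the original training script, and the model id and device map are assumptions.

```python
# The values mirror the bullet list above; everything else is illustrative.
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    llm_int8_threshold=6.0,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
)

base_model = AutoModelForCausalLM.from_pretrained(
    "bigscience/bloom-1b1",
    quantization_config=bnb_config,
    device_map="auto",
)
```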
### Framework versions
- PEFT 0.7.0.dev0
|
retmago/modelperson | retmago | 2023-11-09T11:08:22Z | 0 | 0 | diffusers | [
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:finetune:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-11-09T10:41:59Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: a photo of retmago people
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - retmago/modelperson
This is a DreamBooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on "a photo of retmago people" using [DreamBooth](https://dreambooth.github.io/).
You can find some example images below.
DreamBooth for the text encoder was enabled: False.
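A minimal usage sketch (not part of the original card): load the pipeline from this repository and generate with the instance prompt it was trained on. The fp16/CUDA settings are assumptions; adjust them for your hardware.

```python
# Sketch: generate an image with the DreamBooth weights in this repo.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("retmago/modelperson", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

image = pipe("a photo of retmago people", num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("retmago_sample.png")
```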
|
partypress/partypress-monolingual-uk | partypress | 2023-11-09T11:08:17Z | 123 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"roberta",
"text-classification",
"partypress",
"political science",
"parties",
"press releases",
"en",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-05-29T16:21:47Z | ---
license: cc-by-sa-4.0
language:
- en
metrics:
- accuracy
pipeline_tag: text-classification
tags:
- partypress
- political science
- parties
- press releases
widget:
- text: 'Farmers who applied for a Force Majeure when their businesses wereimpacted by severe flooding and landslides on 22 and 23 August 2017 cannow apply for the one-off financial payment.“The extreme flooding event meant that the farming and wider rural communities in the North West experienced significant hardship.Farm businesses lost income due to the impact on their land and thecost of removing debris and silt, as well as reseeding to restore itback to productive use,” said Minister Poots.“So I am delighted to say that this North West 2017 Flooding Income Support Scheme, worth almost £2.7million, is now open to applications. This is a time limited scheme which will close on 12 August 2021. “The one-off grant payment, which will be capped at £106,323 per farm business, is available for farmers who applied for a Force Majeure in respect of the flooding incident.“I would urge all eligible businesses to make sure their application is submitted as soon as possible,” Minister Poots added.Eligible farm businesses will receive a letter inviting them to applyfor the support package, with instructions on how to access theapplication form and receive help to complete it.They must complete the application form available on DAERA OnlineServices from 28 July 2021. Explanatory information and guidance willalso be published on the DAERA website.Further information on the scheme can be found on the DAERA website www.daera-ni.gov.uk'
---
# PARTYPRESS monolingual UK
Fine-tuned model, based on [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english). Used in [Erfort et al. (2023)](https://doi.org/10.1177/20531680231183512), building on the PARTYPRESS database. For the downstream task of classifying press releases from political parties into 23 unique policy areas, we achieve a performance comparable to that of expert human coders.
## Model description
The PARTYPRESS monolingual model builds on [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) but has a supervised component. This means it was fine-tuned using texts labeled by humans. The labels indicate 23 different political issue categories derived from the Comparative Agendas Project (CAP):
| Code | Issue |
|--|-------|
| 1 | Macroeconomics |
| 2 | Civil Rights |
| 3 | Health |
| 4 | Agriculture |
| 5 | Labor |
| 6 | Education |
| 7 | Environment |
| 8 | Energy |
| 9 | Immigration |
| 10 | Transportation |
| 12 | Law and Crime |
| 13 | Social Welfare |
| 14 | Housing |
| 15 | Domestic Commerce |
| 16 | Defense |
| 17 | Technology |
| 18 | Foreign Trade |
| 19.1 | International Affairs |
| 19.2 | European Union |
| 20 | Government Operations |
| 23 | Culture |
| 98 | Non-thematic |
| 99 | Other |
## Model variations
There are several monolingual models for different countries, and a multilingual model. The multilingual model can be easily extended to other languages, country contexts, or time periods by fine-tuning it with minimal additional labeled texts.
## Intended uses & limitations
The main use of the model is for text classification of press releases from political parties. It may also be useful for other political texts.
The classification can then be used to measure which issues parties are discussing in their communication.
### How to use
This model can be used directly with a pipeline for text classification:
```python
>>> from transformers import pipeline
>>> tokenizer_kwargs = {'padding':True,'truncation':True,'max_length':512}
>>> partypress = pipeline("text-classification", model = "cornelius/partypress-monolingual-uk", tokenizer = "cornelius/partypress-monolingual-uk", **tokenizer_kwargs)
>>> partypress("Your text here.")
```
### Limitations and bias
The model was trained with data from parties in the UK. For use in other countries, the model may be further fine-tuned. Without further fine-tuning, the performance of the model may be lower.
The model may have biased predictions. We discuss some biases by country, party, and over time in the release paper for the PARTYPRESS database. For example, the performance is highest for press releases from the UK (75%) and lowest for Poland (55%).
## Training data
The PARTYPRESS multilingual model was fine-tuned with about 3,000 press releases from parties in the UK. The press releases were labeled by two expert human coders.
For the training data of the underlying model, please refer to [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english)
## Training procedure
### Preprocessing
For the preprocessing, please refer to [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english)
### Pretraining
For the pretraining, please refer to [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english)
### Fine-tuning
We fine-tuned the model using about 3,000 labeled press releases from political parties in the UK.
#### Training Hyperparameters
The batch size for training was 12, for testing 2, with four epochs. All other hyperparameters were the standard from the transformers library.
#### Framework versions
- Transformers 4.28.0
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
## Evaluation results
Fine-tuned on our downstream task, this model achieves results in a five-fold cross-validation that are comparable to the performance of our expert human coders. Please refer to Erfort et al. (2023) for details.
### BibTeX entry and citation info
```bibtex
@article{erfort_partypress_2023,
author = {Cornelius Erfort and
Lukas F. Stoetzer and
Heike Klüver},
title = {The PARTYPRESS Database: A new comparative database of parties’ press releases},
journal = {Research and Politics},
volume = {10},
number = {3},
year = {2023},
doi = {10.1177/20531680231183512},
URL = {https://doi.org/10.1177/20531680231183512}
}
```
Erfort, C., Stoetzer, L. F., & Klüver, H. (2023). The PARTYPRESS Database: A new comparative database of parties’ press releases. Research & Politics, 10(3). [https://doi.org/10.1177/20531680231183512](https://doi.org/10.1177/20531680231183512)
### Further resources
Github: [cornelius-erfort/partypress](https://github.com/cornelius-erfort/partypress)
Research and Politics Dataverse: [Replication Data for: The PARTYPRESS Database: A New Comparative Database of Parties’ Press Releases](https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi%3A10.7910%2FDVN%2FOINX7Q)
## Acknowledgements
Research for this contribution is part of the Cluster of Excellence "Contestations of the Liberal Script" (EXC 2055, Project-ID: 390715649), funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy. Cornelius Erfort is moreover grateful for generous funding provided by the DFG through the Research Training Group DYNAMICS (GRK 2458/1).
## Contact
Cornelius Erfort
Humboldt-Universität zu Berlin
[corneliuserfort.de](corneliuserfort.de)
|
Baptiste-Rdt/Taxi-v3 | Baptiste-Rdt | 2023-11-09T11:07:42Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-11-09T11:07:40Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing1 **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
import gym  # `load_from_hub` is the helper defined in the Deep RL course notebook
model = load_from_hub(repo_id="Baptiste-Rdt/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
furusu/LCM-Acertainty | furusu | 2023-11-09T11:07:28Z | 9 | 5 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"license:mit",
"diffusers:LatentConsistencyModelPipeline",
"region:us"
]
| text-to-image | 2023-11-04T04:55:55Z | ---
license: mit
tags:
- stable-diffusion
---
[ACertainty](https://huggingface.co/JosephusCheung/ACertainty) was distilled with the [Latent Consistency Model](https://latent-consistency-models.github.io/) method so that images can be generated in roughly 4-8 steps. The quality still leaves room for improvement.
# Training
A LoRA with rank=128 (conv rank=32) was trained for 20,000 steps with a batch size of 16 and a learning rate of 5e-4. The published model already has the LoRA merged in. guidance_scale was fixed at 7.0 and was not a training target, so changing guidance_scale has no effect. The EMA rate is 0.999. During training a negative_prompt was used instead of unconditional generation.
```py
from diffusers import DiffusionPipeline
import torch
pipe = DiffusionPipeline.from_pretrained("furusu/LCM-Acertainty", custom_pipeline="latent_consistency_txt2img", custom_revision="main")
pipe.to(torch_device="cuda", torch_dtype=torch.float16)
prompt = "anime, masterpiece, best quality, 1girl, solo, blush, sitting, twintails, blonde hair, bowtie, school uniforme, nature"
num_inference_steps =4
images = pipe(prompt=prompt, num_inference_steps=num_inference_steps, guidance_scale=5.0, lcm_origin_steps=50, height=768, width=768, output_type="pil").images
images[0].save("./aaaaa.png")
```

|
onangeko/q-FrozenLake-v1-4x4-noSlippery | onangeko | 2023-11-09T11:01:15Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-11-09T11:01:13Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing1 **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
import gym  # `load_from_hub` is the helper defined in the Deep RL course notebook
model = load_from_hub(repo_id="onangeko/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
odunola/bert-base-uncased-ag-news-finetuned-2 | odunola | 2023-11-09T10:56:57Z | 11 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:ag_news",
"base_model:odunola/bert-base-uncased-ag-news-finetuned",
"base_model:finetune:odunola/bert-base-uncased-ag-news-finetuned",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-11-08T20:08:30Z | ---
license: apache-2.0
base_model: odunola/bert-base-uncased-ag-news-finetuned
tags:
- generated_from_trainer
datasets:
- ag_news
metrics:
- accuracy
model-index:
- name: bert-base-uncased-ag-news-finetuned-2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: ag_news
type: ag_news
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9819166666666667
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-ag-news-finetuned-2
This model is a fine-tuned version of [odunola/bert-base-uncased-ag-news-finetuned](https://huggingface.co/odunola/bert-base-uncased-ag-news-finetuned) on the ag_news dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0712
- Accuracy: 0.9819
- F1(weighted): 0.9819
- Precision(weighted): 0.9819
- Recall(weighted): 0.9819
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1(weighted) | Precision(weighted) | Recall(weighted) |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------------:|:-------------------:|:----------------:|
| 0.1006 | 1.0 | 6000 | 0.0712 | 0.9819 | 0.9819 | 0.9819 | 0.9819 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
Baptiste-Rdt/q-FrozenLake-v1-4x4-noSlippery | Baptiste-Rdt | 2023-11-09T10:53:40Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-11-09T10:53:38Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing1 **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
import gym  # `load_from_hub` is the helper defined in the Deep RL course notebook
model = load_from_hub(repo_id="Baptiste-Rdt/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
elnasharomar2/distilhubert-finetuned-oknashar | elnasharomar2 | 2023-11-09T10:51:54Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"base_model:ntu-spml/distilhubert",
"base_model:finetune:ntu-spml/distilhubert",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| audio-classification | 2023-11-09T05:20:39Z | ---
license: apache-2.0
base_model: ntu-spml/distilhubert
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
model-index:
- name: distilhubert-finetuned-gtzan
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilhubert-finetuned-gtzan
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- eval_loss: 1.3034
- eval_runtime: 73.5325
- eval_samples_per_second: 1.36
- eval_steps_per_second: 0.177
- epoch: 1.0
- step: 113
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Framework versions
- Transformers 4.33.0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
just097/roberta-base-lora-comma-placement-r-16-alpha-32 | just097 | 2023-11-09T10:48:38Z | 2 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:FacebookAI/roberta-base",
"base_model:adapter:FacebookAI/roberta-base",
"region:us"
]
| null | 2023-11-09T10:48:36Z | ---
library_name: peft
base_model: roberta-base
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
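No usage code is provided in the card yet; the sketch below shows one plausible way to attach the adapter to `roberta-base` with PEFT. The card does not state the task head or label set, so the token-classification head (and its label count) is purely an assumption and should be verified against the adapter config.

```python
# Sketch only: the exact task head and labels are not documented in this card.
from transformers import AutoTokenizer, AutoModelForTokenClassification
from peft import PeftModel

base_id = "roberta-base"
adapter_id = "just097/roberta-base-lora-comma-placement-r-16-alpha-32"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForTokenClassification.from_pretrained(base_id)  # num_labels assumed
model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()
```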
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.6.0
|
mzbac/mistral-grammar | mzbac | 2023-11-09T10:43:55Z | 93 | 21 | null | [
"gguf",
"endpoints_compatible",
"region:us"
]
| null | 2023-10-17T15:34:10Z | Prompt template:
```
[INST] Corrects and rephrase user text grammar errors delimited by triple backticks to standard English.
### Input: Text=\`\`\`she no went to market\`\`\` [/INST]
[INST] ### Output: She didn’t go the market. [/INST]
[INST] ### Input: Text=\`\`\`${input}\`\`\` [/INST]
[INST] ### Output:
```
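One way (not covered in the original card) to run the GGUF file locally is through the `llama-cpp-python` bindings. The sketch below fills the template above with a sample sentence and assumes the quantized file has already been downloaded; the library choice and generation settings are assumptions.

```python
# Sketch, assuming llama-cpp-python is installed and the GGUF file is local
# (see the MODEL_URL/MODEL_FILE variables further down in this card).
from llama_cpp import Llama

llm = Llama(model_path="Mistral-7B-Grammar-Correaction-v1.1.3.Q5_K_M.gguf", n_ctx=2048)

text = "I has two cat at home"  # sample input, replace with your own
prompt = (
    "[INST] Corrects and rephrase user text grammar errors delimited by triple backticks to standard English.\n"
    "### Input: Text=```she no went to market``` [/INST]\n"
    "[INST] ### Output: She didn’t go the market. [/INST]\n"
    f"[INST] ### Input: Text=```{text}``` [/INST]\n"
    "[INST] ### Output:"
)

out = llm(prompt, max_tokens=64, stop=["[INST]"])
print(out["choices"][0]["text"].strip())
```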
To use the fine-tuned model, you need to get a download link for the GGUF file from the Hugging Face Hub and update it accordingly in the shell script.
For example:
```
MODEL_URL="https://huggingface.co/mzbac/mistral-grammar/resolve/main/Mistral-7B-Grammar-Correaction-v1.1.3.Q5_K_M.gguf"
MODEL_FILE="Mistral-7B-Grammar-Correaction-v1.1.3.Q5_K_M.gguf"
```
For details on how to use the model with Mac Automator, please check out the article [here](https://medium.com/@anchen.li/replace-grammarly-with-open-source-llm-e1751ad6cad2). |
TheBritishLibrary/bl-books-genre | TheBritishLibrary | 2023-11-09T10:43:25Z | 33 | 5 | transformers | [
"transformers",
"pytorch",
"safetensors",
"distilbert",
"text-classification",
"genre",
"books",
"library",
"historic",
"glam ",
"lam",
"multilingual",
"en",
"ru",
"fr",
"es",
"de",
"nl",
"it",
"sv",
"da",
"hu",
"pl",
"la",
"el",
"cs",
"pt",
"fi",
"sr",
"bg",
"is",
"ga",
"he",
"nn",
"lt",
"sl",
"kw",
"ro",
"sk",
"sco",
"sa",
"dataset:TheBritishLibrary/blbooksgenre",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-03-02T23:29:04Z | ---
language:
- multilingual
- en
- ru
- fr
- es
- de
- nl
- it
- sv
- da
- hu
- pl
- la
- el
- cs
- pt
- fi
- sr
- bg
- is
- ga
- he
- nn
- lt
- sl
- kw
- ro
- sk
- sco
- sa
tags:
- genre
- books
- library
- historic
- 'glam '
- lam
license: mit
metrics:
- f1
widget:
- text: >-
Poems on various subjects. Whereto is prefixed a short essay on the
structure of English verse
- text: >-
Two Centuries of Soho: its institutions, firms, and amusements. By the
Clergy of St. Anne's, Soho, J. H. Cardwell ... H. B. Freeman ... G. C.
Wilton ... assisted by other contributors, etc
- text: The Adventures of Oliver Twist. [With plates.]
datasets:
- TheBritishLibrary/blbooksgenre
---
# British Library Books Genre Detector
**Note** this model card is a work in progress.
## Model description
This fine-tuned [`distilbert-base-cased`](https://huggingface.co/distilbert-base-cased) model is trained to predict whether a book from the [British Library's](https://www.bl.uk/) [Digitised printed books (18th-19th century)](https://www.bl.uk/collection-guides/digitised-printed-books) book collection is `fiction` or `non-fiction` based on the title of the book.
## Intended uses & limitations
This model was trained on data created from the [Digitised printed books (18th-19th century)](https://www.bl.uk/collection-guides/digitised-printed-books) book collection. The datasets in this collection are derived from 49,455 digitised books (65,227 volumes), largely from the 19th Century. This dataset is dominated by English-language books but also includes books in a number of other languages in much smaller numbers. Whilst a subset of this data has metadata relating to Genre, the majority of this dataset does not currently contain this information.
This model was originally developed for use as part of the [Living with Machines](https://livingwithmachines.ac.uk/) project in order to be able to 'segment' this large dataset of books into different categories based on a 'crude' classification of genre i.e. whether the title was `fiction` or `non-fiction`.
Particular areas where the model might be limited are:
### Title format
The model's training data (discussed more below) primarily consists of 19th Century book titles that have been catalogued according to British Library cataloguing practices. Since the approaches taken to cataloguing will vary across institutions, running the model on titles from a different catalogue might introduce domain drift and lead to degraded model performance.
To give an example of the types of titles included in the training data, here are some randomly chosen examples:
- The Canadian farmer. A missionary incident [Signed: W. J. H. Y, i.e. William J. H. Yates.]
- A new musical Interlude, called the Election [By M. P. Andrews.]
- An Elegy written among the ruins of an Abbey. By the author of the Nun [E. Jerningham]
- The Baron's Daughter. A ballad by the author of Poetical Recreations [i.e. William C. Hazlitt] . F.P
- A Little Book of Verse, etc
- The Autumn Leaf Poems
- The Battle of Waterloo, a poem
- Maximilian, and other poems, etc
- Fabellæ mostellariæ: or Devonshire and Wiltshire stories in verse; including specimens of the Devonshire dialect
- The Grave of a Hamlet and other poems, chiefly of the Hebrides ... Selected, with an introduction, by his son J. Hogben
### Date
The model was trained on data that spans the collection period of the [Digitised printed books (18th-19th century)](https://www.bl.uk/collection-guides/digitised-printed-books) book collection. This dataset covers a broad period (from 1500-1900). However, this dataset is skewed towards later years. The subset of the training data (i.e. data with genre annotations) used to train this model has the following distribution of dates:
| | Date |
|-------|------------|
| mean | 1864.83 |
| std | 43.0199 |
| min | 1540 |
| 25% | 1847 |
| 50% | 1877 |
| 75% | 1893 |
### Language
Whilst the model is multilingual in so far as it has training data in non-English book titles, these appear much less frequently. An overview of the original training data's language counts is as follows:
| Language | Count |
|---------------------|-------|
| English | 22987 |
| Russian | 461 |
| French | 424 |
| Spanish | 366 |
| German | 347 |
| Dutch | 310 |
| Italian | 212 |
| Swedish | 186 |
| Danish | 164 |
| Hungarian | 132 |
| Polish | 112 |
| Latin | 83 |
| Greek,Modern(1453-) | 42 |
| Czech | 25 |
| Portuguese | 24 |
| Finnish | 14 |
| Serbian | 10 |
| Bulgarian | 7 |
| Icelandic | 4 |
| Irish | 4 |
| Hebrew | 2 |
| NorwegianNynorsk | 2 |
| Lithuanian | 2 |
| Slovenian | 2 |
| Cornish | 1 |
| Romanian | 1 |
| Slovak | 1 |
| Scots | 1 |
| Sanskrit | 1 |
#### How to use
There are a few different ways to use the model. To run the model locally, the easiest option is to use the 🤗 Transformers [`pipelines`](https://huggingface.co/transformers/main_classes/pipelines.html):
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline
tokenizer = AutoTokenizer.from_pretrained("davanstrien/bl-books-genre")
model = AutoModelForSequenceClassification.from_pretrained("davanstrien/bl-books-genre")
classifier = pipeline("text-classification", model=model, tokenizer=tokenizer)
classifier("Oliver Twist")
```
This will return a list containing a dictionary with the predicted label and score:
```
[{'label': 'Fiction', 'score': 0.9980145692825317}]
```
If you intend to use this model beyond initial experimentation, it is highly recommended to create some data to validate the model's predictions; a minimal example of such a spot check is sketched below. As the model was trained on a specific corpus of book titles, it is also likely to be beneficial to fine-tune the model if you want to run it across a collection of book titles that differ from those in the training corpus.
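The sketch below is not from the original card: the second title and both "expected" labels are made-up placeholders, not British Library annotations, and the model id follows the usage example above.

```python
# Sketch: spot-check the classifier on a few hand-labelled titles before using it at scale.
from transformers import pipeline

classifier = pipeline("text-classification", model="davanstrien/bl-books-genre")

sample = [  # (title, expected label) pairs; labels here are illustrative only
    ("The Adventures of Oliver Twist. [With plates.]", "Fiction"),
    ("A practical treatise on the cultivation of the potato", "Non-Fiction"),
]

correct = 0
for title, expected in sample:
    predicted = classifier(title)[0]["label"]
    correct += int(predicted == expected)
    print(f"{title!r}: predicted={predicted}, expected={expected}")

print(f"Accuracy on this tiny sample: {correct / len(sample):.2f}")
```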
## Training data
The training data was created using the [Zooniverse platform](https://www.zooniverse.org/) and the annotations were done by cataloguers from the [British Library](https://www.bl.uk/). [Snorkel](https://github.com/snorkel-team/snorkel) was used to expand on this original training data through various labelling functions. As a result, some of the labels are *not* generated by a human. More information on the process of creating the annotations can be found [here](https://github.com/Living-with-machines/genre-classification)
## Training procedure
The model was trained using the [`blurr`](https://github.com/ohmeow/blurr) library. A notebook showing the training process can be found in [Predicting Genre with Machine Learning](https://github.com/Living-with-machines/genre-classification).
## Eval results
The results of the model on a held-out training set are:
```
precision recall f1-score support
Fiction 0.88 0.97 0.92 296
Non-Fiction 0.98 0.93 0.95 554
accuracy 0.94 850
macro avg 0.93 0.95 0.94 850
weighted avg 0.95 0.94 0.94 850
```
As discussed briefly in the bias and limitations sections of this model card, these results should be treated with caution. |
razat-ag/emb-java | razat-ag | 2023-11-09T10:41:22Z | 0 | 0 | null | [
"dataset:razat-ag/embold_hf_java",
"license:apache-2.0",
"region:us"
]
| null | 2023-11-09T10:40:04Z | ---
license: apache-2.0
datasets:
- razat-ag/embold_hf_java
--- |
ESGBERT/EnvRoBERTa-base | ESGBERT | 2023-11-09T10:36:53Z | 25 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"ESG",
"environmental",
"en",
"dataset:ESGBERT/environment_data",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2023-11-02T12:41:51Z | ---
language: en
license: apache-2.0
datasets:
- ESGBERT/environment_data
tags:
- ESG
- environmental
---
# Model Card for EnvRoBERTa-base
## Model Description
Based on [this paper](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4622514), this is the EnvRoBERTa-base language model, a language model trained to better understand environmental texts in the ESG domain.
*Note: We generally recommend choosing the [EnvironmentalBERT-base](https://huggingface.co/ESGBERT/EnvironmentalBERT-base) model since it is quicker, less resource-intensive and only marginally worse in performance.*
Using the [RoBERTa](https://huggingface.co/roberta-base) model as a starting point, the EnvRoBERTa-base Language Model is additionally pre-trained on a text corpus comprising environment-related annual reports, sustainability reports, and corporate and general news.
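The card does not include usage code; a minimal fill-mask sketch is shown below. The example sentence is an illustration, not taken from the training corpus.

```python
# Sketch: query the masked-language-model head with the standard pipeline.
from transformers import pipeline

fill = pipeline("fill-mask", model="ESGBERT/EnvRoBERTa-base")
print(fill("The company reduced its carbon <mask> by 20 percent."))
```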
## More details can be found in the paper
```bibtex
@article{Schimanski23ESGBERT,
title={{Bridging the Gap in ESG Measurement: Using NLP to Quantify Environmental, Social, and Governance Communication}},
author={Tobias Schimanski and Andrin Reding and Nico Reding and Julia Bingler and Mathias Kraus and Markus Leippold},
year={2023},
journal={Available on SSRN: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4622514},
}
```
|
Brololo/DRLexo2 | Brololo | 2023-11-09T10:36:50Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
]
| reinforcement-learning | 2023-11-09T10:36:29Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: Brololo/DRLexo2
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
ESGBERT/GovRoBERTa-base | ESGBERT | 2023-11-09T10:34:33Z | 19 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"ESG",
"governance",
"en",
"dataset:ESGBERT/governance_data",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2023-11-02T14:00:24Z | ---
language: en
license: apache-2.0
datasets:
- ESGBERT/governance_data
tags:
- ESG
- governance
---
# Model Card for GovRoBERTa-base
## Model Description
Based on [this paper](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4622514), this is the GovRoBERTa-base language model, a language model trained to better understand governance texts in the ESG domain.
*Note: We generally recommend choosing the [GovernanceBERT-base](https://huggingface.co/ESGBERT/GovernanceBERT-base) model since it is quicker, less resource-intensive and only marginally worse in performance.*
Using the [RoBERTa](https://huggingface.co/roberta-base) model as a starting point, the GovRoBERTa-base Language Model is additionally pre-trained on a text corpus comprising governance-related annual reports, sustainability reports, and corporate and general news.
## More details can be found in the paper
```bibtex
@article{Schimanski23ESGBERT,
title={{Bridging the Gap in ESG Measurement: Using NLP to Quantify Environmental, Social, and Governance Communication}},
author={Tobias Schimanski and Andrin Reding and Nico Reding and Julia Bingler and Mathias Kraus and Markus Leippold},
year={2023},
journal={Available on SSRN: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4622514},
}
```
|
ESGBERT/SocRoBERTa-base | ESGBERT | 2023-11-09T10:33:55Z | 16 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"ESG",
"social",
"en",
"dataset:ESGBERT/social_data",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2023-11-02T13:56:32Z | ---
language: en
license: apache-2.0
datasets:
- ESGBERT/social_data
tags:
- ESG
- social
---
# Model Card for SocRoBERTa-base
## Model Description
Based on [this paper](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4622514), this is the SocRoBERTa-base language model, a language model trained to better understand social texts in the ESG domain.
*Note: We generally recommend choosing the [SocialBERT-base](https://huggingface.co/ESGBERT/SocialBERT-base) model since it is quicker, less resource-intensive and only marginally worse in performance.*
Using the [RoBERTa](https://huggingface.co/roberta-base) model as a starting point, the SocRoBERTa-base Language Model is additionally pre-trained on a text corpus comprising social-related annual reports, sustainability reports, and corporate and general news.
## More details can be found in the paper
```bibtex
@article{Schimanski23ESGBERT,
title={{Bridging the Gap in ESG Measurement: Using NLP to Quantify Environmental, Social, and Governance Communication}},
author={Tobias Schimanski and Andrin Reding and Nico Reding and Julia Bingler and Mathias Kraus and Markus Leippold},
year={2023},
journal={Available on SSRN: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4622514},
}
``` |
ESGBERT/EnvironmentalBERT-base | ESGBERT | 2023-11-09T10:32:37Z | 44 | 1 | transformers | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"ESG",
"environmental",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2023-11-04T10:50:40Z | ---
language: en
license: apache-2.0
tags:
- ESG
- environmental
---
# Model Card for EnvironmentalBERT-base
## Model Description
Based on [this paper](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4622514), this is the EnvironmentalBERT-base language model, a language model trained to better understand environmental texts in the ESG domain.
Using the [DistilRoBERTa](https://huggingface.co/distilroberta-base) model as a starting point, the EnvironmentalBERT-base Language Model is additionally pre-trained on a text corpus comprising environment-related annual reports, sustainability reports, and corporate and general news.
## More details can be found in the paper
```bibtex
@article{Schimanski23ESGBERT,
title={{Bridging the Gap in ESG Measurement: Using NLP to Quantify Environmental, Social, and Governance Communication}},
author={Tobias Schimanski and Andrin Reding and Nico Reding and Julia Bingler and Mathias Kraus and Markus Leippold},
year={2023},
journal={Available on SSRN: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4622514},
}
```
|
ESGBERT/SocialBERT-base | ESGBERT | 2023-11-09T10:29:41Z | 28 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"ESG",
"social",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2023-11-04T11:00:20Z | ---
language: en
license: apache-2.0
tags:
- ESG
- social
---
# Model Card for SocialBERT-base
## Model Description
Based on [this paper](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4622514), this is the SocialBERT-base language model, a language model trained to better understand social texts in the ESG domain.
Using the [DistilRoBERTa](https://huggingface.co/distilroberta-base) model as a starting point, the SocialBERT-base Language Model is additionally pre-trained on a text corpus comprising social-related annual reports, sustainability reports, and corporate and general news.
## More details can be found in the paper
```bibtex
@article{Schimanski23ESGBERT,
title={{Bridging the Gap in ESG Measurement: Using NLP to Quantify Environmental, Social, and Governance Communication}},
author={Tobias Schimanski and Andrin Reding and Nico Reding and Julia Bingler and Mathias Kraus and Markus Leippold},
year={2023},
journal={Available on SSRN: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4622514},
}
``` |
nicotaroni/sentiment_analysis_all_mpnet | nicotaroni | 2023-11-09T10:14:31Z | 5 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
]
| text-classification | 2023-11-09T10:13:51Z | ---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# nicotaroni/sentiment_analysis_all_mpnet
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("nicotaroni/sentiment_analysis_all_mpnet")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
MoxoffAdmin/Mistral-Ita-4bit | MoxoffAdmin | 2023-11-09T10:12:25Z | 2 | 0 | null | [
"gguf",
"text-generation-inference",
"it",
"endpoints_compatible",
"region:us"
]
| null | 2023-11-09T09:59:50Z | ---
language:
- it
tags:
- text-generation-inference
--- |
NicholasGri/ppo-SnowballTarget | NicholasGri | 2023-11-09T10:08:30Z | 1 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
]
| reinforcement-learning | 2023-11-09T09:57:27Z | ---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
|
fulouma/MyLoRAs | fulouma | 2023-11-09T09:50:47Z | 0 | 3 | null | [
"license:unknown",
"region:us"
]
| null | 2023-03-22T16:28:40Z | ---
license: unknown
---
Trigger word for LoRA on folder `concept`: cic
everything else: sls
note:
- an unsuffixed LoRA is usually trained for 10 epochs
- some of these need the LoCon extension to work. |
krushal/layoutlmv3-finetuned-invoice | krushal | 2023-11-09T09:49:13Z | 5 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"layoutlmv3",
"token-classification",
"generated_from_trainer",
"dataset:generated",
"base_model:microsoft/layoutlmv3-base",
"base_model:finetune:microsoft/layoutlmv3-base",
"license:cc-by-nc-sa-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2023-11-09T07:50:11Z | ---
license: cc-by-nc-sa-4.0
base_model: microsoft/layoutlmv3-base
tags:
- generated_from_trainer
datasets:
- generated
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: layoutlmv3-finetuned-invoice
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: generated
type: generated
config: sroie
split: test
args: sroie
metrics:
- name: Precision
type: precision
value: 0.9979716024340771
- name: Recall
type: recall
value: 0.9979716024340771
- name: F1
type: f1
value: 0.9979716024340771
- name: Accuracy
type: accuracy
value: 0.9997893406361913
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlmv3-finetuned-invoice
This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on the generated dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0019
- Precision: 0.9980
- Recall: 0.9980
- F1: 0.9980
- Accuracy: 0.9998
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 2000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 2.0 | 100 | 0.1069 | 0.946 | 0.9594 | 0.9527 | 0.9943 |
| No log | 4.0 | 200 | 0.0229 | 0.972 | 0.9858 | 0.9789 | 0.9971 |
| No log | 6.0 | 300 | 0.0158 | 0.972 | 0.9858 | 0.9789 | 0.9971 |
| No log | 8.0 | 400 | 0.0113 | 0.972 | 0.9858 | 0.9789 | 0.9971 |
| 0.1416 | 10.0 | 500 | 0.0103 | 0.9800 | 0.9919 | 0.9859 | 0.9979 |
| 0.1416 | 12.0 | 600 | 0.0047 | 0.9980 | 0.9959 | 0.9970 | 0.9996 |
| 0.1416 | 14.0 | 700 | 0.0035 | 0.9939 | 0.9959 | 0.9949 | 0.9994 |
| 0.1416 | 16.0 | 800 | 0.0044 | 0.9980 | 0.9959 | 0.9970 | 0.9996 |
| 0.1416 | 18.0 | 900 | 0.0027 | 0.9980 | 0.9959 | 0.9970 | 0.9996 |
| 0.0049 | 20.0 | 1000 | 0.0019 | 0.9980 | 0.9980 | 0.9980 | 0.9998 |
| 0.0049 | 22.0 | 1100 | 0.0017 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0049 | 24.0 | 1200 | 0.0041 | 0.9960 | 0.9980 | 0.9970 | 0.9996 |
| 0.0049 | 26.0 | 1300 | 0.0033 | 0.9960 | 0.9980 | 0.9970 | 0.9996 |
| 0.0049 | 28.0 | 1400 | 0.0029 | 0.9960 | 0.9980 | 0.9970 | 0.9996 |
| 0.0029 | 30.0 | 1500 | 0.0018 | 0.9960 | 0.9980 | 0.9970 | 0.9996 |
| 0.0029 | 32.0 | 1600 | 0.0019 | 0.9960 | 0.9980 | 0.9970 | 0.9996 |
| 0.0029 | 34.0 | 1700 | 0.0016 | 0.9980 | 0.9980 | 0.9980 | 0.9998 |
| 0.0029 | 36.0 | 1800 | 0.0017 | 0.9980 | 0.9980 | 0.9980 | 0.9998 |
| 0.0029 | 38.0 | 1900 | 0.0018 | 0.9980 | 0.9980 | 0.9980 | 0.9998 |
| 0.0019 | 40.0 | 2000 | 0.0014 | 0.9980 | 0.9980 | 0.9980 | 0.9998 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
brianmg/mg_data_finetuning | brianmg | 2023-11-09T09:49:01Z | 35 | 0 | transformers | [
"transformers",
"pytorch",
"detr",
"object-detection",
"generated_from_trainer",
"base_model:facebook/detr-resnet-50",
"base_model:finetune:facebook/detr-resnet-50",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| object-detection | 2023-10-27T14:59:35Z | ---
license: apache-2.0
base_model: facebook/detr-resnet-50
tags:
- generated_from_trainer
model-index:
- name: mg_data_finetuning
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mg_data_finetuning
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
### Framework versions
- Transformers 4.34.1
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
NghiemAbe/mbart_EnglistToVietnamese | NghiemAbe | 2023-11-09T09:38:13Z | 10 | 0 | transformers | [
"transformers",
"safetensors",
"mbart",
"text2text-generation",
"generated_from_trainer",
"vi",
"en",
"dataset:NghiemAbe/translation-vietnamese-english",
"base_model:facebook/mbart-large-50",
"base_model:finetune:facebook/mbart-large-50",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-11-09T06:28:24Z | ---
license: mit
base_model: facebook/mbart-large-50
tags:
- generated_from_trainer
model-index:
- name: mbart_EnglistToVietnamese
results: []
language:
- vi
- en
metrics:
- bleu
pipeline_tag: text2text-generation
datasets:
- NghiemAbe/translation-vietnamese-english
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart_EnglistToVietnamese
This model is a fine-tuned version of [facebook/mbart-large-50](https://huggingface.co/facebook/mbart-large-50) on the NghiemAbe/translation-vietnamese-english dataset.
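Not part of the original card: a minimal English→Vietnamese inference sketch. It assumes the repo ships the standard MBart-50 tokenizer and that the usual `en_XX`/`vi_VN` language codes apply.

```python
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast

model = MBartForConditionalGeneration.from_pretrained("NghiemAbe/mbart_EnglistToVietnamese")
tokenizer = MBart50TokenizerFast.from_pretrained(
    "NghiemAbe/mbart_EnglistToVietnamese", src_lang="en_XX", tgt_lang="vi_VN"
)

inputs = tokenizer("The weather is nice today.", return_tensors="pt")
generated = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.lang_code_to_id["vi_VN"],  # force Vietnamese output
    max_length=64,
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```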
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| No log | 1.0 | 1250 | 1.2577 | 35.2468 | 49.35 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1 |
Acadys/PointCon-Vigogne70B | Acadys | 2023-11-09T09:37:15Z | 0 | 0 | null | [
"safetensors",
"generated_from_trainer",
"base_model:bofenghuang/vigogne-2-70b-chat",
"base_model:finetune:bofenghuang/vigogne-2-70b-chat",
"license:llama2",
"region:us"
]
| null | 2023-11-09T09:06:12Z | ---
license: llama2
base_model: bofenghuang/vigogne-2-70b-chat
tags:
- generated_from_trainer
model-index:
- name: PointCon-vigogne-2-70b-chat-3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# PointCon-vigogne-2-70b-chat-3
This model is a fine-tuned version of [bofenghuang/vigogne-2-70b-chat](https://huggingface.co/bofenghuang/vigogne-2-70b-chat) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 1.7526
- eval_runtime: 116.2987
- eval_samples_per_second: 0.361
- eval_steps_per_second: 0.361
- epoch: 1.2
- step: 150
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
dhanush23/speecht5_finetuned_sadness_emotion | dhanush23 | 2023-11-09T09:33:41Z | 4 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
]
| text-to-audio | 2023-11-08T08:03:13Z | ---
license: mit
base_model: microsoft/speecht5_tts
tags:
- generated_from_trainer
model-index:
- name: speecht5_finetuned_sadness_emotion
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speecht5_finetuned_sadness_emotion
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.3717
- eval_runtime: 34.0863
- eval_samples_per_second: 16.165
- eval_steps_per_second: 2.024
- epoch: 43.48
- step: 3000
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- training_steps: 4000
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
dhanush23/speecht5_finetuned_fear_emotion | dhanush23 | 2023-11-09T09:33:30Z | 7 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
]
| text-to-audio | 2023-11-08T07:11:09Z | ---
license: mit
base_model: microsoft/speecht5_tts
tags:
- generated_from_trainer
model-index:
- name: speecht5_finetuned_fear_emotion
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speecht5_finetuned_fear_emotion
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.3791
- eval_runtime: 31.2521
- eval_samples_per_second: 17.631
- eval_steps_per_second: 2.208
- epoch: 43.48
- step: 3000
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- training_steps: 4000
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
YuHannn/fine_tuning_roberta_model | YuHannn | 2023-11-09T09:29:17Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:cardiffnlp/twitter-roberta-base-sentiment-latest",
"base_model:finetune:cardiffnlp/twitter-roberta-base-sentiment-latest",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-11-08T03:21:52Z | ---
base_model: cardiffnlp/twitter-roberta-base-sentiment-latest
tags:
- generated_from_trainer
model-index:
- name: fine_tuning_roberta_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fine_tuning_roberta_model
This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base-sentiment-latest](https://huggingface.co/cardiffnlp/twitter-roberta-base-sentiment-latest) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1741
- Rmse: 0.6882
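Not part of the original card: a minimal inference sketch with the `transformers` pipeline. The label names come from whatever id2label mapping was saved with the fine-tune, which is not documented here.

```python
from transformers import pipeline

# Load the fine-tuned checkpoint directly from the Hub
classifier = pipeline("text-classification", model="YuHannn/fine_tuning_roberta_model")

print(classifier("I really enjoyed this product!"))
# e.g. [{'label': ..., 'score': 0.97}] -- actual labels depend on the saved config
```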
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rmse |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.5085 | 1.0 | 5 | 0.6268 | 0.7255 |
| 0.2562 | 2.0 | 10 | 0.6686 | 0.7255 |
| 0.1496 | 3.0 | 15 | 0.6989 | 0.5620 |
| 0.0934 | 4.0 | 20 | 1.0044 | 0.6882 |
| 0.1224 | 5.0 | 25 | 1.1798 | 0.7255 |
| 0.0561 | 6.0 | 30 | 1.1906 | 0.6882 |
| 0.0207 | 7.0 | 35 | 1.1774 | 0.6882 |
| 0.0417 | 8.0 | 40 | 1.1551 | 0.6882 |
| 0.0131 | 9.0 | 45 | 1.1628 | 0.6882 |
| 0.0134 | 10.0 | 50 | 1.1741 | 0.6882 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cpu
- Datasets 2.14.6
- Tokenizers 0.14.1
|
elemosynov/a2c-PandaReachDense-v3 | elemosynov | 2023-11-09T09:25:53Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-11-09T09:20:08Z | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.24 +/- 0.11
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of a **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is assumed from the usual `<algo>-<env>.zip` naming on the Hub):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# The filename is an assumption based on the common SB3 Hub naming convention
checkpoint = load_from_hub(repo_id="elemosynov/a2c-PandaReachDense-v3", filename="a2c-PandaReachDense-v3.zip")
model = A2C.load(checkpoint)
```
|
afrideva/TinyLlama-1.1B-intermediate-1.5T-PTBR-Instruct-v2-GGUF | afrideva | 2023-11-09T09:19:20Z | 52 | 0 | null | [
"gguf",
"ggml",
"quantized",
"q2_k",
"q3_k_m",
"q4_k_m",
"q5_k_m",
"q6_k",
"q8_0",
"text-generation",
"pt",
"en",
"license:mit",
"region:us"
]
| text-generation | 2023-11-09T09:11:32Z | ---
base_model: cnmoro/TinyLlama-1.1B-intermediate-1.5T-PTBR-Instruct-v2
inference: false
language:
- pt
- en
license: mit
model_creator: cnmoro
model_name: TinyLlama-1.1B-intermediate-1.5T-PTBR-Instruct-v2
pipeline_tag: text-generation
quantized_by: afrideva
tags:
- gguf
- ggml
- quantized
- q2_k
- q3_k_m
- q4_k_m
- q5_k_m
- q6_k
- q8_0
---
# cnmoro/TinyLlama-1.1B-intermediate-1.5T-PTBR-Instruct-v2-GGUF
Quantized GGUF model files for [TinyLlama-1.1B-intermediate-1.5T-PTBR-Instruct-v2](https://huggingface.co/cnmoro/TinyLlama-1.1B-intermediate-1.5T-PTBR-Instruct-v2) from [cnmoro](https://huggingface.co/cnmoro)
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [tinyllama-1.1b-intermediate-1.5t-ptbr-instruct-v2.q2_k.gguf](https://huggingface.co/afrideva/TinyLlama-1.1B-intermediate-1.5T-PTBR-Instruct-v2-GGUF/resolve/main/tinyllama-1.1b-intermediate-1.5t-ptbr-instruct-v2.q2_k.gguf) | q2_k | 482.14 MB |
| [tinyllama-1.1b-intermediate-1.5t-ptbr-instruct-v2.q3_k_m.gguf](https://huggingface.co/afrideva/TinyLlama-1.1B-intermediate-1.5T-PTBR-Instruct-v2-GGUF/resolve/main/tinyllama-1.1b-intermediate-1.5t-ptbr-instruct-v2.q3_k_m.gguf) | q3_k_m | 549.85 MB |
| [tinyllama-1.1b-intermediate-1.5t-ptbr-instruct-v2.q4_k_m.gguf](https://huggingface.co/afrideva/TinyLlama-1.1B-intermediate-1.5T-PTBR-Instruct-v2-GGUF/resolve/main/tinyllama-1.1b-intermediate-1.5t-ptbr-instruct-v2.q4_k_m.gguf) | q4_k_m | 667.81 MB |
| [tinyllama-1.1b-intermediate-1.5t-ptbr-instruct-v2.q5_k_m.gguf](https://huggingface.co/afrideva/TinyLlama-1.1B-intermediate-1.5T-PTBR-Instruct-v2-GGUF/resolve/main/tinyllama-1.1b-intermediate-1.5t-ptbr-instruct-v2.q5_k_m.gguf) | q5_k_m | 782.04 MB |
| [tinyllama-1.1b-intermediate-1.5t-ptbr-instruct-v2.q6_k.gguf](https://huggingface.co/afrideva/TinyLlama-1.1B-intermediate-1.5T-PTBR-Instruct-v2-GGUF/resolve/main/tinyllama-1.1b-intermediate-1.5t-ptbr-instruct-v2.q6_k.gguf) | q6_k | 903.41 MB |
| [tinyllama-1.1b-intermediate-1.5t-ptbr-instruct-v2.q8_0.gguf](https://huggingface.co/afrideva/TinyLlama-1.1B-intermediate-1.5T-PTBR-Instruct-v2-GGUF/resolve/main/tinyllama-1.1b-intermediate-1.5t-ptbr-instruct-v2.q8_0.gguf) | q8_0 | 1.17 GB |
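Not part of the original card: a minimal sketch for running one of the files listed above with `llama-cpp-python`, using the prompt template shown in the original model card below. The choice of the q4_k_m file and the generation settings are arbitrary.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one of the quantized files from the table above (q4_k_m picked arbitrarily)
model_path = hf_hub_download(
    repo_id="afrideva/TinyLlama-1.1B-intermediate-1.5T-PTBR-Instruct-v2-GGUF",
    filename="tinyllama-1.1b-intermediate-1.5t-ptbr-instruct-v2.q4_k_m.gguf",
)
llm = Llama(model_path=model_path, n_ctx=2048)

# Prompt template taken from the original model card below
instruction = "Explique brevemente o que é aprendizado de máquina."
prompt = (
    "Below is an instruction that describes a task, paired with an input that provides "
    "further context. Write a response that appropriately completes the request.\n\n"
    f"### Instruction:\n{instruction}\n\n### Response:\n"
)
output = llm(prompt, max_tokens=256, stop=["### Instruction:"])
print(output["choices"][0]["text"])
```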
## Original Model Card:
A fine-tuned version of PY007/TinyLlama-1.1B-intermediate-step-715k-1.5T, trained on a Portuguese instruct dataset using axolotl.
This is a work in progress; the final version will be v3 or v4.
Prompt format:
f"Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.\n\n### Instruction:\n{instruction}\n\n### Response:\n" |
notChocoMilk/Proto_AD_Version | notChocoMilk | 2023-11-09T09:18:50Z | 0 | 0 | null | [
"license:other",
"region:us"
]
| null | 2023-11-09T09:03:48Z | ---
license: other
license_name: idfk
license_link: LICENSE
---
|
sam-babayev/sf_large_all | sam-babayev | 2023-11-09T09:17:08Z | 7 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
]
| text-classification | 2023-11-09T09:16:37Z | ---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# jamesgpt1/sf_large_all
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("jamesgpt1/sf_large_all")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
LarryAIDraw/YorForger_SxF | LarryAIDraw | 2023-11-09T09:03:16Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-11-09T08:58:11Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/193646/yor-forger-spy-x-family |
LarryAIDraw/Darkness | LarryAIDraw | 2023-11-09T09:01:57Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-11-09T08:55:23Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/194427/darkness-konosuba |
azg-azg/SQLBert | azg-azg | 2023-11-09T09:01:56Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"fill-mask",
"sq",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2023-11-02T13:32:50Z | ---
license: apache-2.0
language:
- sq
pipeline_tag: fill-mask
--- |
LarryAIDraw/skirk_640_0.35_128_any3_networks.lora_lora | LarryAIDraw | 2023-11-09T09:01:41Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-11-09T08:54:56Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/188615/pre-release-version-skirk-tast-genshin-impact |
LarryAIDraw/Persona5FutabaSakura | LarryAIDraw | 2023-11-09T09:01:27Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-11-09T08:54:32Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/189952/not-so-perfect-futaba-sakura-from-persona-5 |