| modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string) |
---|---|---|---|---|---|---|---|---|---|
sleepynlp/dqn-SpaceInvadersNoFrameskip-v4-leo
|
sleepynlp
| 2023-06-29T04:39:10Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-29T04:38:32Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 604.50 +/- 141.16
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```bash
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga sleepynlp -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```bash
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga sleepynlp -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```bash
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga sleepynlp
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 2000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
Nikit2580/Doremon
|
Nikit2580
| 2023-06-29T04:32:26Z | 0 | 0 | null |
[
"license:bigcode-openrail-m",
"region:us"
] | null | 2023-06-29T04:32:26Z |
---
license: bigcode-openrail-m
---
|
ericNguyen0132/roberta-base-Dep
|
ericNguyen0132
| 2023-06-29T04:16:29Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-29T02:58:19Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: roberta-base-Dep
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-Dep
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5337
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
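As a rough guide to reproducing a comparable run, the sketch below maps the hyperparameters listed above onto Hugging Face `TrainingArguments`; the dataset, `Trainer` wiring, and per-epoch evaluation setting are assumptions, since the card does not specify them.

```python
# Minimal sketch: the hyperparameters above expressed as TrainingArguments.
# Dataset loading and the Trainer's train/eval datasets are omitted; names are illustrative.
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model = AutoModelForSequenceClassification.from_pretrained("roberta-base")
tokenizer = AutoTokenizer.from_pretrained("roberta-base")

args = TrainingArguments(
    output_dir="roberta-base-Dep",
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=5,
    evaluation_strategy="epoch",  # assumption: evaluated once per epoch, as in the results table
)

# trainer = Trainer(model=model, args=args, train_dataset=..., eval_dataset=..., tokenizer=tokenizer)
# trainer.train()
```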
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.6762 | 1.0 | 751 | 0.5790 |
| 0.6394 | 2.0 | 1502 | 0.5337 |
| 0.4529 | 3.0 | 2253 | 0.6777 |
| 0.3988 | 4.0 | 3004 | 0.6316 |
| 0.4044 | 5.0 | 3755 | 0.8178 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
QuangHuy54/roberta-small-base-squad
|
QuangHuy54
| 2023-06-29T04:12:13Z | 130 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-06-29T02:21:20Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: roberta-small-base-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-small-base-squad
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9868
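Since the card does not include a usage snippet, here is a minimal sketch (not from the original card) of querying the checkpoint with the standard `transformers` question-answering pipeline; the question and context are illustrative.

```python
# Minimal sketch: extractive QA with the fine-tuned checkpoint.
from transformers import pipeline

qa = pipeline("question-answering", model="QuangHuy54/roberta-small-base-squad")
result = qa(
    question="Which dataset was the model fine-tuned on?",
    context="roberta-small-base-squad is a distilroberta-base model fine-tuned on the SQuAD dataset.",
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```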
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.2
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.1483 | 1.0 | 4928 | 1.0453 |
| 0.9092 | 2.0 | 9856 | 0.9868 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
dalonsoherrera/distilbert-base-multilingual-cased-finetuned-MeIA-AnalisisDeSentimientos-ACMe
|
dalonsoherrera
| 2023-06-29T04:05:58Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-29T03:29:27Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: distilbert-base-multilingual-cased-finetuned-MeIA-AnalisisDeSentimientos-ACMe
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-multilingual-cased-finetuned-MeIA-AnalisisDeSentimientos-ACMe
This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0841
- F1: 0.5553
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.1321 | 1.0 | 383 | 1.0811 | 0.5100 |
| 0.9297 | 2.0 | 766 | 1.0495 | 0.5279 |
| 0.8338 | 3.0 | 1149 | 1.0507 | 0.5510 |
| 0.7366 | 4.0 | 1532 | 1.0841 | 0.5553 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
DataHammer/scidpr-ctx-encoder
|
DataHammer
| 2023-06-29T03:59:55Z | 94 | 0 |
transformers
|
[
"transformers",
"pytorch",
"dpr",
"sentence-similarity",
"en",
"dataset:allenai/qasper",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-06-29T02:58:43Z |
---
license: apache-2.0
datasets:
- allenai/qasper
language:
- en
library_name: transformers
pipeline_tag: sentence-similarity
---
# SciDPR Context Encoder
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
Dense Passage Retrieval (DPR) is a set of tools and models for state-of-the-art open-domain Q&A research. scidpr-ctx-encoder is the Context Encoder trained using the Scientific Question Answer (QA) dataset (Pradeep et al., 2021).
- **Developed by:** See [GitHub repo](https://github.com/gmftbyGMFTBY/science-llm) for model developers
- **Model type:** BERT-based encoder
- **Language(s) (NLP):** [Apache 2.0](https://github.com/gmftbyGMFTBY/science-llm/blob/main/LICENSE)
- **License:** English
### Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** [GitHub Repo](https://github.com/gmftbyGMFTBY/science-llm)
- **Paper:** [Paper Repo]()
|
DataHammer/scidpr-question-encoder
|
DataHammer
| 2023-06-29T03:59:35Z | 94 | 0 |
transformers
|
[
"transformers",
"pytorch",
"dpr",
"feature-extraction",
"sentence-similarity",
"en",
"dataset:allenai/qasper",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-06-29T02:58:25Z |
---
datasets:
- allenai/qasper
language:
- en
library_name: transformers
pipeline_tag: sentence-similarity
license: apache-2.0
---
# SciDPR Question Encoder
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
Dense Passage Retrieval (DPR) is a set of tools and models for state-of-the-art open-domain Q&A research. scidpr-question-encoder is the Question Encoder trained using the Scientific Question Answer (QA) dataset (Pradeep et al., 2021).
- **Developed by:** See [GitHub repo](https://github.com/gmftbyGMFTBY/science-llm) for model developers
- **Model type:** BERT-based encoder
- **Language(s) (NLP):** English
- **License:** [Apache 2.0](https://github.com/gmftbyGMFTBY/science-llm/blob/main/LICENSE)
### Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** [GitHub Repo](https://github.com/gmftbyGMFTBY/science-llm)
- **Paper:** [Paper Repo]()
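Since neither SciDPR card shows a usage snippet, here is a minimal sketch (an assumption, not from the cards) of scoring a question against a passage with the question encoder above and the companion context encoder, using the `transformers` DPR classes; the example texts are illustrative.

```python
# Sketch only: assumes both checkpoints load with transformers' DPR encoder classes.
import torch
from transformers import AutoTokenizer, DPRContextEncoder, DPRQuestionEncoder

q_tok = AutoTokenizer.from_pretrained("DataHammer/scidpr-question-encoder")
q_enc = DPRQuestionEncoder.from_pretrained("DataHammer/scidpr-question-encoder")
c_tok = AutoTokenizer.from_pretrained("DataHammer/scidpr-ctx-encoder")
c_enc = DPRContextEncoder.from_pretrained("DataHammer/scidpr-ctx-encoder")

question = "What retrieval method does the paper propose?"
passage = "We propose a dense passage retriever trained on scientific question answering data."

with torch.no_grad():
    q_emb = q_enc(**q_tok(question, return_tensors="pt")).pooler_output
    c_emb = c_enc(**c_tok(passage, return_tensors="pt")).pooler_output

# Relevance is the dot product between question and passage embeddings.
print((q_emb @ c_emb.T).item())
```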
|
chaowu/a2c-AntBulletEnv-v0
|
chaowu
| 2023-06-29T03:45:43Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-29T00:50:29Z |
---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1660.92 +/- 74.15
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
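Until the TODO above is filled in, a plausible pattern is the standard `huggingface_sb3` loading flow sketched below; the checkpoint filename inside the repository is an assumption, not confirmed by the card.

```python
# Sketch only: loading the checkpoint via huggingface_sb3.
# The filename "a2c-AntBulletEnv-v0.zip" is an assumption about how the model was pushed.
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

checkpoint = load_from_hub(
    repo_id="chaowu/a2c-AntBulletEnv-v0",
    filename="a2c-AntBulletEnv-v0.zip",
)
model = A2C.load(checkpoint)
```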
|
beamandym/bert-base-multilingual-uncased-sentiment-finetuned-MeIA-AnalisisDeSentimientos
|
beamandym
| 2023-06-29T03:43:04Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-29T01:25:16Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: bert-base-multilingual-uncased-sentiment-finetuned-MeIA-AnalisisDeSentimientos
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-multilingual-uncased-sentiment-finetuned-MeIA-AnalisisDeSentimientos
This model is a fine-tuned version of [nlptown/bert-base-multilingual-uncased-sentiment](https://huggingface.co/nlptown/bert-base-multilingual-uncased-sentiment) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0717
- F1: 0.5857
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.9243 | 1.0 | 766 | 1.0143 | 0.5370 |
| 0.8299 | 2.0 | 1532 | 0.9847 | 0.5773 |
| 0.6513 | 3.0 | 2298 | 1.0717 | 0.5857 |
| 0.4954 | 4.0 | 3064 | 1.2263 | 0.5773 |
| 0.3879 | 5.0 | 3830 | 1.3412 | 0.5795 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Ibrahim-Alam/finetuning-xlnet-base-cased-on-tweet_sentiment_binary
|
Ibrahim-Alam
| 2023-06-29T03:29:07Z | 90 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlnet",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-29T03:12:36Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuning-xlnet-base-cased-on-tweet_sentiment_binary
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-xlnet-base-cased-on-tweet_sentiment_binary
This model is a fine-tuned version of [xlnet-base-cased](https://huggingface.co/xlnet-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2407
- Accuracy: 0.9334
- F1: 0.9369
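As the card gives no usage example, here is a minimal sketch (not from the original card) of running the checkpoint through the `transformers` text-classification pipeline; the label names are whatever the checkpoint's config defines.

```python
# Minimal sketch: binary tweet-sentiment classification with the fine-tuned XLNet checkpoint.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="Ibrahim-Alam/finetuning-xlnet-base-cased-on-tweet_sentiment_binary",
)
print(classifier("I love how this turned out!"))
```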
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
carissa1/SimplyLaw_Legal_Questions
|
carissa1
| 2023-06-29T03:18:09Z | 0 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"question-answering",
"license:mit",
"region:us"
] |
question-answering
| 2023-06-27T17:26:28Z |
---
license: mit
pipeline_tag: question-answering
library_name: adapter-transformers
---
|
t3PbMvBN6SXv/ppo-PyramidsRND
|
t3PbMvBN6SXv
| 2023-06-29T03:07:04Z | 5 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2023-06-29T03:06:09Z |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: t3PbMvBN6SXv/ppo-PyramidsRND
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
TheFools/Onickayes
|
TheFools
| 2023-06-29T02:58:30Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-27T14:04:27Z |
---
license: creativeml-openrail-m
---
|
t3PbMvBN6SXv/ppo-SnowballTarget
|
t3PbMvBN6SXv
| 2023-06-29T02:22:43Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-06-29T02:22:37Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: t3PbMvBN6SXv/ppo-SnowballTarget
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
hw2942/Erlangshen-Longformer-110M-finetuning-wallstreetcn-morning-news-vix-sz50-V3
|
hw2942
| 2023-06-29T02:15:02Z | 87 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"longformer",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-29T01:43:03Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: Erlangshen-Longformer-110M-finetuning-wallstreetcn-morning-news-vix-sz50-V3
results: []
widget:
- text:A股创业板六年新高;纳指跌落高位,标普又新高,创史上第二大中概IPO和今年美股最大IPO的滴滴首日冲高回落,市值破800亿美元,叮咚买菜次日涨逾60%;美元逾两月新高,金银铜6月大跌,原油半年涨超50%。\n中国6月官方制造业PMI为50.9,价格指数从高位回落。\n央行等六部门:充分发挥信贷等金融子市场合力,增强政策的针对性和可操作性。\n人社部 “十四五” 发展规划要求,基本养老保险参保率达95%,城镇新增就业逾5000万人。\n沪深交所7月19日起下调基金交易经手费收费标准。\n奈雪的茶赴港上市首日破发,收盘大跌14%,市值跌破300亿港元。\n港股上市倒计时,小鹏汽车定价165港元/股。\n格力2020股东会通过员工持股计划等议案,董明珠称接班人不是我说你行就行,是你能行才行。\n美国6月小非农ADP新增就业高于预期,绝对值较5月有所回落。\n美联储逆回购用量史上首次逼近1万亿美元。\n媒体称拜登最早下周颁布新行政令,限制多个行业的寡头垄断。\n亚马逊称FTC新任主席有偏见,寻求其回避反垄断调查。\n散户最爱平台Robinhood遭FINRA创纪录罚款7000万美元,被指坑害百万客户。
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Erlangshen-Longformer-110M-finetuning-wallstreetcn-morning-news-vix-sz50-V3
This model is a fine-tuned version of [IDEA-CCNL/Erlangshen-Longformer-110M](https://huggingface.co/IDEA-CCNL/Erlangshen-Longformer-110M) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6427
- Accuracy: 0.6154
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 38 | 0.6960 | 0.5 |
| No log | 2.0 | 76 | 0.7015 | 0.5 |
| No log | 3.0 | 114 | 0.8248 | 0.5 |
| No log | 4.0 | 152 | 0.6956 | 0.5 |
| No log | 5.0 | 190 | 0.6886 | 0.5 |
| No log | 6.0 | 228 | 0.7065 | 0.5 |
| No log | 7.0 | 266 | 0.7070 | 0.5 |
| No log | 8.0 | 304 | 0.7395 | 0.5385 |
| No log | 9.0 | 342 | 0.6871 | 0.6538 |
| No log | 10.0 | 380 | 0.6427 | 0.6154 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
vg055/distilbert-base-multilingual-cased-finetuned-MeIA-AnalisisDeSentimientos
|
vg055
| 2023-06-29T01:46:57Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-16T23:13:23Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: distilbert-base-multilingual-cased-finetuned-MeIA-AnalisisDeSentimientos
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-multilingual-cased-finetuned-MeIA-AnalisisDeSentimientos
This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0243
- F1: 0.5454
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.0835 | 1.0 | 766 | 1.1722 | 0.4787 |
| 0.9631 | 2.0 | 1532 | 1.0243 | 0.5454 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
hoaio/Reinforce-Cartpole-v1
|
hoaio
| 2023-06-29T01:46:27Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-29T01:46:18Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Cartpole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 468.80 +/- 37.79
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
SandyDelMar/bert-base-spanish-wwm-cased-finetuned-MeIA-AnalisisDeSentimientos
|
SandyDelMar
| 2023-06-29T01:26:13Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-28T20:25:23Z |
---
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: bert-base-spanish-wwm-cased-finetuned-MeIA-AnalisisDeSentimientos
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-spanish-wwm-cased-finetuned-MeIA-AnalisisDeSentimientos
This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-cased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1296
- F1: 0.5826
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.9559 | 1.0 | 766 | 0.9486 | 0.5698 |
| 0.8016 | 2.0 | 1532 | 0.9745 | 0.5788 |
| 0.6378 | 3.0 | 2298 | 1.1296 | 0.5826 |
| 0.4044 | 4.0 | 3064 | 1.3569 | 0.5786 |
| 0.2667 | 5.0 | 3830 | 1.6739 | 0.5744 |
| 0.1763 | 6.0 | 4596 | 2.1382 | 0.5614 |
| 0.0653 | 7.0 | 5362 | 2.5472 | 0.5730 |
| 0.068 | 8.0 | 6128 | 2.9414 | 0.5748 |
| 0.0317 | 9.0 | 6894 | 3.1639 | 0.5742 |
| 0.015 | 10.0 | 7660 | 3.2025 | 0.5767 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
osunlp/attrscore-flan-t5-xxl
|
osunlp
| 2023-06-29T01:25:48Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"dataset:osunlp/AttrScore",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-22T23:42:05Z |
---
license: apache-2.0
datasets:
- osunlp/AttrScore
---
An AttributionScore model fine-tuned from FLAN-T5-XXL on the combined repurposed data from https://huggingface.co/datasets/osunlp/AttrScore.
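The card does not show the prompt format used during fine-tuning, so the sketch below only demonstrates loading the checkpoint as a seq2seq model; the input template is illustrative, not the one from the AttrScore data.

```python
# Sketch: loading the checkpoint as a seq2seq LM. The prompt below is illustrative only;
# consult the AttrScore dataset/repository for the actual evaluation prompt format.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("osunlp/attrscore-flan-t5-xxl")
model = AutoModelForSeq2SeqLM.from_pretrained("osunlp/attrscore-flan-t5-xxl")

prompt = (
    "Claim: The Eiffel Tower is in Paris. "
    "Reference: The Eiffel Tower is a landmark in Paris, France. "
    "Is the claim attributable to the reference?"
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```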
|
osunlp/attrscore-alpaca-13b
|
osunlp
| 2023-06-29T01:25:13Z | 11 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"dataset:osunlp/AttrScore",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-27T15:10:28Z |
---
datasets:
- osunlp/AttrScore
---
|
osunlp/attrscore-vicuna-13b
|
osunlp
| 2023-06-29T01:24:59Z | 11 | 1 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"dataset:osunlp/AttrScore",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-27T14:34:31Z |
---
datasets:
- osunlp/AttrScore
---
|
Laurie/baichuan7b-lora-merged
|
Laurie
| 2023-06-29T01:24:44Z | 14 | 0 |
transformers
|
[
"transformers",
"pytorch",
"baichuan",
"text-generation",
"text2text-generation",
"custom_code",
"zh",
"en",
"dataset:Laurie/alpaca_chinese_dataset",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-28T10:13:53Z |
---
license: apache-2.0
datasets:
- Laurie/alpaca_chinese_dataset
language:
- zh
- en
metrics:
- accuracy
library_name: transformers
pipeline_tag: text2text-generation
---
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer
import torch

tokenizer = AutoTokenizer.from_pretrained("baichuan7b-lora-merged", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    "baichuan7b-lora-merged",
    device_map="auto",
    trust_remote_code=True,
    torch_dtype=torch.float16,
)
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

while True:
    query = input("请输入问题:")  # prompt: "Please enter your question:"
    if len(query.strip()) == 0:
        break
    inputs = tokenizer(["<human>:{}\n<bot>:".format(query)], return_tensors="pt")
    inputs = inputs.to("cuda")
    generate_ids = model.generate(**inputs, max_new_tokens=256, streamer=streamer)
```
|
osunlp/attrscore-alpaca-7b
|
osunlp
| 2023-06-29T01:24:29Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"dataset:osunlp/AttrScore",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-27T15:00:41Z |
---
datasets:
- osunlp/AttrScore
---
|
osunlp/attrscore-flan-t5-xl
|
osunlp
| 2023-06-29T01:23:57Z | 142 | 1 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"dataset:osunlp/AttrScore",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-22T22:57:32Z |
---
license: apache-2.0
datasets:
- osunlp/AttrScore
---
An AttributionScore model fine-tuned from FLAN-T5-XL on the combined repurposed data from https://huggingface.co/datasets/osunlp/AttrScore.
|
YakovElm/Qt_5_BERT_Over_Sampling
|
YakovElm
| 2023-06-29T01:12:53Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-29T01:00:48Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Qt_5_BERT_Over_Sampling
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Qt_5_BERT_Over_Sampling
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0627
- Train Accuracy: 0.9744
- Validation Loss: 0.4859
- Validation Accuracy: 0.8913
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
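The optimizer entry above is the serialized Keras config; as a convenience, a minimal sketch of rebuilding the same optimizer with standard TensorFlow/Keras calls (nothing model-specific is assumed) follows.

```python
# Sketch: the serialized optimizer config above, rebuilt as a Keras Adam optimizer.
import tensorflow as tf

optimizer = tf.keras.optimizers.Adam(
    learning_rate=3e-05,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-08,
    clipnorm=1.0,
    amsgrad=False,
)
```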
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.5160 | 0.7339 | 0.2827 | 0.9124 | 0 |
| 0.1566 | 0.9430 | 0.5148 | 0.8532 | 1 |
| 0.0627 | 0.9744 | 0.4859 | 0.8913 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
yhna/ppo-SnowballTarget
|
yhna
| 2023-06-29T01:06:39Z | 55 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-06-29T01:06:35Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: yhna/ppo-SnowballTarget
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
TalesLF/ppo-Huggy
|
TalesLF
| 2023-06-29T00:42:23Z | 2 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-06-29T00:42:18Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: TalesLF/ppo-Huggy
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
pcapp/Reinforce-CartPole
|
pcapp
| 2023-06-29T00:33:19Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-29T00:33:10Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
t3PbMvBN6SXv/Reinforce-CartPole-v1
|
t3PbMvBN6SXv
| 2023-06-29T00:18:45Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-28T03:20:59Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 430.30 +/- 74.79
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
TheBloke/GPlatty-30B-GGML
|
TheBloke
| 2023-06-29T00:01:02Z | 0 | 4 | null |
[
"arxiv:2302.13971",
"license:other",
"region:us"
] | null | 2023-06-28T22:48:27Z |
---
inference: false
license: other
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# Lilloukas' GPlatty 30B GGML
These files are GGML format model files for [Lilloukas' GPlatty 30B](https://huggingface.co/lilloukas/GPlatty-30B).
GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp) and libraries and UIs which support this format, such as:
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
* [KoboldCpp](https://github.com/LostRuins/koboldcpp)
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui)
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python)
* [ctransformers](https://github.com/marella/ctransformers)
## Repositories available
* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/GPlatty-30B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/GPlatty-30B-GGML)
* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/lilloukas/GPlatty-30B)
<!-- compatibility_ggml start -->
## Compatibility
### Original llama.cpp quant methods: `q4_0, q4_1, q5_0, q5_1, q8_0`
I have quantized these 'original' quantisation methods using an older version of llama.cpp so that they remain compatible with llama.cpp as of May 19th, commit `2d5db48`.
These are guaranteed to be compatible with any UIs, tools and libraries released since late May.
### New k-quant methods: `q2_K, q3_K_S, q3_K_M, q3_K_L, q4_K_S, q4_K_M, q5_K_S, q6_K`
These new quantisation methods are compatible with llama.cpp as of June 6th, commit `2d43387`.
They are now also compatible with recent releases of text-generation-webui, KoboldCpp, llama-cpp-python and ctransformers. Other tools and libraries may or may not be compatible - check their documentation if in doubt.
## Explanation of the new k-quant methods
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
* GGML_TYPE_Q8_K - "type-0" 8-bit quantization. Only used for quantizing intermediate results. The difference to the existing Q8_0 is that the block size is 256. All 2-6 bit dot products are implemented for this quantization type.
Refer to the Provided Files table below to see what files use which methods, and how.
<!-- compatibility_ggml end -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| gplatty-30b.ggmlv3.q2_K.bin | q2_K | 2 | 13.71 GB | 16.21 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
| gplatty-30b.ggmlv3.q3_K_L.bin | q3_K_L | 3 | 17.28 GB | 19.78 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| gplatty-30b.ggmlv3.q3_K_M.bin | q3_K_M | 3 | 15.72 GB | 18.22 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| gplatty-30b.ggmlv3.q3_K_S.bin | q3_K_S | 3 | 14.06 GB | 16.56 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors |
| gplatty-30b.ggmlv3.q4_0.bin | q4_0 | 4 | 18.30 GB | 20.80 GB | Original llama.cpp quant method, 4-bit. |
| gplatty-30b.ggmlv3.q4_1.bin | q4_1 | 4 | 20.33 GB | 22.83 GB | Original llama.cpp quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. |
| gplatty-30b.ggmlv3.q4_K_M.bin | q4_K_M | 4 | 19.62 GB | 22.12 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K |
| gplatty-30b.ggmlv3.q4_K_S.bin | q4_K_S | 4 | 18.36 GB | 20.86 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors |
| gplatty-30b.ggmlv3.q5_0.bin | q5_0 | 5 | 22.37 GB | 24.87 GB | Original llama.cpp quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
| gplatty-30b.ggmlv3.q5_1.bin | q5_1 | 5 | 24.40 GB | 26.90 GB | Original llama.cpp quant method, 5-bit. Even higher accuracy, resource usage and slower inference. |
| gplatty-30b.ggmlv3.q5_K_M.bin | q5_K_M | 5 | 23.05 GB | 25.55 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K |
| gplatty-30b.ggmlv3.q5_K_S.bin | q5_K_S | 5 | 22.40 GB | 24.90 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors |
| gplatty-30b.ggmlv3.q6_K.bin | q6_K | 6 | 26.69 GB | 29.19 GB | New k-quant method. Uses GGML_TYPE_Q6_K - 6-bit quantization - for all tensors |
| gplatty-30b.ggmlv3.q8_0.bin | q8_0 | 8 | 34.56 GB | 37.06 GB | Original llama.cpp quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
## How to run in `llama.cpp`
I use the following command line; adjust for your tastes and needs:
```bash
./main -t 10 -ngl 32 -m gplatty-30b.ggmlv3.q5_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### Instruction: Write a story about llamas\n### Response:"
```
If you're able to use full GPU offloading, you should use `-t 1` to get best performance.
If not able to fully offload to GPU, you should use more cores. Change `-t 10` to the number of physical CPU cores you have, or a lower number depending on what gives best performance.
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov.
**Patreon special mentions**: Pyrater, WelcomeToTheClub, Kalila, Mano Prime, Trenton Dambrowitz, Spiking Neurons AB, Pierre Kircher, Fen Risland, Kevin Schuppel, Luke, Rainer Wilmers, vamX, Gabriel Puliatti, Alex , Karl Bernard, Ajan Kanaga, Talal Aujan, Space Cruiser, ya boyyy, biorpg, Johann-Peter Hartmann, Asp the Wyvern, Ai Maven, Ghost , Preetika Verma, Nikolai Manek, trip7s trip, John Detwiler, Fred von Graf, Artur Olbinski, subjectnull, John Villwock, Junyu Yang, Rod A, Lone Striker, Chris McCloskey, Iucharbius , Matthew Berman, Illia Dulskyi, Khalefa Al-Ahmad, Imad Khwaja, chris gileta, Willem Michiel, Greatston Gnanesh, Derek Yates, K, Alps Aficionado, Oscar Rangel, David Flickinger, Luke Pendergrass, Deep Realms, Eugene Pentland, Cory Kujawski, terasurfer , Jonathan Leane, senxiiz, Joseph William Delisle, Sean Connelly, webtim, zynix , Nathan LeClaire.
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card: Lilloukas' GPlatty 30B
# Information
GPlatty-30B is a merge of [lilloukas/Platypus-30B](https://huggingface.co/lilloukas/Platypus-30B) and [chansung/gpt4-alpaca-lora-30b](https://huggingface.co/chansung/gpt4-alpaca-lora-30b)
| Metric | Value |
|-----------------------|-------|
| MMLU (5-shot) | 63.6 |
| ARC (25-shot) | 66.0 |
| HellaSwag (10-shot) | 84.8 |
| TruthfulQA (0-shot) | 53.8 |
| Avg. | 67.0 |
We use state-of-the-art [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) to run the benchmark tests above.
## Model Details
* **Trained by**: Cole Hunter & Ariel Lee
* **Model type:** **GPlatty-30B** is an auto-regressive language model based on the LLaMA transformer architecture.
* **Language(s)**: English
* **License for base weights**: License for the base LLaMA model's weights is Meta's [non-commercial bespoke license](https://github.com/facebookresearch/llama/blob/main/MODEL_CARD.md).
| Hyperparameter | Value |
|---------------------------|-------|
| \\(n_\text{parameters}\\) | 33B |
| \\(d_\text{model}\\) | 6656 |
| \\(n_\text{layers}\\) | 60 |
| \\(n_\text{heads}\\) | 52 |
## Reproducing Evaluation Results
Install LM Evaluation Harness:
```bash
git clone https://github.com/EleutherAI/lm-evaluation-harness
cd lm-evaluation-harness
pip install -e .
```
Each task was evaluated on a single A100 80GB GPU.
ARC:
```bash
python main.py --model hf-causal-experimental --model_args pretrained=lilloukas/GPlatty-30B --tasks arc_challenge --batch_size 1 --no_cache --write_out --output_path results/Platypus-30B/arc_challenge_25shot.json --device cuda --num_fewshot 25
```
HellaSwag:
```bash
python main.py --model hf-causal-experimental --model_args pretrained=lilloukas/GPlatty-30B --tasks hellaswag --batch_size 1 --no_cache --write_out --output_path results/Platypus-30B/hellaswag_10shot.json --device cuda --num_fewshot 10
```
MMLU:
```bash
python main.py --model hf-causal-experimental --model_args pretrained=lilloukas/GPlatty-30B --tasks hendrycksTest-* --batch_size 1 --no_cache --write_out --output_path results/Platypus-30B/mmlu_5shot.json --device cuda --num_fewshot 5
```
TruthfulQA:
```bash
python main.py --model hf-causal-experimental --model_args pretrained=lilloukas/GPlatty-30B --tasks truthfulqa_mc --batch_size 1 --no_cache --write_out --output_path results/Platypus-30B/truthfulqa_0shot.json --device cuda
```
## Limitations and bias
The base LLaMA model is trained on various data, some of which may contain offensive, harmful, and biased content that can lead to toxic behavior. See Section 5.1 of the LLaMA paper. We have not performed any studies to determine how fine-tuning on the aforementioned datasets affects the model's behavior and toxicity. Do not treat chat responses from this model as a substitute for human judgment or as a source of truth. Please use responsibly.
## Citations
```bibtex
@article{touvron2023llama,
title={LLaMA: Open and Efficient Foundation Language Models},
author={Touvron, Hugo and Lavril, Thibaut and Izacard, Gautier and Martinet, Xavier and Lachaux, Marie-Anne and Lacroix, Timoth{\'e}e and Rozi{\`e}re, Baptiste and Goyal, Naman and Hambro, Eric and Azhar, Faisal and Rodriguez, Aurelien and Joulin, Armand and Grave, Edouard and Lample, Guillaume},
journal={arXiv preprint arXiv:2302.13971},
year={2023}
}
@article{hu2021lora,
title={LoRA: Low-Rank Adaptation of Large Language Models},
author={Hu, Edward J. and Shen, Yelong and Wallis, Phillip and Allen-Zhu, Zeyuan and Li, Yuanzhi and Wang, Shean and Chen, Weizhu},
journal={CoRR},
year={2021}
}
```
|
sxandie/NER2.0.2-BIOESdataset
|
sxandie
| 2023-06-29T00:00:38Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"tensorboard",
"bert",
"token-classification",
"generated_from_keras_callback",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-06-27T23:56:38Z |
---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: sxandie/NER2.0.2-BIOESdataset
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# sxandie/NER2.0.2-BIOESdataset
This model is a fine-tuned version of [deepset/gbert-base](https://huggingface.co/deepset/gbert-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1285
- Validation Loss: 0.2118
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'inner_optimizer': {'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 35640, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.3920 | 0.2882 | 0 |
| 0.2349 | 0.2590 | 1 |
| 0.1824 | 0.2267 | 2 |
| 0.1492 | 0.2197 | 3 |
| 0.1285 | 0.2118 | 4 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.2.2
- Tokenizers 0.13.3
|
Sam12111/va
|
Sam12111
| 2023-06-28T23:56:53Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-28T23:55:26Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: va
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# va
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
LanguageMachines/stable-diffusion-2-1
|
LanguageMachines
| 2023-06-28T23:54:34Z | 7 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"text-to-image",
"arxiv:2112.10752",
"arxiv:2202.00512",
"arxiv:1910.09700",
"license:openrail++",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-28T20:36:45Z |
---
license: openrail++
tags:
- stable-diffusion
- text-to-image
pinned: true
duplicated_from: stabilityai/stable-diffusion-2-1
---
# Stable Diffusion v2-1 Model Card
This model card focuses on the model associated with the Stable Diffusion v2-1 model, codebase available [here](https://github.com/Stability-AI/stablediffusion).
This `stable-diffusion-2-1` model is fine-tuned from [stable-diffusion-2](https://huggingface.co/stabilityai/stable-diffusion-2) (`768-v-ema.ckpt`) with an additional 55k steps on the same dataset (with `punsafe=0.1`), and then fine-tuned for another 155k extra steps with `punsafe=0.98`.
- Use it with the [`stablediffusion`](https://github.com/Stability-AI/stablediffusion) repository: download the `v2-1_768-ema-pruned.ckpt` [here](https://huggingface.co/stabilityai/stable-diffusion-2-1/blob/main/v2-1_768-ema-pruned.ckpt).
- Use it with 🧨 [`diffusers`](#examples)
## Model Details
- **Developed by:** Robin Rombach, Patrick Esser
- **Model type:** Diffusion-based text-to-image generation model
- **Language(s):** English
- **License:** [CreativeML Open RAIL++-M License](https://huggingface.co/stabilityai/stable-diffusion-2/blob/main/LICENSE-MODEL)
- **Model Description:** This is a model that can be used to generate and modify images based on text prompts. It is a [Latent Diffusion Model](https://arxiv.org/abs/2112.10752) that uses a fixed, pretrained text encoder ([OpenCLIP-ViT/H](https://github.com/mlfoundations/open_clip)).
- **Resources for more information:** [GitHub Repository](https://github.com/Stability-AI/).
- **Cite as:**
```bibtex
@InProceedings{Rombach_2022_CVPR,
    author    = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn},
    title     = {High-Resolution Image Synthesis With Latent Diffusion Models},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2022},
    pages     = {10684-10695}
}
```
## Examples
Use the [🤗 Diffusers library](https://github.com/huggingface/diffusers) to run Stable Diffusion 2 in a simple and efficient manner.
```bash
pip install diffusers transformers accelerate scipy safetensors
```
Running the pipeline (if you don't swap the scheduler it will run with the default DDIM; in this example we swap it to DPMSolverMultistepScheduler):
```python
import torch
from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler
model_id = "stabilityai/stable-diffusion-2-1"
# Use the DPMSolverMultistepScheduler (DPM-Solver++) scheduler here instead
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
pipe = pipe.to("cuda")
prompt = "a photo of an astronaut riding a horse on mars"
image = pipe(prompt).images[0]
image.save("astronaut_rides_horse.png")
```
**Notes**:
- Despite not being a dependency, we highly recommend installing [xformers](https://github.com/facebookresearch/xformers) for memory-efficient attention (better performance).
- If you have low GPU RAM available, make sure to call `pipe.enable_attention_slicing()` after moving the pipeline to `cuda` to reduce VRAM usage (at the cost of speed); see the sketch below.
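A minimal sketch of both options, applied to the `pipe` created in the example above (the first call assumes the xformers package is installed; skip it otherwise):
```python
# Optional memory/performance tweaks for the pipeline created above.
pipe.enable_xformers_memory_efficient_attention()  # assumes xformers is installed
pipe.enable_attention_slicing()                    # lowers peak VRAM at a small speed cost
image = pipe("a photo of an astronaut riding a horse on mars").images[0]
```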
# Uses
## Direct Use
The model is intended for research purposes only. Possible research areas and tasks include
- Safe deployment of models which have the potential to generate harmful content.
- Probing and understanding the limitations and biases of generative models.
- Generation of artworks and use in design and other artistic processes.
- Applications in educational or creative tools.
- Research on generative models.
Excluded uses are described below.
### Misuse, Malicious Use, and Out-of-Scope Use
_Note: This section is originally taken from the [DALLE-MINI model card](https://huggingface.co/dalle-mini/dalle-mini), was used for Stable Diffusion v1, but applies in the same way to Stable Diffusion v2_.
The model should not be used to intentionally create or disseminate images that create hostile or alienating environments for people. This includes generating images that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes.
#### Out-of-Scope Use
The model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model.
#### Misuse and Malicious Use
Using the model to generate content that is cruel to individuals is a misuse of this model. This includes, but is not limited to:
- Generating demeaning, dehumanizing, or otherwise harmful representations of people or their environments, cultures, religions, etc.
- Intentionally promoting or propagating discriminatory content or harmful stereotypes.
- Impersonating individuals without their consent.
- Sexual content without consent of the people who might see it.
- Mis- and disinformation
- Representations of egregious violence and gore
- Sharing of copyrighted or licensed material in violation of its terms of use.
- Sharing content that is an alteration of copyrighted or licensed material in violation of its terms of use.
## Limitations and Bias
### Limitations
- The model does not achieve perfect photorealism
- The model cannot render legible text
- The model does not perform well on more difficult tasks which involve compositionality, such as rendering an image corresponding to “A red cube on top of a blue sphere”
- Faces and people in general may not be generated properly.
- The model was trained mainly with English captions and will not work as well in other languages.
- The autoencoding part of the model is lossy
- The model was trained on a subset of the large-scale dataset
[LAION-5B](https://laion.ai/blog/laion-5b/), which contains adult, violent and sexual content. To partially mitigate this, we have filtered the dataset using LAION's NSFW detector (see Training section).
### Bias
While the capabilities of image generation models are impressive, they can also reinforce or exacerbate social biases.
Stable Diffusion was primarily trained on subsets of [LAION-2B(en)](https://laion.ai/blog/laion-5b/),
which consists of images that are limited to English descriptions.
Texts and images from communities and cultures that use other languages are likely to be insufficiently accounted for.
This affects the overall output of the model, as white and western cultures are often set as the default. Further, the
ability of the model to generate content with non-English prompts is significantly worse than with English-language prompts.
Stable Diffusion v2 mirrors and exacerbates biases to such a degree that viewer discretion must be advised irrespective of the input or its intent.
## Training
**Training Data**
The model developers used the following dataset for training the model:
- LAION-5B and subsets (details below). The training data is further filtered using LAION's NSFW detector, with a "p_unsafe" score of 0.1 (conservative). For more details, please refer to LAION-5B's [NeurIPS 2022](https://openreview.net/forum?id=M3Y74vmsMcY) paper and reviewer discussions on the topic.
**Training Procedure**
Stable Diffusion v2 is a latent diffusion model which combines an autoencoder with a diffusion model that is trained in the latent space of the autoencoder. During training,
- Images are encoded through an encoder, which turns images into latent representations. The autoencoder uses a relative downsampling factor of 8 and maps images of shape H x W x 3 to latents of shape H/f x W/f x 4
- Text prompts are encoded through the OpenCLIP-ViT/H text-encoder.
- The output of the text encoder is fed into the UNet backbone of the latent diffusion model via cross-attention.
- The loss is a reconstruction objective between the noise that was added to the latent and the prediction made by the UNet. We also use the so-called _v-objective_ (sketched just below); see https://arxiv.org/abs/2202.00512.
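For reference, a compact sketch of that v-prediction objective as defined in the linked paper; the \(\alpha_t\)/\(\sigma_t\) signal- and noise-schedule notation is our shorthand, not something stated in this card:
```latex
% v-objective (Salimans & Ho, 2022), with noised latent z_t = \alpha_t x + \sigma_t \epsilon
v \equiv \alpha_t \epsilon - \sigma_t x, \qquad
L = \mathbb{E}_{x,\epsilon,t}\big[\,\lVert \hat{v}_\theta(z_t, t, c) - v \rVert_2^2\,\big]
```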
We currently provide the following checkpoints:
- `512-base-ema.ckpt`: 550k steps at resolution `256x256` on a subset of [LAION-5B](https://laion.ai/blog/laion-5b/) filtered for explicit pornographic material, using the [LAION-NSFW classifier](https://github.com/LAION-AI/CLIP-based-NSFW-Detector) with `punsafe=0.1` and an [aesthetic score](https://github.com/christophschuhmann/improved-aesthetic-predictor) >= `4.5`.
850k steps at resolution `512x512` on the same dataset with resolution `>= 512x512`.
- `768-v-ema.ckpt`: Resumed from `512-base-ema.ckpt` and trained for 150k steps using a [v-objective](https://arxiv.org/abs/2202.00512) on the same dataset. Resumed for another 140k steps on a `768x768` subset of our dataset.
- `512-depth-ema.ckpt`: Resumed from `512-base-ema.ckpt` and finetuned for 200k steps. Added an extra input channel to process the (relative) depth prediction produced by [MiDaS](https://github.com/isl-org/MiDaS) (`dpt_hybrid`) which is used as an additional conditioning.
The additional input channels of the U-Net which process this extra information were zero-initialized.
- `512-inpainting-ema.ckpt`: Resumed from `512-base-ema.ckpt` and trained for another 200k steps. Follows the mask-generation strategy presented in [LAMA](https://github.com/saic-mdal/lama) which, in combination with the latent VAE representations of the masked image, are used as an additional conditioning.
The additional input channels of the U-Net which process this extra information were zero-initialized. The same strategy was used to train the [1.5-inpainting checkpoint](https://huggingface.co/runwayml/stable-diffusion-inpainting).
- `x4-upscaling-ema.ckpt`: Trained for 1.25M steps on a 10M subset of LAION containing images `>2048x2048`. The model was trained on crops of size `512x512` and is a text-guided [latent upscaling diffusion model](https://arxiv.org/abs/2112.10752).
In addition to the textual input, it receives a `noise_level` as an input parameter, which can be used to add noise to the low-resolution input according to a [predefined diffusion schedule](configs/stable-diffusion/x4-upscaling.yaml).
- **Hardware:** 32 x 8 x A100 GPUs
- **Optimizer:** AdamW
- **Gradient Accumulations**: 1
- **Batch:** 32 x 8 x 2 x 4 = 2048
- **Learning rate:** warmup to 0.0001 for 10,000 steps and then kept constant
## Evaluation Results
Evaluations with different classifier-free guidance scales (1.5, 2.0, 3.0, 4.0,
5.0, 6.0, 7.0, 8.0) and 50 steps DDIM sampling steps show the relative improvements of the checkpoints:

Evaluated using 50 DDIM steps and 10000 random prompts from the COCO2017 validation set, evaluated at 512x512 resolution. Not optimized for FID scores.
## Environmental Impact
**Stable Diffusion v1** **Estimated Emissions**
Based on that information, we estimate the following CO2 emissions using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). The hardware, runtime, cloud provider, and compute region were utilized to estimate the carbon impact.
- **Hardware Type:** A100 PCIe 40GB
- **Hours used:** 200000
- **Cloud Provider:** AWS
- **Compute Region:** US-east
- **Carbon Emitted (Power consumption x Time x Carbon produced based on location of power grid):** 15000 kg CO2 eq.
## Citation
@InProceedings{Rombach_2022_CVPR,
author = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn},
title = {High-Resolution Image Synthesis With Latent Diffusion Models},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2022},
pages = {10684-10695}
}
*This model card was written by: Robin Rombach, Patrick Esser and David Ha and is based on the [Stable Diffusion v1](https://github.com/CompVis/stable-diffusion/blob/main/Stable_Diffusion_v1_Model_Card.md) and [DALL-E Mini model card](https://huggingface.co/dalle-mini/dalle-mini).*
|
anas21/English4SpeechToTextModel
|
anas21
| 2023-06-28T23:52:38Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-06-25T18:02:44Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: English4SpeechToTextModel
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# English4SpeechToTextModel
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on an unspecified dataset.
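A minimal inference sketch (untested; it assumes this repository includes the tokenizer and feature-extractor files the ASR pipeline needs):
```python
from transformers import pipeline

# "sample.wav" is a placeholder for a 16 kHz mono recording.
asr = pipeline("automatic-speech-recognition", model="anas21/English4SpeechToTextModel")
print(asr("sample.wav")["text"])
```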
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.1
- train_batch_size: 15
- eval_batch_size: 8
- seed: 8
- gradient_accumulation_steps: 15
- total_train_batch_size: 225
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
allenchienxxx/q-FrozenLake-v1-4x4-noSlippery
|
allenchienxxx
| 2023-06-28T23:48:04Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-28T23:48:01Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym  # or `import gymnasium as gym`, depending on your setup

# `load_from_hub` is the helper used to download the pickled Q-table from the Hub.
model = load_from_hub(repo_id="allenchienxxx/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
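A rough greedy-rollout sketch, continuing from the snippet above. Assumptions (not stated in this card): the pickled dict exposes a `"qtable"` array of shape `(n_states, n_actions)`, and the Gymnasium-style 5-tuple step API is in use.
```python
import numpy as np

state, info = env.reset()
done = False
while not done:
    action = int(np.argmax(model["qtable"][state]))  # act greedily on the learned Q-values
    state, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
```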
|
Mozzipa/qlora-koalpaca-polyglot-12.8b-50step
|
Mozzipa
| 2023-06-28T23:43:40Z | 2 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-06-28T23:43:37Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
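A minimal loading sketch that mirrors the config above. The base model name below is an assumption for illustration only; this card does not state which Polyglot-12.8B checkpoint the adapter was trained on.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "beomi/KoAlpaca-Polyglot-12.8B"  # assumed base model; replace with the one actually used

# Reconstruct the 4-bit quantization config listed above (requires bitsandbytes and accelerate).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base = AutoModelForCausalLM.from_pretrained(base_id, quantization_config=bnb_config, device_map="auto")
model = PeftModel.from_pretrained(base, "Mozzipa/qlora-koalpaca-polyglot-12.8b-50step")
tokenizer = AutoTokenizer.from_pretrained(base_id)
```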
### Framework versions
- PEFT 0.4.0.dev0
|
YakovElm/MariaDB_20_BERT_Over_Sampling
|
YakovElm
| 2023-06-28T22:32:16Z | 62 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-28T22:29:47Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: MariaDB_20_BERT_Over_Sampling
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# MariaDB_20_BERT_Over_Sampling
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0346
- Train Accuracy: 0.9902
- Validation Loss: 0.2078
- Validation Accuracy: 0.9698
- Epoch: 2
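A minimal TensorFlow inference sketch (untested; assumes the repository ships its tokenizer, and note that the label mapping is not documented in this card):
```python
import tensorflow as tf
from transformers import AutoTokenizer, TFBertForSequenceClassification

repo = "YakovElm/MariaDB_20_BERT_Over_Sampling"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = TFBertForSequenceClassification.from_pretrained(repo)

inputs = tokenizer("Example issue text goes here.", return_tensors="tf")
probs = tf.nn.softmax(model(**inputs).logits, axis=-1)
print(probs.numpy())  # meaning of each class index is not documented here
```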
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.4262 | 0.7938 | 0.1451 | 0.9598 | 0 |
| 0.1324 | 0.9535 | 0.1764 | 0.9698 | 1 |
| 0.0346 | 0.9902 | 0.2078 | 0.9698 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
MazVer/SweetHan
|
MazVer
| 2023-06-28T22:31:41Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-28T22:27:51Z |
---
license: creativeml-openrail-m
---
|
SandyDelMar/bert-base-spanish-wwm-uncased-finetuned-MeIA-AnalisisDeSentimientos
|
SandyDelMar
| 2023-06-28T22:28:40Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-28T21:45:10Z |
---
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: bert-base-spanish-wwm-uncased-finetuned-MeIA-AnalisisDeSentimientos
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-spanish-wwm-uncased-finetuned-MeIA-AnalisisDeSentimientos
This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-uncased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0555
- F1: 0.5846
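A minimal inference sketch (untested; assumes the repository ships its tokenizer, and note that the mapping from numeric labels to sentiment classes is not documented here):
```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="SandyDelMar/bert-base-spanish-wwm-uncased-finetuned-MeIA-AnalisisDeSentimientos",
)
print(clf("La comida estuvo excelente y el servicio fue muy amable."))
```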
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.978 | 1.0 | 766 | 0.9611 | 0.5685 |
| 0.8063 | 2.0 | 1532 | 0.9708 | 0.5824 |
| 0.5953 | 3.0 | 2298 | 1.0555 | 0.5846 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
eskalofi/liziwess
|
eskalofi
| 2023-06-28T22:16:30Z | 30 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-28T22:10:11Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### liziwess Dreambooth model trained by eskalofi with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
michaelfeil/ct2fast-mpt-30b
|
michaelfeil
| 2023-06-28T22:14:21Z | 8 | 3 |
transformers
|
[
"transformers",
"mpt",
"text-generation",
"ctranslate2",
"int8",
"float16",
"Composer",
"MosaicML",
"llm-foundry",
"StreamingDatasets",
"custom_code",
"dataset:allenai/c4",
"dataset:mc4",
"dataset:togethercomputer/RedPajama-Data-1T",
"dataset:bigcode/the-stack-dedup",
"dataset:allenai/s2orc",
"arxiv:2108.12409",
"arxiv:2302.13971",
"arxiv:2205.14135",
"arxiv:2010.04245",
"arxiv:1909.08053",
"arxiv:2302.06675",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2023-06-23T15:55:16Z |
---
license: apache-2.0
tags:
- ctranslate2
- int8
- float16
- Composer
- MosaicML
- llm-foundry
- StreamingDatasets
datasets:
- allenai/c4
- mc4
- togethercomputer/RedPajama-Data-1T
- bigcode/the-stack-dedup
- allenai/s2orc
inference: false
---
# Fast-Inference with Ctranslate2
Speedup inference while reducing memory by 2x-4x using int8 inference in C++ on CPU or GPU.
quantized version of [mosaicml/mpt-30b](https://huggingface.co/mosaicml/mpt-30b)
```bash
pip install hf-hub-ctranslate2>=2.12.0 ctranslate2>=3.16.0
```
```python
# from transformers import AutoTokenizer
model_name = "michaelfeil/ct2fast-mpt-30b"
from hf_hub_ctranslate2 import GeneratorCT2fromHfHub
model = GeneratorCT2fromHfHub(
# load in int8 on CUDA
model_name_or_path=model_name,
device="cuda",
compute_type="int8_float16",
# tokenizer=AutoTokenizer.from_pretrained("{ORG}/{NAME}")
)
outputs = model.generate(
text=["def fibonnaci(", "User: How are you doing? Bot:"],
max_length=64,
include_prompt_in_result=False
)
print(outputs)
```
Checkpoint compatible to [ctranslate2>=3.16.0](https://github.com/OpenNMT/CTranslate2)
and [hf-hub-ctranslate2>=2.12.0](https://github.com/michaelfeil/hf-hub-ctranslate2)
- `compute_type=int8_float16` for `device="cuda"`
- `compute_type=int8` for `device="cpu"`
Converted on 2023-06-23 using
```
ct2-transformers-converter --model mosaicml/mpt-30b --output_dir ~/tmp-ct2fast-mpt-30b --force --copy_files tokenizer.json README.md tokenizer_config.json generation_config.json special_tokens_map.json .gitattributes --quantization int8_float16 --trust_remote_code
```
# Licence and other remarks:
This is just a quantized version. Licence conditions are intended to be identical to the original Hugging Face repo.
# Original description
# MPT-30B
MPT-30B is a decoder-style transformer pretrained from scratch on 1T tokens of English text and code.
This model was trained by [MosaicML](https://www.mosaicml.com).
MPT-30B is part of the family of Mosaic Pretrained Transformer (MPT) models, which use a modified transformer architecture optimized for efficient training and inference.
MPT-30B comes with special features that differentiate it from other LLMs, including an 8k token context window (which can be further extended via finetuning; see [MPT-7B-StoryWriter](https://huggingface.co/mosaicml/mpt-7b-storywriter)), support for context-length extrapolation via [ALiBi](https://arxiv.org/abs/2108.12409), and efficient inference + training via FlashAttention. It also has strong coding abilities thanks to its pretraining mix. MPT models can also be served efficiently with both standard HuggingFace pipelines and NVIDIA's [FasterTransformer](https://github.com/NVIDIA/FasterTransformer).
The size of MPT-30B was also specifically chosen to make it easy to deploy on a single GPU—either 1xA100-80GB in 16-bit precision or 1xA100-40GB in 8-bit precision.
This model uses the MosaicML LLM codebase, which can be found in the [llm-foundry repository](https://github.com/mosaicml/llm-foundry). It was trained by MosaicML’s NLP team on the [MosaicML platform](https://www.mosaicml.com/training) for LLM pretraining, finetuning, and inference.
### How is this model different?
MPT-30B is:
* **Licensed for the possibility of commercial use** (unlike [LLaMA](https://arxiv.org/abs/2302.13971)).
* **Trained on a large amount of data** (1T tokens like [LLaMA](https://arxiv.org/abs/2302.13971) vs. 300B for [Pythia](https://github.com/EleutherAI/pythia), 300B for [OpenLLaMA](https://github.com/openlm-research/open_llama), and 800B for [StableLM](https://github.com/Stability-AI/StableLM)).
* **Prepared to handle extremely long inputs** thanks to [ALiBi](https://arxiv.org/abs/2108.12409).
* **Capable of fast training and inference** (via [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf) and [FasterTransformer](https://github.com/NVIDIA/FasterTransformer))
* **Equipped with highly efficient open-source training code** via the [llm-foundry repository](https://github.com/mosaicml/llm-foundry)
### Models finetuned off MPT-30B:
The following models are finetuned on MPT-30B:
* [MPT-30B-Instruct](https://huggingface.co/mosaicml/mpt-30b-instruct): a model for short-form instruction following.
Built by finetuning MPT-30B on several carefully curated datasets.
* License: _CC-By-NC-SA-3.0_
* [MPT-30B-Chat](https://huggingface.co/mosaicml/mpt-30b-chat): a chatbot-like model for dialogue generation.
Built by finetuning MPT-30B on [ShareGPT-Vicuna](https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered), [Camel-AI](https://huggingface.co/camel-ai),
[GPTeacher](https://github.com/teknium1/GPTeacher), [Guanaco](https://huggingface.co/datasets/timdettmers/openassistant-guanaco), [Baize](https://github.com/project-baize/baize-chatbot) and some generated datasets.
* License: _CC-By-NC-SA-4.0_
* [Demo on Hugging Face Spaces](https://huggingface.co/spaces/mosaicml/mpt-30b-chat)
## Model Date
June 22, 2023
## Model License
Apache-2.0
## Documentation
* [Blog post: MPT-30B: Raising the bar for open-source foundation models](https://www.mosaicml.com/blog/mpt-30b)
* [Codebase (mosaicml/llm-foundry repo)](https://github.com/mosaicml/llm-foundry/)
* Questions: Feel free to contact us via the [MosaicML Community Slack](https://mosaicml.me/slack)!
## How to Use
This model is best used with the MosaicML [llm-foundry repository](https://github.com/mosaicml/llm-foundry) for training and finetuning.
```python
import transformers
model = transformers.AutoModelForCausalLM.from_pretrained(
'mosaicml/mpt-30b',
trust_remote_code=True
)
```
Note: This model requires that `trust_remote_code=True` be passed to the `from_pretrained` method.
This is because we use a custom `MPT` model architecture that is not yet part of the Hugging Face `transformers` package.
`MPT` includes options for many training efficiency features such as [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf), [ALiBi](https://arxiv.org/abs/2108.12409), [QK LayerNorm](https://arxiv.org/abs/2010.04245), and more.
To use the optimized [triton implementation](https://github.com/openai/triton) of FlashAttention, you can load the model on GPU (`cuda:0`) with `attn_impl='triton'` and with `bfloat16` precision:
```python
import torch
import transformers
name = 'mosaicml/mpt-30b'
config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True)
config.attn_config['attn_impl'] = 'triton' # change this to use triton-based FlashAttention
config.init_device = 'cuda:0' # For fast initialization directly on GPU!
model = transformers.AutoModelForCausalLM.from_pretrained(
name,
config=config,
torch_dtype=torch.bfloat16, # Load model weights in bfloat16
trust_remote_code=True
)
```
The model was initially trained with a sequence length of 4096, with an additional pretraining stage for sequence length adaptation up to 8192. However, ALiBi enables users to increase the maximum sequence length even further during finetuning and/or inference. For example:
```python
import transformers
name = 'mosaicml/mpt-30b'
config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True)
config.max_seq_len = 16384 # (input + output) tokens can now be up to 16384
model = transformers.AutoModelForCausalLM.from_pretrained(
name,
config=config,
trust_remote_code=True
)
```
This model was trained with the MPT-30B tokenizer which is identical to the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer.
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('mosaicml/mpt-30b')
```
The model can then be used, for example, within a text-generation pipeline.
Note: when running Torch modules in lower precision, it is best practice to use the [torch.autocast context manager](https://pytorch.org/docs/stable/amp.html).
```python
from transformers import pipeline
with torch.autocast('cuda', dtype=torch.bfloat16):
inputs = tokenizer('Here is a recipe for vegan banana bread:\n', return_tensors="pt").to('cuda')
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
# or using the HF pipeline
pipe = pipeline('text-generation', model=model, tokenizer=tokenizer, device='cuda:0')
with torch.autocast('cuda', dtype=torch.bfloat16):
print(
pipe('Here is a recipe for vegan banana bread:\n',
max_new_tokens=100,
do_sample=True,
use_cache=True))
```
## Model Description
The architecture is a modification of a standard decoder-only transformer.
The model has been modified from a standard transformer in the following ways:
* It uses [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf)
* It uses [ALiBi (Attention with Linear Biases)](https://arxiv.org/abs/2108.12409) and does not use positional embeddings
* It does not use biases
| Hyperparameter | Value |
|----------------|-------|
|n_parameters | 29.95B |
|n_layers | 48 |
| n_heads | 64 |
| d_model | 7168 |
| vocab size | 50432 |
| sequence length | 8192 |
## Training Data
### Streaming Datasets
Data was formatted using the MosaicML [StreamingDataset](https://github.com/mosaicml/streaming) library to host our data in object storage and efficiently stream it to our compute cluster during training.
StreamingDataset obviates the need to download the whole dataset before starting training, and allows instant resumption of training from any point in the dataset.
### Data Mix
The model was trained for 1T tokens on the following data mix:
| Data Source | Number of Tokens in Source | Proportion | Effective Number of Tokens | Epochs |
|-------------|----------------------------|------------|----------------------------|--------|
| mC4 3.1.0 - English (200+ words) | 2417.99 B | 33.50% | 335 B | 0.14 |
| c4 - English - SemDedup 80% | 100.42 B | 29.90% | 299 B | 2.98 |
| RedPajama - CommonCrawl | 878.45 B | 8.50% | 85 B | 0.097 |
| The Stack - Selected Languages | 463.78 B | 10.00% | 100 B | 0.22 |
| RedPajama - Wikipedia | 4.87 B | 4.00% | 40 B | 8.21 |
| The Stack - Markdown | 107.07 B | 4.50% | 45 B | 0.42 |
| Semantic Scholar ORC | 48.95 B | 3.30% | 33 B | 0.67 |
| RedPajama - Books | 26.02 B | 3.00% | 30 B | 1.15 |
| RedPajama - arXiv | 28.10 B | 1.90% | 19 B | 0.68 |
| RedPajama - StackExchange | 20.54 B | 1.40% | 14 B |0.68 |
Samples for each batch were selected from one of the datasets with the probability specified above. The examples were shuffled within each dataset, and each example was constructed from as many sequences from that dataset as were necessary to fill the sequence length. To build 8k support into MPT-30B efficiently, we first pre-trained on 1T tokens using sequences that were 2k tokens long, and then trained for an additional 50B tokens using sequences that were 8k tokens long.
The data was tokenized using the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer. This BPE tokenizer has a number of desirable characteristics,
most of which are relevant for tokenizing code:
(1) It was trained on a diverse mix of data that includes code (The Pile)
(2) It applies consistent space delimitation, unlike the GPT2 tokenizer which tokenizes inconsistently depending on the presence of prefix spaces
(3) It contains tokens for repeated space characters, which allows superior compression of text with large amounts of repeated space characters.
The model vocabulary size of 50432 was set to be a multiple of 128 (as in [MEGATRON-LM](https://arxiv.org/abs/1909.08053)).
### Training Configuration
The model was trained in three stages using the [MosaicML Platform](https://www.mosaicml.com/platform):
(i) First it was trained on 440 A100-40GBs with a batch size of 1760.
(ii) Then, on 216 A100-40GBs with a batch size of 1728.
(iii) Training was completed on 256 H100-80GBs with a batch size of 512 with 8k context length and 50B tokens.
The model was trained with sharded data parallelism using [FSDP](https://pytorch.org/docs/stable/fsdp.html) and used the [LION](https://arxiv.org/abs/2302.06675) optimizer.
## Limitations and Biases
_The following language is modified from [EleutherAI's GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b)_
MPT-30B (Base) is **not** intended for deployment without finetuning.
It should not be used for human-facing interactions without further guardrails and user consent.
MPT-30B can produce factually incorrect output, and should not be relied on to produce factually accurate information.
MPT-30B was trained on various public datasets.
While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased or otherwise offensive outputs.
## MosaicML Platform
If you're interested in [training](https://www.mosaicml.com/training) and [deploying](https://www.mosaicml.com/inference) your own MPT or LLMs on the MosaicML Platform, [sign up here](https://forms.mosaicml.com/demo?utm_source=huggingface&utm_medium=referral&utm_campaign=mpt-30b).
## Disclaimer
The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.
## Citation
Please cite this model using the following format:
```
@online{MosaicML2023Introducing,
author = {MosaicML NLP Team},
title = {Introducing MPT-30B: Raising the bar
for open-source foundation models},
year = {2023},
url = {www.mosaicml.com/blog/mpt-30b},
note = {Accessed: 2023-06-22},
urldate = {2023-06-22}
}
```
|
michaelfeil/ct2fast-mpt-30b-instruct
|
michaelfeil
| 2023-06-28T22:13:29Z | 5 | 4 |
transformers
|
[
"transformers",
"mpt",
"text-generation",
"ctranslate2",
"int8",
"float16",
"Composer",
"MosaicML",
"llm-foundry",
"custom_code",
"arxiv:2205.14135",
"arxiv:2108.12409",
"license:cc-by-sa-3.0",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2023-06-23T07:22:01Z |
---
license: cc-by-sa-3.0
datasets:
- competition_math
- conceptofmind/cot_submix_original/cot_gsm8k
- knkarthick/dialogsum
- mosaicml/dolly_hhrlhf
- duorc
- tau/scrolls/qasper
- emozilla/quality
- scrolls/summ_screen_fd
- spider
tags:
- ctranslate2
- int8
- float16
- Composer
- MosaicML
- llm-foundry
inference: false
---
# Fast-Inference with Ctranslate2
Speedup inference while reducing memory by 2x-4x using int8 inference in C++ on CPU or GPU.
quantized version of [mosaicml/mpt-30b-instruct](https://huggingface.co/mosaicml/mpt-30b-instruct)
```bash
pip install hf-hub-ctranslate2>=2.12.0 ctranslate2>=3.16.0
```
```python
# from transformers import AutoTokenizer
model_name = "michaelfeil/ct2fast-mpt-30b-instruct"
from hf_hub_ctranslate2 import GeneratorCT2fromHfHub
model = GeneratorCT2fromHfHub(
# load in int8 on CUDA
model_name_or_path=model_name,
device="cuda",
compute_type="int8_float16",
# tokenizer=AutoTokenizer.from_pretrained("{ORG}/{NAME}")
)
outputs = model.generate(
text=["def fibonnaci(", "User: How are you doing? Bot:"],
max_length=64,
include_prompt_in_result=False
)
print(outputs)
```
Checkpoint compatible to [ctranslate2>=3.16.0](https://github.com/OpenNMT/CTranslate2)
and [hf-hub-ctranslate2>=2.12.0](https://github.com/michaelfeil/hf-hub-ctranslate2)
- `compute_type=int8_float16` for `device="cuda"`
- `compute_type=int8` for `device="cpu"`
Converted on 2023-06-23 using
```
ct2-transformers-converter --model mosaicml/mpt-30b-instruct --output_dir ~/tmp-ct2fast-mpt-30b-instruct --force --copy_files tokenizer.json README.md tokenizer_config.json generation_config.json special_tokens_map.json .gitattributes --quantization int8_float16 --trust_remote_code
```
# Licence and other remarks:
This is just a quantized version. Licence conditions are intended to be identical to the original Hugging Face repo.
# Original description
# MPT-30B-Instruct
MPT-30B-Instruct is a model for short-form instruction following.
It is built by finetuning [MPT-30B](https://huggingface.co/mosaicml/mpt-30b) on [Dolly HHRLHF](https://huggingface.co/datasets/mosaicml/dolly_hhrlhf) derived from the [Databricks Dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k) and the [Anthropic Helpful and Harmless (HH-RLHF)](https://huggingface.co/datasets/Anthropic/hh-rlhf) datasets. It is also trained on [Competition Math](https://huggingface.co/datasets/competition_math), [Duorc](https://huggingface.co/datasets/duorc), [CoT GSM8k](https://huggingface.co/datasets/conceptofmind/cot_submix_original), [Qasper](https://huggingface.co/datasets/allenai/qasper), [Quality](https://huggingface.co/datasets/emozilla/quality), [Summ Screen FD](https://huggingface.co/datasets/tau/scrolls) and [Spider](https://huggingface.co/datasets/spider).
* License: _CC-By-SA-3.0_
This model was trained by [MosaicML](https://www.mosaicml.com) and follows a modified decoder-only transformer architecture.
## Model Date
June 22, 2023
## Model License
CC-By-SA-3.0
## Documentation
* [Blog post: Raising the bar for open-source foundation models](https://www.mosaicml.com/blog/mpt-30b)
* [Codebase (mosaicml/llm-foundry repo)](https://github.com/mosaicml/llm-foundry/)
* Questions: Feel free to contact us via the [MosaicML Community Slack](https://mosaicml.me/slack)!
### Example Question/Instruction
**Bespokenizer46**
> I need to convince my friend, Phyllis, that she should train a custom LLM for her Fortune 500 company using the MosaicML Platform.
> Please write an email that explains why MosaicML's emphasis on cutting edge methodology, data privacy, and efficiency are so important.
> End the email with a friendly inquiry about Phyllis's family.
**MPT-30B-Instruct**:
> Phyllis -
> I know you're always looking for ways to improve efficiency at work and I think I've found a solution that you'll be very interested in.
> LLMs are only as good as the data they're trained on and the training process can be very time consuming and resource intensive. MosaicML uses cutting edge methodology to efficiently train high quality models without compromising data privacy.
> They also provide tools to easily connect to and use the model in your daily workflow.
> I think you'd really enjoy speaking with their founder, we can set up a call if you're interested.
> Also, I know it's been a tough year for your family, how are things?
> Best,
> Your Friend
## How to Use
Note: This model requires that `trust_remote_code=True` be passed to the `from_pretrained` method. This is because we use a custom model architecture that is not yet part of the `transformers` package.
It includes options for many training efficiency features such as [FlashAttention (Dao et al. 2022)](https://arxiv.org/pdf/2205.14135.pdf), [ALiBi](https://arxiv.org/abs/2108.12409), QK LayerNorm, and more.
```python
import transformers
model = transformers.AutoModelForCausalLM.from_pretrained(
'mosaicml/mpt-30b-instruct',
trust_remote_code=True
)
```
To use the optimized [triton implementation](https://github.com/openai/triton) of FlashAttention, you can load the model on GPU (`cuda:0`) with `attn_impl='triton'` and with `bfloat16` precision:
```python
import torch
import transformers
name = 'mosaicml/mpt-30b-instruct'
config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True)
config.attn_config['attn_impl'] = 'triton' # change this to use triton-based FlashAttention
config.init_device = 'cuda:0' # For fast initialization directly on GPU!
model = transformers.AutoModelForCausalLM.from_pretrained(
name,
config=config,
torch_dtype=torch.bfloat16, # Load model weights in bfloat16
trust_remote_code=True
)
```
The model was trained initially on a sequence length of 2048. An additional pre-training phase was included for sequence length adaptation to 8192. However, ALiBi further enables users to increase the maximum sequence length during finetuning and/or inference. For example:
```python
import transformers
name = 'mosaicml/mpt-30b-instruct'
config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True)
config.max_seq_len = 16384 # (input + output) tokens can now be up to 16384
model = transformers.AutoModelForCausalLM.from_pretrained(
name,
config=config,
trust_remote_code=True
)
```
This model was trained with the MPT-30B tokenizer which is based on the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer and includes additional padding and eos tokens.
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('mosaicml/mpt-30b')
```
The model can then be used, for example, within a text-generation pipeline.
Note: when running Torch modules in lower precision, it is best practice to use the [torch.autocast context manager](https://pytorch.org/docs/stable/amp.html).
```python
from transformers import pipeline
with torch.autocast('cuda', dtype=torch.bfloat16):
inputs = tokenizer('Here is a recipe for vegan banana bread:\n', return_tensors="pt").to('cuda')
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
# or using the HF pipeline
pipe = pipeline('text-generation', model=model, tokenizer=tokenizer, device='cuda:0')
with torch.autocast('cuda', dtype=torch.bfloat16):
print(
pipe('Here is a recipe for vegan banana bread:\n',
max_new_tokens=100,
do_sample=True,
use_cache=True))
```
### Formatting
This model was trained on data formatted as follows:
```python
def format_prompt(instruction):
template = "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n###Instruction\n{instruction}\n\n### Response\n"
return template.format(instruction=instruction)
example = "Tell me a funny joke.\nDon't make it too funny though."
fmt_ex = format_prompt(instruction=example)
```
In the above example, `fmt_ex` is ready to be tokenized and sent through the model.
## Model Description
The architecture is a modification of a standard decoder-only transformer.
The model has been modified from a standard transformer in the following ways:
* It uses [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf)
* It uses [ALiBi (Attention with Linear Biases)](https://arxiv.org/abs/2108.12409) and does not use positional embeddings
* It does not use biases
| Hyperparameter | Value |
|----------------|-------|
|n_parameters | 29.95B |
|n_layers | 48 |
| n_heads | 64 |
| d_model | 7168 |
| vocab size | 50432 |
| sequence length | 8192 |
## Data Mix
The model was trained on the following data mix:
| Data Source | Number of Tokens in Source | Proportion |
|-------------|----------------------------|------------|
| competition_math | 1.6 M | 3.01% |
| cot_gsm8k | 3.36 M | 6.32% |
| dialogsum | 0.1 M | 0.19% |
| dolly_hhrlhf | 5.89 M | 11.07% |
| duorc | 8.2 M | 15.51% |
| qasper | 10.97 M | 20.63% |
| quality | 11.31 M | 21.28% |
| scrolls/summ_screen_fd | 11.56 M | 21.82% |
| spider | 0.089 M | 0.16% |
## PreTraining Data
For more details on the pretraining process, see [MPT-30B](https://huggingface.co/mosaicml/mpt-30b).
The data was tokenized using the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer.
### Training Configuration
This model was trained on 72 A100 40GB GPUs for 8 hours using the [MosaicML Platform](https://www.mosaicml.com/platform).
The model was trained with sharded data parallelism using [FSDP](https://pytorch.org/docs/stable/fsdp.html) and used the AdamW optimizer.
## Limitations and Biases
_The following language is modified from [EleutherAI's GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b)_
MPT-30B-Instruct can produce factually incorrect output, and should not be relied on to produce factually accurate information.
MPT-30B-Instruct was trained on various public datasets.
While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased or otherwise offensive outputs.
## Acknowledgements
This model was finetuned by Sam Havens, Alex Trott, and the MosaicML NLP team
## MosaicML Platform
If you're interested in [training](https://www.mosaicml.com/training) and [deploying](https://www.mosaicml.com/inference) your own MPT or LLMs on the MosaicML Platform, [sign up here](https://forms.mosaicml.com/demo?utm_source=huggingface&utm_medium=referral&utm_campaign=mpt-30b).
## Disclaimer
The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.
## Citation
Please cite this model using the following format:
```
@online{MosaicML2023Introducing,
author = {MosaicML NLP Team},
title = {Introducing MPT-30B: Raising the bar
for open-source foundation models},
year = {2023},
url = {www.mosaicml.com/blog/mpt-30b},
note = {Accessed: 2023-06-22},
urldate = {2023-06-22}
}
```
|
michaelfeil/ct2fast-mpt-30b-chat
|
michaelfeil
| 2023-06-28T22:09:57Z | 8 | 2 |
transformers
|
[
"transformers",
"mpt",
"text-generation",
"ctranslate2",
"int8",
"float16",
"Composer",
"MosaicML",
"llm-foundry",
"custom_code",
"arxiv:2205.14135",
"arxiv:2108.12409",
"arxiv:2010.04245",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2023-06-23T18:02:28Z |
---
license: cc-by-nc-sa-4.0
datasets:
- camel-ai/code
- ehartford/wizard_vicuna_70k_unfiltered
- anon8231489123/ShareGPT_Vicuna_unfiltered
- teknium1/GPTeacher/roleplay-instruct-v2-final
- teknium1/GPTeacher/codegen-isntruct
- timdettmers/openassistant-guanaco
- camel-ai/math
- project-baize/baize-chatbot/medical_chat_data
- project-baize/baize-chatbot/quora_chat_data
- project-baize/baize-chatbot/stackoverflow_chat_data
- camel-ai/biology
- camel-ai/chemistry
- camel-ai/ai_society
- jondurbin/airoboros-gpt4-1.2
- LongConversations
- camel-ai/physics
tags:
- ctranslate2
- int8
- float16
- Composer
- MosaicML
- llm-foundry
inference: false
---
# Fast-Inference with Ctranslate2
Speedup inference while reducing memory by 2x-4x using int8 inference in C++ on CPU or GPU.
quantized version of [mosaicml/mpt-30b-chat](https://huggingface.co/mosaicml/mpt-30b-chat)
```bash
pip install hf-hub-ctranslate2>=2.12.0 ctranslate2>=3.16.0
```
```python
# from transformers import AutoTokenizer
model_name = "michaelfeil/ct2fast-mpt-30b-chat"
from hf_hub_ctranslate2 import GeneratorCT2fromHfHub
model = GeneratorCT2fromHfHub(
# load in int8 on CUDA
model_name_or_path=model_name,
device="cuda",
compute_type="int8_float16",
# tokenizer=AutoTokenizer.from_pretrained("{ORG}/{NAME}")
)
outputs = model.generate(
text=["def fibonnaci(", "User: How are you doing? Bot:"],
max_length=64,
include_prompt_in_result=False
)
print(outputs)
```
Checkpoint compatible to [ctranslate2>=3.16.0](https://github.com/OpenNMT/CTranslate2)
and [hf-hub-ctranslate2>=2.12.0](https://github.com/michaelfeil/hf-hub-ctranslate2)
- `compute_type=int8_float16` for `device="cuda"`
- `compute_type=int8` for `device="cpu"`
Converted on 2023-06-23 using
```
ct2-transformers-converter --model mosaicml/mpt-30b-chat --output_dir ~/tmp-ct2fast-mpt-30b-chat --force --copy_files tokenizer.json README.md tokenizer_config.json generation_config.json special_tokens_map.json .gitattributes --quantization int8_float16 --trust_remote_code
```
# Licence and other remarks:
This is just a quantized version. Licence conditions are intended to be identical to the original Hugging Face repo.
# Original description
# MPT-30B-Chat
MPT-30B-Chat is a chatbot-like model for dialogue generation.
It was built by finetuning [MPT-30B](https://huggingface.co/mosaicml/mpt-30b) on the [ShareGPT-Vicuna](https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered), [Camel-AI](https://huggingface.co/camel-ai),
[GPTeacher](https://github.com/teknium1/GPTeacher), [Guanaco](https://huggingface.co/datasets/timdettmers/openassistant-guanaco), [Baize](https://github.com/project-baize/baize-chatbot) and some generated datasets.
* License: _CC-By-NC-SA-4.0_ (non-commercial use only)
* [Demo on Hugging Face Spaces](https://huggingface.co/spaces/mosaicml/mpt-30b-chat)
This model was trained by [MosaicML](https://www.mosaicml.com) and follows a modified decoder-only transformer architecture.
## Model Date
June 22, 2023
## Model License
_CC-By-NC-SA-4.0_ (non-commercial use only)
## Documentation
* [Blog post: Raising the bar for open-source foundation models](https://www.mosaicml.com/blog/mpt-30b)
* [Codebase (mosaicml/llm-foundry repo)](https://github.com/mosaicml/llm-foundry/)
* Questions: Feel free to contact us via the [MosaicML Community Slack](https://mosaicml.me/slack)!
### Example Dialogue
**ksreenivasan**:
> I want you to come up with a tweet based on this summary of the article:
>
> We are excited to extend the MosaicML Foundation Series with MPT-30B, a new commercially licensed open-source model that is significantly more powerful, and outperforms the
> original GPT-3. In addition, we are releasing two fine-tuned models, MPT-30B-Instruct and MPT-30B-Chat, that are built on top of MPT-30B and excel at short-form instruction
> following and multi-turn conversations, respectively.
> All MPT-30B models come with special features that differentiate them from other LLMs, including an 8k token context window (which can be further extended via finetuning;
> see MPT-7B-StoryWriter), support for context-length extrapolation via ALiBi, and efficient inference + training performance via FlashAttention. It also has strong coding
> abilities thanks to its pretraining data mixture.
**MPT-30B-Chat**:
> "Revolutionize your AI capabilities with MosaicML's latest addition, MPT-30B! Outperforming GPT-3, this open-source model is significantly more powerful and comes with
> finely-tuned models for exceptional instruction following and multi-turn conversations. Unleash the potential of an 8k token context window, ALiBi support, and FlashAttention.
> Experience the future of AI with MosaicML! #AI #ML #innovation"
**ksreenivasan**:
> Is there anything you want to say to MosaicML about this exciting news?
**MPT-30B-Chat**:
> As your AI marketing assistant, I would say that this is truly exciting news for the AI community! MosaicML's latest addition, MPT-30B, is poised to revolutionize the industry with its powerful open-source capabilities and finely-tuned models for instruction following and multi-turn conversations.
> The special features, including the 8k token context window, ALiBi support, and FlashAttention, are sure to set MosaicML's MPT-30B apart from other LLMs.
> Congratulations to the MosaicML team on this incredible achievement!
## How to Use
This model is best used with the MosaicML [llm-foundry repository](https://github.com/mosaicml/llm-foundry) for training and finetuning.
```python
import transformers
model = transformers.AutoModelForCausalLM.from_pretrained(
'mosaicml/mpt-30b-chat',
trust_remote_code=True
)
```
Note: This model requires that `trust_remote_code=True` be passed to the `from_pretrained` method.
This is because we use a custom `MPT` model architecture that is not yet part of the Hugging Face `transformers` package.
`MPT` includes options for many training efficiency features such as [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf), [ALiBi](https://arxiv.org/abs/2108.12409), [QK LayerNorm](https://arxiv.org/abs/2010.04245), and more.
To use the optimized [triton implementation](https://github.com/openai/triton) of FlashAttention, you can load the model on GPU (`cuda:0`) with `attn_impl='triton'` and with `bfloat16` precision:
```python
import torch
import transformers
name = 'mosaicml/mpt-30b-chat'
config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True)
config.attn_config['attn_impl'] = 'triton' # change this to use triton-based FlashAttention
config.init_device = 'cuda:0' # For fast initialization directly on GPU!
model = transformers.AutoModelForCausalLM.from_pretrained(
name,
config=config,
torch_dtype=torch.bfloat16, # Load model weights in bfloat16
trust_remote_code=True
)
```
The model was initially trained with a sequence length of 4096, with an additional pretraining stage for sequence length adaptation up to 8192. However, ALiBi enables users to increase the maximum sequence length even further during finetuning and/or inference. For example:
```python
import transformers
name = 'mosaicml/mpt-30b-chat'
config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True)
config.max_seq_len = 16384 # (input + output) tokens can now be up to 16384
model = transformers.AutoModelForCausalLM.from_pretrained(
name,
config=config,
trust_remote_code=True
)
```
This model was trained with the MPT-30B tokenizer which is based on the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer and includes additional padding and eos tokens.
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('mosaicml/mpt-30b')
```
The model can then be used, for example, within a text-generation pipeline.
Note: when running Torch modules in lower precision, it is best practice to use the [torch.autocast context manager](https://pytorch.org/docs/stable/amp.html).
```python
from transformers import pipeline
with torch.autocast('cuda', dtype=torch.bfloat16):
inputs = tokenizer('Here is a recipe for vegan banana bread:\n', return_tensors="pt").to('cuda')
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
# or using the HF pipeline
pipe = pipeline('text-generation', model=model, tokenizer=tokenizer, device='cuda:0')
with torch.autocast('cuda', dtype=torch.bfloat16):
print(
pipe('Here is a recipe for vegan banana bread:\n',
max_new_tokens=100,
do_sample=True,
use_cache=True))
```
## Model Description
The architecture is a modification of a standard decoder-only transformer.
The model has been modified from a standard transformer in the following ways:
* It uses [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf)
* It uses [ALiBi (Attention with Linear Biases)](https://arxiv.org/abs/2108.12409) and does not use positional embeddings
* It does not use biases
| Hyperparameter | Value |
|----------------|-------|
|n_parameters | 29.95B |
|n_layers | 48 |
| n_heads | 64 |
| d_model | 7168 |
| vocab size | 50432 |
| sequence length | 8192 |
## Data Mix
The model was trained on the following data mix:
| Data Source | Number of Tokens in Source | Proportion |
|-------------|----------------------------|------------|
| Airoboros/GPT4 | 26.4M | 1.71% |
| Baize | 55.0M | 3.57% |
| Camel | 301M | 19.54% |
| GPTeacher | 7.56M | 0.49% |
| Guanaco | 15.6M | 1.02% |
| LongConversations | 18.4M | 1.19% |
| ShareGPT | 821M | 53.24% |
| WizardLM | 297M | 19.23% |
"LongConversations" is a GPT3.5/4-generated dataset, details of which will be released at a later date.
### Training Configuration
This model was trained on 64 H100s for about 7.6 hours using the [MosaicML Platform](https://www.mosaicml.com/platform).
The model was trained with sharded data parallelism using [FSDP](https://pytorch.org/docs/stable/fsdp.html) and used the AdamW optimizer.
## Limitations and Biases
_The following language is modified from [EleutherAI's GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b)_
MPT-30B-Chat can produce factually incorrect output, and should not be relied on to produce factually accurate information.
MPT-30B-Chat was trained on various public datasets.
While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased or otherwise offensive outputs.
## Acknowledgements
This model was finetuned by Sam Havens and the MosaicML NLP team
## Disclaimer
The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.
## MosaicML Platform
If you're interested in [training](https://www.mosaicml.com/training) and [deploying](https://www.mosaicml.com/inference) your own MPT or LLMs on the MosaicML Platform, [sign up here](https://forms.mosaicml.com/demo?utm_source=huggingface&utm_medium=referral&utm_campaign=mpt-7b).
## Citation
Please cite this model using the following format:
```
@online{MosaicML2023Introducing,
author = {MosaicML NLP Team},
title = {Introducing MPT-30B: Raising the bar
for open-source foundation models},
year = {2023},
url = {www.mosaicml.com/blog/mpt-30b},
note = {Accessed: 2023-06-22},
urldate = {2023-06-22}
}
```
|
LanguageMachines/blip2-flan-t5-xl
|
LanguageMachines
| 2023-06-28T22:04:12Z | 142 | 0 |
transformers
|
[
"transformers",
"pytorch",
"blip-2",
"visual-question-answering",
"vision",
"image-to-text",
"image-captioning",
"en",
"arxiv:2301.12597",
"arxiv:2210.11416",
"license:mit",
"endpoints_compatible",
"region:us"
] |
image-to-text
| 2023-06-28T21:51:38Z |
---
language: en
license: mit
tags:
- vision
- image-to-text
- image-captioning
- visual-question-answering
pipeline_tag: image-to-text
inference: false
duplicated_from: Salesforce/blip2-flan-t5-xl
---
# BLIP-2, Flan T5-xl, pre-trained only
BLIP-2 model, leveraging [Flan T5-xl](https://huggingface.co/google/flan-t5-xl) (a large language model).
It was introduced in the paper [BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models](https://arxiv.org/abs/2301.12597) by Li et al. and first released in [this repository](https://github.com/salesforce/LAVIS/tree/main/projects/blip2).
Disclaimer: The team releasing BLIP-2 did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
BLIP-2 consists of 3 models: a CLIP-like image encoder, a Querying Transformer (Q-Former) and a large language model.
The authors initialize the weights of the image encoder and large language model from pre-trained checkpoints and keep them frozen
while training the Querying Transformer, which is a BERT-like Transformer encoder that maps a set of "query tokens" to query embeddings,
which bridge the gap between the embedding space of the image encoder and the large language model.
The goal for the model is simply to predict the next text token, given the query embeddings and the previous text.
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/blip2_architecture.jpg"
alt="drawing" width="600"/>
This allows the model to be used for tasks like:
- image captioning
- visual question answering (VQA)
- chat-like conversations by feeding the image and the previous conversation as prompt to the model
## Direct Use and Downstream Use
You can use the raw model for conditional text generation given an image and optional text. See the [model hub](https://huggingface.co/models?search=Salesforce/blip) to look for
fine-tuned versions on a task that interests you.
## Bias, Risks, Limitations, and Ethical Considerations
BLIP2-FlanT5 uses off-the-shelf Flan-T5 as the language model. It inherits the same risks and limitations from [Flan-T5](https://arxiv.org/pdf/2210.11416.pdf):
> Language models, including Flan-T5, can potentially be used for language generation in a harmful way, according to Rae et al. (2021). Flan-T5 should not be used directly in any application, without a prior assessment of safety and fairness concerns specific to the application.
BLIP2 is fine-tuned on image-text datasets (e.g. [LAION](https://laion.ai/blog/laion-400-open-dataset/) ) collected from the internet. As a result the model itself is potentially vulnerable to generating equivalently inappropriate content or replicating inherent biases in the underlying data.
BLIP2 has not been tested in real world applications. It should not be directly deployed in any applications. Researchers should first carefully assess the safety and fairness of the model in relation to the specific context they’re being deployed within.
### How to use
For code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/blip-2#transformers.Blip2ForConditionalGeneration.forward.example).
#### Running the model on CPU
<details>
<summary> Click to expand </summary>
```python
import requests
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration
processor = Blip2Processor.from_pretrained("Salesforce/blip2-flan-t5-xl")
model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-flan-t5-xl")
img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')
question = "how many dogs are in the picture?"
inputs = processor(raw_image, question, return_tensors="pt")
out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
```
</details>
#### Running the model on GPU
##### In full precision
<details>
<summary> Click to expand </summary>
```python
# pip install accelerate
import requests
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration
processor = Blip2Processor.from_pretrained("Salesforce/blip2-flan-t5-xl")
model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-flan-t5-xl", device_map="auto")
img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')
question = "how many dogs are in the picture?"
inputs = processor(raw_image, question, return_tensors="pt").to("cuda")
out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
```
</details>
##### In half precision (`float16`)
<details>
<summary> Click to expand </summary>
```python
# pip install accelerate
import torch
import requests
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration
processor = Blip2Processor.from_pretrained("Salesforce/blip2-flan-t5-xl")
model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-flan-t5-xl", torch_dtype=torch.float16, device_map="auto")
img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')
question = "how many dogs are in the picture?"
inputs = processor(raw_image, question, return_tensors="pt").to("cuda", torch.float16)
out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
```
</details>
##### In 8-bit precision (`int8`)
<details>
<summary> Click to expand </summary>
```python
# pip install accelerate bitsandbytes
import torch
import requests
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration
processor = Blip2Processor.from_pretrained("Salesforce/blip2-flan-t5-xl")
model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-flan-t5-xl", load_in_8bit=True, device_map="auto")
img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')
question = "how many dogs are in the picture?"
inputs = processor(raw_image, question, return_tensors="pt").to("cuda", torch.float16)
out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
```
</details>
|
alitair/Taxi-v3
|
alitair
| 2023-06-28T21:55:46Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-28T21:55:43Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="alitair/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
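A minimal greedy-rollout sketch is shown below, continuing from the snippet above. It assumes the pickled dictionary exposes the Q-table under a `qtable` key (as in the Deep RL course notebook) and a Gymnasium-style step API; adjust the key and API if your checkpoint differs.
```python
import numpy as np
import gymnasium as gym

# Greedy evaluation sketch: the "qtable" key and the Gymnasium API are assumptions.
env = gym.make(model["env_id"])
state, info = env.reset()
done, total_reward = False, 0.0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # act greedily w.r.t. the Q-table
    state, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    done = terminated or truncated
print(f"Episode return: {total_reward}")
```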
|
fvonhoven/ControlNet-v1-1
|
fvonhoven
| 2023-06-28T21:53:39Z | 0 | 0 | null |
[
"license:openrail",
"endpoints_compatible",
"region:us"
] | null | 2023-06-28T21:51:50Z |
---
license: openrail
duplicated_from: lllyasviel/ControlNet-v1-1
---
These are the model files for [ControlNet 1.1](https://github.com/lllyasviel/ControlNet-v1-1-nightly).
This model card will be filled in a more detailed way after 1.1 is officially merged into ControlNet.
|
WALIDALI/bekimajic
|
WALIDALI
| 2023-06-28T21:46:07Z | 32 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-28T21:34:07Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### bekimajic Dreambooth model trained by WALIDALI with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
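If you would rather try the concept with `diffusers` than the A1111 Colab, a minimal sketch is below; the instance token (`bekimajic`) used in the prompt is an assumption based on the model name.
```python
import torch
from diffusers import StableDiffusionPipeline

# The "bekimajic" instance token in the prompt is assumed from the model name.
pipe = StableDiffusionPipeline.from_pretrained(
    "WALIDALI/bekimajic", torch_dtype=torch.float16
).to("cuda")
image = pipe("a photo of bekimajic person, studio lighting").images[0]
image.save("bekimajic.png")
```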
Sample pictures of this concept:
|
coreml-community/coreml-realisticVision-v20_cn
|
coreml-community
| 2023-06-28T21:43:14Z | 0 | 2 | null |
[
"coreml",
"stable-diffusion",
"text-to-image",
"not-for-all-eyes",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-06-28T16:31:40Z |
---
license: creativeml-openrail-m
tags:
- coreml
- stable-diffusion
- text-to-image
- not-for-all-eyes
---
# Core ML Converted Model:
- This model was converted to [Core ML for use on Apple Silicon devices](https://github.com/apple/ml-stable-diffusion). Conversion instructions can be found [here](https://github.com/godly-devotion/MochiDiffusion/wiki/How-to-convert-ckpt-or-safetensors-files-to-Core-ML).
- Provide the model to an app such as **Mochi Diffusion** [Github](https://github.com/godly-devotion/MochiDiffusion) / [Discord](https://discord.gg/x2kartzxGv) to generate images.
- `split_einsum` version is compatible with all compute unit options including Neural Engine.
- `original` version is only compatible with `CPU & GPU` option.
- Custom resolution versions are tagged accordingly.
- The `vae-ft-mse-840000-ema-pruned.ckpt` VAE is embedded into the model.
- This model was converted with a `vae-encoder` for use with `image2image`.
- This model is `fp16`.
- Descriptions are posted as-is from original model source.
- Not all features and/or results may be available in `CoreML` format.
- This model does not have the [unet split into chunks](https://github.com/apple/ml-stable-diffusion#-converting-models-to-core-ml).
- This model does not include a `safety checker` (for NSFW content).
- This model can be used with ControlNet
# realisticVision-v20_cn:
Source(s): [Hugging Face](https://huggingface.co/SG161222/Realistic_Vision_V2.0) - [CivitAI](https://civitai.com/models/4201/realistic-vision-v20)
**Please read this!**
My model has always been free and always will be free. There are no restrictions on the use of the model. The rights to this model still belong to me.
This model is available on Mage.Space, Sinkin.ai, GetImg.ai and RandomSeed.co (NSFW content)
You can find out news about this model and future models, as well as support me on Boosty.
Recommended for use with [VAE](https://huggingface.co/stabilityai/sd-vae-ft-mse-original) which has already been baked into the converted `CoreML` model version here.
I use this template to get good generation results:
**Prompt**: RAW photo, subject, (high detailed skin:1.2), 8k uhd, dslr, soft lighting, high quality, film grain, Fujifilm XT3
**Example**: RAW photo, a close up portrait photo of 26 y.o woman in wastelander clothes, long haircut, pale skin, slim body, background is city ruins, (high detailed skin:1.2), 8k uhd, dslr, soft lighting, high quality, film grain, Fujifilm XT3
**Negative Prompt**: (deformed iris, deformed pupils, semi-realistic, cgi, 3d, render, sketch, cartoon, drawing, anime:1.4), text, close up, cropped, out of frame, worst quality, low quality, jpeg artifacts, ugly, duplicate, morbid, mutilated, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, mutation, deformed, blurry, dehydrated, bad anatomy, bad proportions, extra limbs, cloned face, disfigured, gross proportions, malformed limbs, missing arms, missing legs, extra arms, extra legs, fused fingers, too many fingers, long neck
OR
(deformed iris, deformed pupils, semi-realistic, cgi, 3d, render, sketch, cartoon, drawing, anime, mutated hands and fingers:1.4), (deformed, distorted, disfigured:1.3), poorly drawn, bad anatomy, wrong anatomy, extra limb, missing limb, floating limbs, disconnected limbs, mutation, mutated, ugly, disgusting, amputation
`Euler A` or `DPM++ 2M Karras` with 25 steps
`CFG Scale` 7
`Hires Fix` with `Latent` upscaler
0 `Hires Steps` and `Denoising Strength` 0.25 - 0.45
`Upscaling` by 1.1 - 2.0 <br><br>




|
gbellamy/poca-SoccerTwos
|
gbellamy
| 2023-06-28T21:38:59Z | 6 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] |
reinforcement-learning
| 2023-06-28T21:38:47Z |
---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: gbellamy/poca-SoccerTwos
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
lapix/segformer-b3-finetuned-ccagt-400-300
|
lapix
| 2023-06-28T21:19:58Z | 162 | 2 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"segformer",
"vision",
"image-segmentation",
"dataset:lapix/CCAgT",
"license:other",
"endpoints_compatible",
"region:us"
] |
image-segmentation
| 2022-09-12T13:20:00Z |
---
license: other
tags:
- vision
- image-segmentation
task_ids:
- semantic-segmentation
datasets:
- lapix/CCAgT
widget:
- src: https://huggingface.co/lapix/segformer-b3-finetuned-ccagt-400-300/resolve/main/sampleA.png
example_title: Sample A
- src: https://huggingface.co/lapix/segformer-b3-finetuned-ccagt-400-300/resolve/main/sampleB.png
example_title: Sample B
---
# SegFormer (b3-sized) model fine-tuned on CCAgT dataset
SegFormer model fine-tuned on CCAgT dataset at resolution 400x300. It was introduced in the paper [Semantic Segmentation for the Detection of Very Small Objects on Cervical Cell Samples Stained with the {AgNOR} Technique](https://doi.org/10.2139/ssrn.4126881) by [J. G. A. Amorim](https://huggingface.co/johnnv) et al.
This model was trained on a subset of the [CCAgT dataset](https://huggingface.co/datasets/lapix/CCAgT/), so evaluating this model on the dataset available on the Hugging Face Hub will give results that differ from those presented in the paper. For more information about how the model was trained, read the paper.
Disclaimer: This model card has been written based on the SegFormer [model card](https://huggingface.co/nvidia/mit-b3/blob/main/README.md) by the Hugging Face team.
## Model description
SegFormer consists of a hierarchical Transformer encoder and a lightweight all-MLP decode head to achieve great results on semantic segmentation benchmarks such as ADE20K and Cityscapes. The hierarchical Transformer is first pre-trained on ImageNet-1k, after which a decode head is added and fine-tuned altogether on a downstream dataset.
This repository only contains the pre-trained hierarchical Transformer, hence it can be used for fine-tuning purposes.
## Intended uses & limitations
You can use the model for fine-tuning of semantic segmentation. See the [model hub](https://huggingface.co/models?other=segformer) to look for fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to segment an image of the CCAgT dataset:
```python
from transformers import AutoFeatureExtractor, SegformerForSemanticSegmentation
from torch import nn
from PIL import Image
import requests
url = "https://huggingface.co/lapix/segformer-b3-finetuned-ccagt-400-300/resolve/main/sampleB.png"
image = Image.open(requests.get(url, stream=True).raw)
model = SegformerForSemanticSegmentation.from_pretrained("lapix/segformer-b3-finetuned-ccagt-400-300")
feature_extractor = AutoFeatureExtractor.from_pretrained("lapix/segformer-b3-finetuned-ccagt-400-300")
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(pixel_values=inputs.pixel_values)
logits = outputs.logits
# Rescale logits to the original image size (400, 300)
upsampled_logits = nn.functional.interpolate(
    logits,
    size=image.size[::-1],  # (height, width)
    mode="bilinear",
    align_corners=False,
)
segmentation_mask = upsampled_logits.argmax(dim=1)[0]
print("Predicted mask:", segmentation_mask)
```
For more code examples, we refer to the [documentation](https://huggingface.co/transformers/model_doc/segformer.html#).
### License
The license for this model can be found [here](https://github.com/NVlabs/SegFormer/blob/master/LICENSE).
### BibTeX entry and citation info
```bibtex
@article{AtkinsonSegmentationAgNORSSRN2022,
author= {Jo{\~{a}}o Gustavo Atkinson Amorim and Andr{\'{e}} Vict{\'{o}}ria Matias and Allan Cerentini and Fabiana Botelho de Miranda Onofre and Alexandre Sherlley Casimiro Onofre and Aldo von Wangenheim},
doi = {10.2139/ssrn.4126881},
url = {https://doi.org/10.2139/ssrn.4126881},
year = {2022},
publisher = {Elsevier {BV}},
title = {Semantic Segmentation for the Detection of Very Small Objects on Cervical Cell Samples Stained with the {AgNOR} Technique},
journal = {{SSRN} Electronic Journal}
}
```
|
prognosis/falcon7b-cardio-disease-qa-v1
|
prognosis
| 2023-06-28T21:15:59Z | 0 | 0 | null |
[
"tensorboard",
"generated_from_trainer",
"license:apache-2.0",
"region:us"
] | null | 2023-06-28T09:16:23Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: falcon7b-cardio-disease-qa-v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# falcon7b-cardio-disease-qa-v1
This model is a fine-tuned version of [tiiuae/falcon-7b](https://huggingface.co/tiiuae/falcon-7b) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 1500
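The list above maps directly onto `transformers.TrainingArguments`. A minimal sketch of that mapping is below; the `output_dir` is an assumption, and the Adam betas/epsilon match the library defaults, so they are not set explicitly.
```python
from transformers import TrainingArguments

# Sketch of the hyperparameters above as TrainingArguments.
# output_dir is assumed; total_train_batch_size (16) = 4 per device x 4 accumulation steps.
training_args = TrainingArguments(
    output_dir="falcon7b-cardio-disease-qa-v1",  # assumed
    learning_rate=2e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,
    lr_scheduler_type="constant",
    warmup_ratio=0.03,
    max_steps=1500,
    seed=42,
)
```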
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
ROBERTO1900/logos
|
ROBERTO1900
| 2023-06-28T21:09:20Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-28T21:09:20Z |
---
license: creativeml-openrail-m
---
|
cleanrl/Pusher-v4-ddpg_continuous_action-seed1
|
cleanrl
| 2023-06-28T21:08:17Z | 0 | 0 |
cleanrl
|
[
"cleanrl",
"tensorboard",
"Pusher-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-28T21:08:02Z |
---
tags:
- Pusher-v4
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: DDPG
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pusher-v4
type: Pusher-v4
metrics:
- type: mean_reward
value: -30.52 +/- 2.85
name: mean_reward
verified: false
---
# (CleanRL) **DDPG** Agent Playing **Pusher-v4**
This is a trained model of a DDPG agent playing Pusher-v4.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/ddpg_continuous_action.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[ddpg_continuous_action]"
python -m cleanrl_utils.enjoy --exp-name ddpg_continuous_action --env-id Pusher-v4
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Pusher-v4-ddpg_continuous_action-seed1/raw/main/ddpg_continuous_action.py
curl -OL https://huggingface.co/cleanrl/Pusher-v4-ddpg_continuous_action-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Pusher-v4-ddpg_continuous_action-seed1/raw/main/poetry.lock
poetry install --all-extras
python ddpg_continuous_action.py --track --capture-video --save-model --hf-entity cleanrl --upload-model --env-id Pusher-v4 --seed 1
```
# Hyperparameters
```python
{'batch_size': 256,
'buffer_size': 1000000,
'capture_video': True,
'cuda': True,
'env_id': 'Pusher-v4',
'exp_name': 'ddpg_continuous_action',
'exploration_noise': 0.1,
'gamma': 0.99,
'hf_entity': 'cleanrl',
'learning_rate': 0.0003,
'learning_starts': 25000.0,
'noise_clip': 0.5,
'policy_frequency': 2,
'save_model': True,
'seed': 1,
'tau': 0.005,
'torch_deterministic': True,
'total_timesteps': 1000000,
'track': True,
'upload_model': True,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
NeoCodes-dev/ppo-SnowballTarget
|
NeoCodes-dev
| 2023-06-28T21:03:00Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-06-28T21:02:56Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: NeoCodes-dev/ppo-SnowballTarget
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
NasimB/bert-dp-4
|
NasimB
| 2023-06-28T21:01:05Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"generated_from_trainer",
"dataset:generator",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-06-26T01:24:27Z |
---
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: bert-dp-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-dp-4
This model is a fine-tuned version of [](https://huggingface.co/) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4611
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 180
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:-----:|:---------------:|
| 6.3492 | 1.89 | 1000 | 5.9327 |
| 5.8333 | 3.78 | 2000 | 5.8515 |
| 5.7604 | 5.67 | 3000 | 5.8483 |
| 5.7137 | 7.56 | 4000 | 5.7914 |
| 5.6597 | 9.45 | 5000 | 5.7672 |
| 5.6213 | 11.34 | 6000 | 5.7594 |
| 5.5798 | 13.23 | 7000 | 5.7352 |
| 5.5482 | 15.12 | 8000 | 5.7275 |
| 5.513 | 17.01 | 9000 | 5.7203 |
| 5.485 | 18.9 | 10000 | 5.7211 |
| 5.4498 | 20.79 | 11000 | 5.6947 |
| 5.4175 | 22.68 | 12000 | 5.6923 |
| 5.3877 | 24.57 | 13000 | 5.6879 |
| 5.3635 | 26.47 | 14000 | 5.6776 |
| 5.3389 | 28.36 | 15000 | 5.6757 |
| 5.3166 | 30.25 | 16000 | 5.6758 |
| 5.2951 | 32.14 | 17000 | 5.6676 |
| 5.2793 | 34.03 | 18000 | 5.6711 |
| 5.2684 | 35.92 | 19000 | 5.6687 |
| 5.2609 | 37.81 | 20000 | 5.6684 |
| 5.2606 | 39.7 | 21000 | 5.6719 |
| 5.2624 | 41.59 | 22000 | 5.6697 |
| 5.2551 | 43.48 | 23000 | 5.6718 |
| 5.2461 | 45.37 | 24000 | 5.6699 |
| 5.2431 | 47.26 | 25000 | 5.6692 |
| 5.2414 | 49.15 | 26000 | 5.6691 |
| 5.2856 | 51.04 | 27000 | 5.6823 |
| 5.2753 | 52.93 | 28000 | 5.6860 |
| 5.2549 | 54.82 | 29000 | 5.6877 |
| 5.2276 | 56.71 | 30000 | 5.6285 |
| 5.1674 | 58.6 | 31000 | 5.5439 |
| 5.0894 | 60.49 | 32000 | 5.4082 |
| 4.9508 | 62.38 | 33000 | 5.1598 |
| 4.7453 | 64.27 | 34000 | 4.9274 |
| 4.5898 | 66.16 | 35000 | 4.7884 |
| 4.4656 | 68.05 | 36000 | 4.6531 |
| 4.35 | 69.94 | 37000 | 4.5123 |
| 4.2378 | 71.83 | 38000 | 4.4012 |
| 4.1496 | 73.72 | 39000 | 4.3240 |
| 4.0891 | 75.61 | 40000 | 4.2763 |
| 4.0538 | 77.5 | 41000 | 4.2520 |
| 4.0448 | 79.4 | 42000 | 4.2485 |
| 3.9724 | 81.29 | 43000 | 3.9940 |
| 3.6527 | 83.18 | 44000 | 3.7442 |
| 3.4172 | 85.07 | 45000 | 3.5713 |
| 3.2446 | 86.96 | 46000 | 3.4403 |
| 3.4764 | 88.85 | 47000 | 3.3796 |
| 3.0543 | 90.74 | 48000 | 3.2884 |
| 2.9549 | 92.63 | 49000 | 3.2107 |
| 2.8785 | 94.52 | 50000 | 3.1466 |
| 2.8143 | 96.41 | 51000 | 3.0788 |
| 2.7605 | 98.3 | 52000 | 3.0230 |
| 2.7111 | 100.19 | 53000 | 2.9802 |
| 2.6727 | 102.08 | 54000 | 2.9414 |
| 2.6417 | 103.97 | 55000 | 2.9167 |
| 2.612 | 105.86 | 56000 | 2.8927 |
| 2.5918 | 107.75 | 57000 | 2.8769 |
| 2.5769 | 109.64 | 58000 | 2.8637 |
| 2.566 | 111.53 | 59000 | 2.8551 |
| 2.556 | 113.42 | 60000 | 2.8458 |
| 2.548 | 115.31 | 61000 | 2.8488 |
| 2.5468 | 117.2 | 62000 | 2.8412 |
| 2.5453 | 119.09 | 63000 | 2.8383 |
| 2.7567 | 120.98 | 64000 | 2.8857 |
| 2.6017 | 122.87 | 65000 | 2.8382 |
| 2.5416 | 124.76 | 66000 | 2.7862 |
| 2.484 | 126.65 | 67000 | 2.7415 |
| 2.4361 | 128.54 | 68000 | 2.7079 |
| 2.3925 | 130.43 | 69000 | 2.6771 |
| 2.3512 | 132.33 | 70000 | 2.6542 |
| 2.3146 | 134.22 | 71000 | 2.6327 |
| 2.2805 | 136.11 | 72000 | 2.6119 |
| 2.2494 | 138.0 | 73000 | 2.5903 |
| 2.2218 | 139.89 | 74000 | 2.5734 |
| 2.1955 | 141.78 | 75000 | 2.5584 |
| 2.1739 | 143.67 | 76000 | 2.5459 |
| 2.154 | 145.56 | 77000 | 2.5337 |
| 2.1324 | 147.45 | 78000 | 2.5260 |
| 2.1149 | 149.34 | 79000 | 2.5169 |
| 2.096 | 151.23 | 80000 | 2.5095 |
| 2.083 | 153.12 | 81000 | 2.5045 |
| 2.0666 | 155.01 | 82000 | 2.4911 |
| 2.0562 | 156.9 | 83000 | 2.4907 |
| 2.0437 | 158.79 | 84000 | 2.4808 |
| 2.0356 | 160.68 | 85000 | 2.4816 |
| 2.0317 | 162.57 | 86000 | 2.4758 |
| 2.0201 | 164.46 | 87000 | 2.4724 |
| 2.0138 | 166.35 | 88000 | 2.4723 |
| 2.0095 | 168.24 | 89000 | 2.4651 |
| 2.0056 | 170.13 | 90000 | 2.4651 |
| 2.0021 | 172.02 | 91000 | 2.4616 |
| 1.9974 | 173.91 | 92000 | 2.4611 |
| 1.9985 | 175.8 | 93000 | 2.4613 |
| 1.9954 | 177.69 | 94000 | 2.4579 |
| 1.9979 | 179.58 | 95000 | 2.4611 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
mxdza/ppo-LunarLander-v2
|
mxdza
| 2023-06-28T20:58:03Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-28T20:57:42Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 240.30 +/- 20.26
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
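A minimal loading-and-evaluation sketch using the usual `huggingface_sb3` workflow is below. The checkpoint filename is an assumption (check the files in this repo), and the example expects `gymnasium` with Box2D installed.
```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Filename is assumed; adjust to the actual .zip stored in this repository.
checkpoint = load_from_hub(repo_id="mxdza/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```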
|
InriaValda/lstm_font_seq_ordering
|
InriaValda
| 2023-06-28T20:45:27Z | 0 | 0 | null |
[
"text-classification",
"region:us"
] |
text-classification
| 2023-06-28T20:42:55Z |
---
pipeline_tag: text-classification
---
|
StefanV28/HandSign2
|
StefanV28
| 2023-06-28T20:45:21Z | 3 | 0 |
tf-keras
|
[
"tf-keras",
"mobilenet",
"image-classification",
"region:us"
] |
image-classification
| 2023-06-28T09:41:25Z |
---
pipeline_tag: image-classification
---
|
lunarti/ppo-Huggy
|
lunarti
| 2023-06-28T20:44:39Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-06-28T20:44:32Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: lunarti/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
JaimeAi/distilbert-base-multilingual-cased-finetuned-MeIA-AnalisisDeSentimientos
|
JaimeAi
| 2023-06-28T20:41:19Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-28T19:53:36Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: distilbert-base-multilingual-cased-finetuned-MeIA-AnalisisDeSentimientos
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-multilingual-cased-finetuned-MeIA-AnalisisDeSentimientos
This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0298
- F1: 0.5415
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.098 | 1.0 | 766 | 1.0509 | 0.5115 |
| 0.9349 | 2.0 | 1532 | 1.0298 | 0.5415 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
mnaylor/mega-base-wikitext
|
mnaylor
| 2023-06-28T20:32:48Z | 1,615 | 1 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"mega",
"fill-mask",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-02-21T20:56:10Z |
---
license: apache-2.0
language:
- en
library_name: transformers
---
# Mega Masked LM on wikitext-103
This is the location on the Hugging Face Hub for the Mega MLM checkpoint. I trained this model on the `wikitext-103` dataset with standard
BERT-style masked LM pretraining via the [original Mega repository](https://github.com/facebookresearch/mega) and initially uploaded the weights
to hf.co/mnaylor/mega-wikitext-103. Once the Mega implementation in Hugging Face's `transformers` is finished, the weights here
are designed to be used with `MegaForMaskedLM` and are compatible with the other (encoder-based) `MegaFor*` model classes.
This model uses the RoBERTa base tokenizer since the Mega paper does not implement a specific tokenizer aside from the character-level
tokenizer used to illustrate long-sequence performance.
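As a minimal sketch of the intended usage, the snippet below runs a fill-mask prediction; it assumes a `transformers` release that includes the Mega integration, and it loads the RoBERTa-base tokenizer directly, as described above.
```python
import torch
from transformers import AutoTokenizer, MegaForMaskedLM

# Tokenizer is RoBERTa-base per the description above; the Mega classes require
# a transformers version that ships the Mega integration.
tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = MegaForMaskedLM.from_pretrained("mnaylor/mega-base-wikitext")

text = f"The capital of France is {tokenizer.mask_token}."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Highest-scoring token at the mask position
mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
predicted_id = logits[0, mask_pos].argmax(dim=-1)
print(tokenizer.decode(predicted_id))
```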
|
hartholt/stablelm-tuned-alpha-7b
|
hartholt
| 2023-06-28T20:31:55Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"causal-lm",
"en",
"dataset:dmayhem93/ChatCombined",
"dataset:tatsu-lab/alpaca",
"dataset:nomic-ai/gpt4all_prompt_generations",
"dataset:Dahoas/full-hh-rlhf",
"dataset:jeffwan/sharegpt_vicuna",
"dataset:HuggingFaceH4/databricks_dolly_15k",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-28T20:31:55Z |
---
language:
- en
tags:
- causal-lm
license: cc-by-nc-sa-4.0
datasets:
- dmayhem93/ChatCombined
- tatsu-lab/alpaca
- nomic-ai/gpt4all_prompt_generations
- Dahoas/full-hh-rlhf
- jeffwan/sharegpt_vicuna
- HuggingFaceH4/databricks_dolly_15k
duplicated_from: stabilityai/stablelm-tuned-alpha-7b
---
# StableLM-Tuned-Alpha
## Model Description
`StableLM-Tuned-Alpha` is a suite of 3B and 7B parameter decoder-only language models built on top of the `StableLM-Base-Alpha` models and further fine-tuned on various chat and instruction-following datasets.
## Usage
Get started chatting with `StableLM-Tuned-Alpha` by using the following code snippet:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, StoppingCriteria, StoppingCriteriaList
tokenizer = AutoTokenizer.from_pretrained("StabilityAI/stablelm-tuned-alpha-7b")
model = AutoModelForCausalLM.from_pretrained("StabilityAI/stablelm-tuned-alpha-7b")
model.half().cuda()
class StopOnTokens(StoppingCriteria):
def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs) -> bool:
stop_ids = [50278, 50279, 50277, 1, 0]
for stop_id in stop_ids:
if input_ids[0][-1] == stop_id:
return True
return False
system_prompt = """<|SYSTEM|># StableLM Tuned (Alpha version)
- StableLM is a helpful and harmless open-source AI language model developed by StabilityAI.
- StableLM is excited to be able to help the user, but will refuse to do anything that could be considered harmful to the user.
- StableLM is more than just an information source, StableLM is also able to write poetry, short stories, and make jokes.
- StableLM will refuse to participate in anything that could harm a human.
"""
prompt = f"{system_prompt}<|USER|>What's your mood today?<|ASSISTANT|>"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
tokens = model.generate(
**inputs,
max_new_tokens=64,
temperature=0.7,
do_sample=True,
stopping_criteria=StoppingCriteriaList([StopOnTokens()])
)
print(tokenizer.decode(tokens[0], skip_special_tokens=True))
```
StableLM Tuned should be used with prompts formatted to `<|SYSTEM|>...<|USER|>...<|ASSISTANT|>...`
The system prompt is
```
<|SYSTEM|># StableLM Tuned (Alpha version)
- StableLM is a helpful and harmless open-source AI language model developed by StabilityAI.
- StableLM is excited to be able to help the user, but will refuse to do anything that could be considered harmful to the user.
- StableLM is more than just an information source, StableLM is also able to write poetry, short stories, and make jokes.
- StableLM will refuse to participate in anything that could harm a human.
```
## Model Details
* **Developed by**: [Stability AI](https://stability.ai/)
* **Model type**: StableLM-Tuned-Alpha models are auto-regressive language models based on the NeoX transformer architecture.
* **Language(s)**: English
* **Library**: [HuggingFace Transformers](https://github.com/huggingface/transformers)
* **License**: Fine-tuned checkpoints (`StableLM-Tuned-Alpha`) are licensed under the Non-Commercial Creative Commons license ([CC BY-NC-SA-4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/)), in-line with the original non-commercial license specified by [Stanford Alpaca](https://github.com/tatsu-lab/stanford_alpaca).
* **Contact**: For questions and comments about the model, please email `[email protected]`
## Training
| Parameters | Hidden Size | Layers | Heads | Sequence Length |
|------------|-------------|--------|-------|-----------------|
| 3B | 4096 | 16 | 32 | 4096 |
| 7B | 6144 | 16 | 48 | 4096 |
### Training Dataset
`StableLM-Tuned-Alpha` models are fine-tuned on a combination of five datasets:
[Alpaca](https://huggingface.co/datasets/tatsu-lab/alpaca), a dataset of 52,000 instructions and demonstrations generated by OpenAI's `text-davinci-003` engine.
[GPT4All Prompt Generations](https://huggingface.co/datasets/nomic-ai/gpt4all_prompt_generations), which consists of 400k prompts and responses generated by GPT-4;
[Anthropic HH](https://huggingface.co/datasets/Dahoas/full-hh-rlhf), made up of preferences about AI assistant helpfulness and harmlessness;
[DataBricks Dolly](https://github.com/databrickslabs/dolly), comprising 15k instruction/responses generated by Databricks employees in capability domains from the InstructGPT paper, including brainstorming, classification, closed QA, generation, information extraction, open QA and summarization;
and [ShareGPT Vicuna (English subset)](https://huggingface.co/datasets/jeffwan/sharegpt_vicuna), a dataset of conversations retrieved from [ShareGPT](https://sharegpt.com/).
### Training Procedure
Models are learned via supervised fine-tuning on the aforementioned datasets, trained in mixed-precision (FP16), and optimized with AdamW. We outline the following hyperparameters:
| Parameters | Batch Size | Learning Rate | Warm-up | Weight Decay | Betas |
|------------|------------|---------------|---------|--------------|-------------|
| 3B | 256 | 2e-5 | 50 | 0.01 | (0.9, 0.99) |
| 7B | 128 | 2e-5 | 100 | 0.01 | (0.9, 0.99) |
## Use and Limitations
### Intended Use
These models are intended to be used by the open-source community in chat-like applications, in adherence with the [CC BY-NC-SA-4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/) license.
### Limitations and bias
Although the aforementioned datasets help to steer the base language models into "safer" distributions of text, not all biases and toxicity can be mitigated through fine-tuning. We ask that users be mindful of such potential issues that can arise in generated responses. Do not treat model outputs as substitutes for human judgment or as sources of truth. Please use responsibly.
## Acknowledgements
This work would not have been possible without the helpful hand of Dakota Mahan ([@dmayhem93](https://huggingface.co/dmayhem93)).
## Citations
```bibtex
@misc{alpaca,
author = {Rohan Taori and Ishaan Gulrajani and Tianyi Zhang and Yann Dubois and Xuechen Li and Carlos Guestrin and Percy Liang and Tatsunori B. Hashimoto },
title = {Stanford Alpaca: An Instruction-following LLaMA model},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/tatsu-lab/stanford_alpaca}},
}
```
```bibtex
@misc{vicuna2023,
title = {Vicuna: An Open-Source Chatbot Impressing GPT-4 with 90%* ChatGPT Quality},
url = {https://vicuna.lmsys.org},
author = {Chiang, Wei-Lin and Li, Zhuohan and Lin, Zi and Sheng, Ying and Wu, Zhanghao and Zhang, Hao and Zheng, Lianmin and Zhuang, Siyuan and Zhuang, Yonghao and Gonzalez, Joseph E. and Stoica, Ion and Xing, Eric P.},
month = {March},
year = {2023}
}
```
```bibtex
@misc{gpt4all,
author = {Yuvanesh Anand and Zach Nussbaum and Brandon Duderstadt and Benjamin Schmidt and Andriy Mulyar},
title = {GPT4All: Training an Assistant-style Chatbot with Large Scale Data Distillation from GPT-3.5-Turbo},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/nomic-ai/gpt4all}},
}
```
|
LanguageMachines/blip2-opt-2.7b
|
LanguageMachines
| 2023-06-28T20:28:53Z | 10 | 0 |
transformers
|
[
"transformers",
"pytorch",
"blip-2",
"visual-question-answering",
"vision",
"image-to-text",
"image-captioning",
"en",
"arxiv:2301.12597",
"license:mit",
"endpoints_compatible",
"region:us"
] |
image-to-text
| 2023-06-28T19:56:29Z |
---
language: en
license: mit
tags:
- vision
- image-to-text
- image-captioning
- visual-question-answering
pipeline_tag: image-to-text
duplicated_from: Salesforce/blip2-opt-2.7b
---
# BLIP-2, OPT-2.7b, pre-trained only
BLIP-2 model, leveraging [OPT-2.7b](https://huggingface.co/facebook/opt-2.7b) (a large language model with 2.7 billion parameters).
It was introduced in the paper [BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models](https://arxiv.org/abs/2301.12597) by Li et al. and first released in [this repository](https://github.com/salesforce/LAVIS/tree/main/projects/blip2).
Disclaimer: The team releasing BLIP-2 did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
BLIP-2 consists of 3 models: a CLIP-like image encoder, a Querying Transformer (Q-Former) and a large language model.
The authors initialize the weights of the image encoder and large language model from pre-trained checkpoints and keep them frozen
while training the Querying Transformer, which is a BERT-like Transformer encoder that maps a set of "query tokens" to query embeddings,
which bridge the gap between the embedding space of the image encoder and the large language model.
The goal for the model is simply to predict the next text token, given the query embeddings and the previous text.
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/blip2_architecture.jpg"
alt="drawing" width="600"/>
This allows the model to be used for tasks like:
- image captioning
- visual question answering (VQA)
- chat-like conversations by feeding the image and the previous conversation as prompt to the model
## Direct Use and Downstream Use
You can use the raw model for conditional text generation given an image and optional text. See the [model hub](https://huggingface.co/models?search=Salesforce/blip) to look for
fine-tuned versions on a task that interests you.
## Bias, Risks, Limitations, and Ethical Considerations
BLIP2-OPT uses off-the-shelf OPT as the language model. It inherits the same risks and limitations as mentioned in Meta's model card.
> Like other large language models for which the diversity (or lack thereof) of training
> data induces downstream impact on the quality of our model, OPT-175B has limitations in terms
> of bias and safety. OPT-175B can also have quality issues in terms of generation diversity and
> hallucination. In general, OPT-175B is not immune from the plethora of issues that plague modern
> large language models.
>
BLIP2 is fine-tuned on image-text datasets (e.g. [LAION](https://laion.ai/blog/laion-400-open-dataset/) ) collected from the internet. As a result the model itself is potentially vulnerable to generating equivalently inappropriate content or replicating inherent biases in the underlying data.
BLIP2 has not been tested in real world applications. It should not be directly deployed in any applications. Researchers should first carefully assess the safety and fairness of the model in relation to the specific context they’re being deployed within.
### How to use
For code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/blip-2#transformers.Blip2ForConditionalGeneration.forward.example).
#### Running the model on CPU
<details>
<summary> Click to expand </summary>
```python
import requests
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration
processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b")
img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')
question = "how many dogs are in the picture?"
inputs = processor(raw_image, question, return_tensors="pt")
out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
```
</details>
#### Running the model on GPU
##### In full precision
<details>
<summary> Click to expand </summary>
```python
# pip install accelerate
import requests
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration
processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b", device_map="auto")
img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')
question = "how many dogs are in the picture?"
inputs = processor(raw_image, question, return_tensors="pt").to("cuda")
out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
```
</details>
##### In half precision (`float16`)
<details>
<summary> Click to expand </summary>
```python
# pip install accelerate
import torch
import requests
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration
processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b", torch_dtype=torch.float16, device_map="auto")
img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')
question = "how many dogs are in the picture?"
inputs = processor(raw_image, question, return_tensors="pt").to("cuda", torch.float16)
out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
```
</details>
##### In 8-bit precision (`int8`)
<details>
<summary> Click to expand </summary>
```python
# pip install accelerate bitsandbytes
import torch
import requests
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration
processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b", load_in_8bit=True, device_map="auto")
img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')
question = "how many dogs are in the picture?"
inputs = processor(raw_image, question, return_tensors="pt").to("cuda", torch.float16)
out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
```
</details>
|
fresha/e5-large-v2-endpoint
|
fresha
| 2023-06-28T20:21:13Z | 21 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"bert",
"feature-extraction",
"mteb",
"en",
"arxiv:2212.03533",
"arxiv:2104.08663",
"arxiv:2210.07316",
"license:mit",
"model-index",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2023-06-28T18:23:10Z |
---
tags:
- mteb
model-index:
- name: e5-large-v2
results:
- task:
type: Classification
dataset:
type: mteb/amazon_counterfactual
name: MTEB AmazonCounterfactualClassification (en)
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 79.22388059701493
- type: ap
value: 43.20816505595132
- type: f1
value: 73.27811303522058
- task:
type: Classification
dataset:
type: mteb/amazon_polarity
name: MTEB AmazonPolarityClassification
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 93.748325
- type: ap
value: 90.72534979701297
- type: f1
value: 93.73895874282185
- task:
type: Classification
dataset:
type: mteb/amazon_reviews_multi
name: MTEB AmazonReviewsClassification (en)
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 48.612
- type: f1
value: 47.61157345898393
- task:
type: Retrieval
dataset:
type: arguana
name: MTEB ArguAna
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 23.541999999999998
- type: map_at_10
value: 38.208
- type: map_at_100
value: 39.417
- type: map_at_1000
value: 39.428999999999995
- type: map_at_3
value: 33.95
- type: map_at_5
value: 36.329
- type: mrr_at_1
value: 23.755000000000003
- type: mrr_at_10
value: 38.288
- type: mrr_at_100
value: 39.511
- type: mrr_at_1000
value: 39.523
- type: mrr_at_3
value: 34.009
- type: mrr_at_5
value: 36.434
- type: ndcg_at_1
value: 23.541999999999998
- type: ndcg_at_10
value: 46.417
- type: ndcg_at_100
value: 51.812000000000005
- type: ndcg_at_1000
value: 52.137
- type: ndcg_at_3
value: 37.528
- type: ndcg_at_5
value: 41.81
- type: precision_at_1
value: 23.541999999999998
- type: precision_at_10
value: 7.269
- type: precision_at_100
value: 0.9690000000000001
- type: precision_at_1000
value: 0.099
- type: precision_at_3
value: 15.979
- type: precision_at_5
value: 11.664
- type: recall_at_1
value: 23.541999999999998
- type: recall_at_10
value: 72.688
- type: recall_at_100
value: 96.871
- type: recall_at_1000
value: 99.431
- type: recall_at_3
value: 47.937000000000005
- type: recall_at_5
value: 58.321
- task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-p2p
name: MTEB ArxivClusteringP2P
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 45.546499570522094
- task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-s2s
name: MTEB ArxivClusteringS2S
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 41.01607489943561
- task:
type: Reranking
dataset:
type: mteb/askubuntudupquestions-reranking
name: MTEB AskUbuntuDupQuestions
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 59.616107510107774
- type: mrr
value: 72.75106626214661
- task:
type: STS
dataset:
type: mteb/biosses-sts
name: MTEB BIOSSES
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 84.33018094733868
- type: cos_sim_spearman
value: 83.60190492611737
- type: euclidean_pearson
value: 82.1492450218961
- type: euclidean_spearman
value: 82.70308926526991
- type: manhattan_pearson
value: 81.93959600076842
- type: manhattan_spearman
value: 82.73260801016369
- task:
type: Classification
dataset:
type: mteb/banking77
name: MTEB Banking77Classification
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 84.54545454545455
- type: f1
value: 84.49582530928923
- task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-p2p
name: MTEB BiorxivClusteringP2P
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 37.362725540120096
- task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-s2s
name: MTEB BiorxivClusteringS2S
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 34.849509608178145
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackAndroidRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 31.502999999999997
- type: map_at_10
value: 43.323
- type: map_at_100
value: 44.708999999999996
- type: map_at_1000
value: 44.838
- type: map_at_3
value: 38.987
- type: map_at_5
value: 41.516999999999996
- type: mrr_at_1
value: 38.769999999999996
- type: mrr_at_10
value: 49.13
- type: mrr_at_100
value: 49.697
- type: mrr_at_1000
value: 49.741
- type: mrr_at_3
value: 45.804
- type: mrr_at_5
value: 47.842
- type: ndcg_at_1
value: 38.769999999999996
- type: ndcg_at_10
value: 50.266999999999996
- type: ndcg_at_100
value: 54.967
- type: ndcg_at_1000
value: 56.976000000000006
- type: ndcg_at_3
value: 43.823
- type: ndcg_at_5
value: 47.12
- type: precision_at_1
value: 38.769999999999996
- type: precision_at_10
value: 10.057
- type: precision_at_100
value: 1.554
- type: precision_at_1000
value: 0.202
- type: precision_at_3
value: 21.125
- type: precision_at_5
value: 15.851
- type: recall_at_1
value: 31.502999999999997
- type: recall_at_10
value: 63.715999999999994
- type: recall_at_100
value: 83.61800000000001
- type: recall_at_1000
value: 96.63199999999999
- type: recall_at_3
value: 45.403
- type: recall_at_5
value: 54.481
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackEnglishRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 27.833000000000002
- type: map_at_10
value: 37.330999999999996
- type: map_at_100
value: 38.580999999999996
- type: map_at_1000
value: 38.708
- type: map_at_3
value: 34.713
- type: map_at_5
value: 36.104
- type: mrr_at_1
value: 35.223
- type: mrr_at_10
value: 43.419000000000004
- type: mrr_at_100
value: 44.198
- type: mrr_at_1000
value: 44.249
- type: mrr_at_3
value: 41.614000000000004
- type: mrr_at_5
value: 42.553000000000004
- type: ndcg_at_1
value: 35.223
- type: ndcg_at_10
value: 42.687999999999995
- type: ndcg_at_100
value: 47.447
- type: ndcg_at_1000
value: 49.701
- type: ndcg_at_3
value: 39.162
- type: ndcg_at_5
value: 40.557
- type: precision_at_1
value: 35.223
- type: precision_at_10
value: 7.962
- type: precision_at_100
value: 1.304
- type: precision_at_1000
value: 0.18
- type: precision_at_3
value: 19.023
- type: precision_at_5
value: 13.184999999999999
- type: recall_at_1
value: 27.833000000000002
- type: recall_at_10
value: 51.881
- type: recall_at_100
value: 72.04
- type: recall_at_1000
value: 86.644
- type: recall_at_3
value: 40.778
- type: recall_at_5
value: 45.176
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackGamingRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 38.175
- type: map_at_10
value: 51.174
- type: map_at_100
value: 52.26499999999999
- type: map_at_1000
value: 52.315999999999995
- type: map_at_3
value: 47.897
- type: map_at_5
value: 49.703
- type: mrr_at_1
value: 43.448
- type: mrr_at_10
value: 54.505
- type: mrr_at_100
value: 55.216
- type: mrr_at_1000
value: 55.242000000000004
- type: mrr_at_3
value: 51.98500000000001
- type: mrr_at_5
value: 53.434000000000005
- type: ndcg_at_1
value: 43.448
- type: ndcg_at_10
value: 57.282
- type: ndcg_at_100
value: 61.537
- type: ndcg_at_1000
value: 62.546
- type: ndcg_at_3
value: 51.73799999999999
- type: ndcg_at_5
value: 54.324
- type: precision_at_1
value: 43.448
- type: precision_at_10
value: 9.292
- type: precision_at_100
value: 1.233
- type: precision_at_1000
value: 0.136
- type: precision_at_3
value: 23.218
- type: precision_at_5
value: 15.887
- type: recall_at_1
value: 38.175
- type: recall_at_10
value: 72.00999999999999
- type: recall_at_100
value: 90.155
- type: recall_at_1000
value: 97.257
- type: recall_at_3
value: 57.133
- type: recall_at_5
value: 63.424
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackGisRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 22.405
- type: map_at_10
value: 30.043
- type: map_at_100
value: 31.191000000000003
- type: map_at_1000
value: 31.275
- type: map_at_3
value: 27.034000000000002
- type: map_at_5
value: 28.688000000000002
- type: mrr_at_1
value: 24.068
- type: mrr_at_10
value: 31.993
- type: mrr_at_100
value: 32.992
- type: mrr_at_1000
value: 33.050000000000004
- type: mrr_at_3
value: 28.964000000000002
- type: mrr_at_5
value: 30.653000000000002
- type: ndcg_at_1
value: 24.068
- type: ndcg_at_10
value: 35.198
- type: ndcg_at_100
value: 40.709
- type: ndcg_at_1000
value: 42.855
- type: ndcg_at_3
value: 29.139
- type: ndcg_at_5
value: 32.045
- type: precision_at_1
value: 24.068
- type: precision_at_10
value: 5.65
- type: precision_at_100
value: 0.885
- type: precision_at_1000
value: 0.11199999999999999
- type: precision_at_3
value: 12.279
- type: precision_at_5
value: 8.994
- type: recall_at_1
value: 22.405
- type: recall_at_10
value: 49.391
- type: recall_at_100
value: 74.53699999999999
- type: recall_at_1000
value: 90.605
- type: recall_at_3
value: 33.126
- type: recall_at_5
value: 40.073
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackMathematicaRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 13.309999999999999
- type: map_at_10
value: 20.688000000000002
- type: map_at_100
value: 22.022
- type: map_at_1000
value: 22.152
- type: map_at_3
value: 17.954
- type: map_at_5
value: 19.439
- type: mrr_at_1
value: 16.294
- type: mrr_at_10
value: 24.479
- type: mrr_at_100
value: 25.515
- type: mrr_at_1000
value: 25.593
- type: mrr_at_3
value: 21.642
- type: mrr_at_5
value: 23.189999999999998
- type: ndcg_at_1
value: 16.294
- type: ndcg_at_10
value: 25.833000000000002
- type: ndcg_at_100
value: 32.074999999999996
- type: ndcg_at_1000
value: 35.083
- type: ndcg_at_3
value: 20.493
- type: ndcg_at_5
value: 22.949
- type: precision_at_1
value: 16.294
- type: precision_at_10
value: 5.112
- type: precision_at_100
value: 0.96
- type: precision_at_1000
value: 0.134
- type: precision_at_3
value: 9.908999999999999
- type: precision_at_5
value: 7.587000000000001
- type: recall_at_1
value: 13.309999999999999
- type: recall_at_10
value: 37.851
- type: recall_at_100
value: 64.835
- type: recall_at_1000
value: 86.334
- type: recall_at_3
value: 23.493
- type: recall_at_5
value: 29.528
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackPhysicsRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 25.857999999999997
- type: map_at_10
value: 35.503
- type: map_at_100
value: 36.957
- type: map_at_1000
value: 37.065
- type: map_at_3
value: 32.275999999999996
- type: map_at_5
value: 34.119
- type: mrr_at_1
value: 31.954
- type: mrr_at_10
value: 40.851
- type: mrr_at_100
value: 41.863
- type: mrr_at_1000
value: 41.900999999999996
- type: mrr_at_3
value: 38.129999999999995
- type: mrr_at_5
value: 39.737
- type: ndcg_at_1
value: 31.954
- type: ndcg_at_10
value: 41.343999999999994
- type: ndcg_at_100
value: 47.397
- type: ndcg_at_1000
value: 49.501
- type: ndcg_at_3
value: 36.047000000000004
- type: ndcg_at_5
value: 38.639
- type: precision_at_1
value: 31.954
- type: precision_at_10
value: 7.68
- type: precision_at_100
value: 1.247
- type: precision_at_1000
value: 0.16199999999999998
- type: precision_at_3
value: 17.132
- type: precision_at_5
value: 12.589
- type: recall_at_1
value: 25.857999999999997
- type: recall_at_10
value: 53.43599999999999
- type: recall_at_100
value: 78.82400000000001
- type: recall_at_1000
value: 92.78999999999999
- type: recall_at_3
value: 38.655
- type: recall_at_5
value: 45.216
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackProgrammersRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 24.709
- type: map_at_10
value: 34.318
- type: map_at_100
value: 35.657
- type: map_at_1000
value: 35.783
- type: map_at_3
value: 31.326999999999998
- type: map_at_5
value: 33.021
- type: mrr_at_1
value: 30.137000000000004
- type: mrr_at_10
value: 39.093
- type: mrr_at_100
value: 39.992
- type: mrr_at_1000
value: 40.056999999999995
- type: mrr_at_3
value: 36.606
- type: mrr_at_5
value: 37.861
- type: ndcg_at_1
value: 30.137000000000004
- type: ndcg_at_10
value: 39.974
- type: ndcg_at_100
value: 45.647999999999996
- type: ndcg_at_1000
value: 48.259
- type: ndcg_at_3
value: 35.028
- type: ndcg_at_5
value: 37.175999999999995
- type: precision_at_1
value: 30.137000000000004
- type: precision_at_10
value: 7.363
- type: precision_at_100
value: 1.184
- type: precision_at_1000
value: 0.161
- type: precision_at_3
value: 16.857
- type: precision_at_5
value: 11.963
- type: recall_at_1
value: 24.709
- type: recall_at_10
value: 52.087
- type: recall_at_100
value: 76.125
- type: recall_at_1000
value: 93.82300000000001
- type: recall_at_3
value: 38.149
- type: recall_at_5
value: 43.984
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 23.40791666666667
- type: map_at_10
value: 32.458083333333335
- type: map_at_100
value: 33.691916666666664
- type: map_at_1000
value: 33.81191666666666
- type: map_at_3
value: 29.51625
- type: map_at_5
value: 31.168083333333335
- type: mrr_at_1
value: 27.96591666666666
- type: mrr_at_10
value: 36.528583333333344
- type: mrr_at_100
value: 37.404
- type: mrr_at_1000
value: 37.464333333333336
- type: mrr_at_3
value: 33.92883333333333
- type: mrr_at_5
value: 35.41933333333333
- type: ndcg_at_1
value: 27.96591666666666
- type: ndcg_at_10
value: 37.89141666666666
- type: ndcg_at_100
value: 43.23066666666666
- type: ndcg_at_1000
value: 45.63258333333333
- type: ndcg_at_3
value: 32.811249999999994
- type: ndcg_at_5
value: 35.22566666666667
- type: precision_at_1
value: 27.96591666666666
- type: precision_at_10
value: 6.834083333333332
- type: precision_at_100
value: 1.12225
- type: precision_at_1000
value: 0.15241666666666667
- type: precision_at_3
value: 15.264333333333335
- type: precision_at_5
value: 11.039416666666666
- type: recall_at_1
value: 23.40791666666667
- type: recall_at_10
value: 49.927083333333336
- type: recall_at_100
value: 73.44641666666668
- type: recall_at_1000
value: 90.19950000000001
- type: recall_at_3
value: 35.88341666666667
- type: recall_at_5
value: 42.061249999999994
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackStatsRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 19.592000000000002
- type: map_at_10
value: 26.895999999999997
- type: map_at_100
value: 27.921000000000003
- type: map_at_1000
value: 28.02
- type: map_at_3
value: 24.883
- type: map_at_5
value: 25.812
- type: mrr_at_1
value: 22.698999999999998
- type: mrr_at_10
value: 29.520999999999997
- type: mrr_at_100
value: 30.458000000000002
- type: mrr_at_1000
value: 30.526999999999997
- type: mrr_at_3
value: 27.633000000000003
- type: mrr_at_5
value: 28.483999999999998
- type: ndcg_at_1
value: 22.698999999999998
- type: ndcg_at_10
value: 31.061
- type: ndcg_at_100
value: 36.398
- type: ndcg_at_1000
value: 38.89
- type: ndcg_at_3
value: 27.149
- type: ndcg_at_5
value: 28.627000000000002
- type: precision_at_1
value: 22.698999999999998
- type: precision_at_10
value: 5.106999999999999
- type: precision_at_100
value: 0.857
- type: precision_at_1000
value: 0.11499999999999999
- type: precision_at_3
value: 11.963
- type: precision_at_5
value: 8.221
- type: recall_at_1
value: 19.592000000000002
- type: recall_at_10
value: 41.329
- type: recall_at_100
value: 66.094
- type: recall_at_1000
value: 84.511
- type: recall_at_3
value: 30.61
- type: recall_at_5
value: 34.213
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackTexRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 14.71
- type: map_at_10
value: 20.965
- type: map_at_100
value: 21.994
- type: map_at_1000
value: 22.133
- type: map_at_3
value: 18.741
- type: map_at_5
value: 19.951
- type: mrr_at_1
value: 18.307000000000002
- type: mrr_at_10
value: 24.66
- type: mrr_at_100
value: 25.540000000000003
- type: mrr_at_1000
value: 25.629
- type: mrr_at_3
value: 22.511
- type: mrr_at_5
value: 23.72
- type: ndcg_at_1
value: 18.307000000000002
- type: ndcg_at_10
value: 25.153
- type: ndcg_at_100
value: 30.229
- type: ndcg_at_1000
value: 33.623
- type: ndcg_at_3
value: 21.203
- type: ndcg_at_5
value: 23.006999999999998
- type: precision_at_1
value: 18.307000000000002
- type: precision_at_10
value: 4.725
- type: precision_at_100
value: 0.8659999999999999
- type: precision_at_1000
value: 0.133
- type: precision_at_3
value: 10.14
- type: precision_at_5
value: 7.481
- type: recall_at_1
value: 14.71
- type: recall_at_10
value: 34.087
- type: recall_at_100
value: 57.147999999999996
- type: recall_at_1000
value: 81.777
- type: recall_at_3
value: 22.996
- type: recall_at_5
value: 27.73
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackUnixRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 23.472
- type: map_at_10
value: 32.699
- type: map_at_100
value: 33.867000000000004
- type: map_at_1000
value: 33.967000000000006
- type: map_at_3
value: 29.718
- type: map_at_5
value: 31.345
- type: mrr_at_1
value: 28.265
- type: mrr_at_10
value: 36.945
- type: mrr_at_100
value: 37.794
- type: mrr_at_1000
value: 37.857
- type: mrr_at_3
value: 34.266000000000005
- type: mrr_at_5
value: 35.768
- type: ndcg_at_1
value: 28.265
- type: ndcg_at_10
value: 38.35
- type: ndcg_at_100
value: 43.739
- type: ndcg_at_1000
value: 46.087
- type: ndcg_at_3
value: 33.004
- type: ndcg_at_5
value: 35.411
- type: precision_at_1
value: 28.265
- type: precision_at_10
value: 6.715999999999999
- type: precision_at_100
value: 1.059
- type: precision_at_1000
value: 0.13799999999999998
- type: precision_at_3
value: 15.299
- type: precision_at_5
value: 10.951
- type: recall_at_1
value: 23.472
- type: recall_at_10
value: 51.413
- type: recall_at_100
value: 75.17
- type: recall_at_1000
value: 91.577
- type: recall_at_3
value: 36.651
- type: recall_at_5
value: 42.814
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackWebmastersRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 23.666
- type: map_at_10
value: 32.963
- type: map_at_100
value: 34.544999999999995
- type: map_at_1000
value: 34.792
- type: map_at_3
value: 29.74
- type: map_at_5
value: 31.5
- type: mrr_at_1
value: 29.051
- type: mrr_at_10
value: 38.013000000000005
- type: mrr_at_100
value: 38.997
- type: mrr_at_1000
value: 39.055
- type: mrr_at_3
value: 34.947
- type: mrr_at_5
value: 36.815
- type: ndcg_at_1
value: 29.051
- type: ndcg_at_10
value: 39.361000000000004
- type: ndcg_at_100
value: 45.186
- type: ndcg_at_1000
value: 47.867
- type: ndcg_at_3
value: 33.797
- type: ndcg_at_5
value: 36.456
- type: precision_at_1
value: 29.051
- type: precision_at_10
value: 7.668
- type: precision_at_100
value: 1.532
- type: precision_at_1000
value: 0.247
- type: precision_at_3
value: 15.876000000000001
- type: precision_at_5
value: 11.779
- type: recall_at_1
value: 23.666
- type: recall_at_10
value: 51.858000000000004
- type: recall_at_100
value: 77.805
- type: recall_at_1000
value: 94.504
- type: recall_at_3
value: 36.207
- type: recall_at_5
value: 43.094
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackWordpressRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 15.662
- type: map_at_10
value: 23.594
- type: map_at_100
value: 24.593999999999998
- type: map_at_1000
value: 24.694
- type: map_at_3
value: 20.925
- type: map_at_5
value: 22.817999999999998
- type: mrr_at_1
value: 17.375
- type: mrr_at_10
value: 25.734
- type: mrr_at_100
value: 26.586
- type: mrr_at_1000
value: 26.671
- type: mrr_at_3
value: 23.044
- type: mrr_at_5
value: 24.975
- type: ndcg_at_1
value: 17.375
- type: ndcg_at_10
value: 28.186
- type: ndcg_at_100
value: 33.436
- type: ndcg_at_1000
value: 36.203
- type: ndcg_at_3
value: 23.152
- type: ndcg_at_5
value: 26.397
- type: precision_at_1
value: 17.375
- type: precision_at_10
value: 4.677
- type: precision_at_100
value: 0.786
- type: precision_at_1000
value: 0.109
- type: precision_at_3
value: 10.351
- type: precision_at_5
value: 7.985
- type: recall_at_1
value: 15.662
- type: recall_at_10
value: 40.066
- type: recall_at_100
value: 65.006
- type: recall_at_1000
value: 85.94000000000001
- type: recall_at_3
value: 27.400000000000002
- type: recall_at_5
value: 35.002
- task:
type: Retrieval
dataset:
type: climate-fever
name: MTEB ClimateFEVER
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 8.853
- type: map_at_10
value: 15.568000000000001
- type: map_at_100
value: 17.383000000000003
- type: map_at_1000
value: 17.584
- type: map_at_3
value: 12.561
- type: map_at_5
value: 14.056
- type: mrr_at_1
value: 18.958
- type: mrr_at_10
value: 28.288000000000004
- type: mrr_at_100
value: 29.432000000000002
- type: mrr_at_1000
value: 29.498
- type: mrr_at_3
value: 25.049
- type: mrr_at_5
value: 26.857
- type: ndcg_at_1
value: 18.958
- type: ndcg_at_10
value: 22.21
- type: ndcg_at_100
value: 29.596
- type: ndcg_at_1000
value: 33.583
- type: ndcg_at_3
value: 16.994999999999997
- type: ndcg_at_5
value: 18.95
- type: precision_at_1
value: 18.958
- type: precision_at_10
value: 7.192
- type: precision_at_100
value: 1.5
- type: precision_at_1000
value: 0.22399999999999998
- type: precision_at_3
value: 12.573
- type: precision_at_5
value: 10.202
- type: recall_at_1
value: 8.853
- type: recall_at_10
value: 28.087
- type: recall_at_100
value: 53.701
- type: recall_at_1000
value: 76.29899999999999
- type: recall_at_3
value: 15.913
- type: recall_at_5
value: 20.658
- task:
type: Retrieval
dataset:
type: dbpedia-entity
name: MTEB DBPedia
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 9.077
- type: map_at_10
value: 20.788999999999998
- type: map_at_100
value: 30.429000000000002
- type: map_at_1000
value: 32.143
- type: map_at_3
value: 14.692
- type: map_at_5
value: 17.139
- type: mrr_at_1
value: 70.75
- type: mrr_at_10
value: 78.036
- type: mrr_at_100
value: 78.401
- type: mrr_at_1000
value: 78.404
- type: mrr_at_3
value: 76.75
- type: mrr_at_5
value: 77.47500000000001
- type: ndcg_at_1
value: 58.12500000000001
- type: ndcg_at_10
value: 44.015
- type: ndcg_at_100
value: 49.247
- type: ndcg_at_1000
value: 56.211999999999996
- type: ndcg_at_3
value: 49.151
- type: ndcg_at_5
value: 46.195
- type: precision_at_1
value: 70.75
- type: precision_at_10
value: 35.5
- type: precision_at_100
value: 11.355
- type: precision_at_1000
value: 2.1950000000000003
- type: precision_at_3
value: 53.083000000000006
- type: precision_at_5
value: 44.800000000000004
- type: recall_at_1
value: 9.077
- type: recall_at_10
value: 26.259
- type: recall_at_100
value: 56.547000000000004
- type: recall_at_1000
value: 78.551
- type: recall_at_3
value: 16.162000000000003
- type: recall_at_5
value: 19.753999999999998
- task:
type: Classification
dataset:
type: mteb/emotion
name: MTEB EmotionClassification
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 49.44500000000001
- type: f1
value: 44.67067691783401
- task:
type: Retrieval
dataset:
type: fever
name: MTEB FEVER
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 68.182
- type: map_at_10
value: 78.223
- type: map_at_100
value: 78.498
- type: map_at_1000
value: 78.512
- type: map_at_3
value: 76.71
- type: map_at_5
value: 77.725
- type: mrr_at_1
value: 73.177
- type: mrr_at_10
value: 82.513
- type: mrr_at_100
value: 82.633
- type: mrr_at_1000
value: 82.635
- type: mrr_at_3
value: 81.376
- type: mrr_at_5
value: 82.182
- type: ndcg_at_1
value: 73.177
- type: ndcg_at_10
value: 82.829
- type: ndcg_at_100
value: 83.84
- type: ndcg_at_1000
value: 84.07900000000001
- type: ndcg_at_3
value: 80.303
- type: ndcg_at_5
value: 81.846
- type: precision_at_1
value: 73.177
- type: precision_at_10
value: 10.241999999999999
- type: precision_at_100
value: 1.099
- type: precision_at_1000
value: 0.11399999999999999
- type: precision_at_3
value: 31.247999999999998
- type: precision_at_5
value: 19.697
- type: recall_at_1
value: 68.182
- type: recall_at_10
value: 92.657
- type: recall_at_100
value: 96.709
- type: recall_at_1000
value: 98.184
- type: recall_at_3
value: 85.9
- type: recall_at_5
value: 89.755
- task:
type: Retrieval
dataset:
type: fiqa
name: MTEB FiQA2018
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 21.108
- type: map_at_10
value: 33.342
- type: map_at_100
value: 35.281
- type: map_at_1000
value: 35.478
- type: map_at_3
value: 29.067
- type: map_at_5
value: 31.563000000000002
- type: mrr_at_1
value: 41.667
- type: mrr_at_10
value: 49.913000000000004
- type: mrr_at_100
value: 50.724000000000004
- type: mrr_at_1000
value: 50.766
- type: mrr_at_3
value: 47.504999999999995
- type: mrr_at_5
value: 49.033
- type: ndcg_at_1
value: 41.667
- type: ndcg_at_10
value: 41.144
- type: ndcg_at_100
value: 48.326
- type: ndcg_at_1000
value: 51.486
- type: ndcg_at_3
value: 37.486999999999995
- type: ndcg_at_5
value: 38.78
- type: precision_at_1
value: 41.667
- type: precision_at_10
value: 11.358
- type: precision_at_100
value: 1.873
- type: precision_at_1000
value: 0.244
- type: precision_at_3
value: 25
- type: precision_at_5
value: 18.519
- type: recall_at_1
value: 21.108
- type: recall_at_10
value: 47.249
- type: recall_at_100
value: 74.52
- type: recall_at_1000
value: 93.31
- type: recall_at_3
value: 33.271
- type: recall_at_5
value: 39.723000000000006
- task:
type: Retrieval
dataset:
type: hotpotqa
name: MTEB HotpotQA
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 40.317
- type: map_at_10
value: 64.861
- type: map_at_100
value: 65.697
- type: map_at_1000
value: 65.755
- type: map_at_3
value: 61.258
- type: map_at_5
value: 63.590999999999994
- type: mrr_at_1
value: 80.635
- type: mrr_at_10
value: 86.528
- type: mrr_at_100
value: 86.66199999999999
- type: mrr_at_1000
value: 86.666
- type: mrr_at_3
value: 85.744
- type: mrr_at_5
value: 86.24300000000001
- type: ndcg_at_1
value: 80.635
- type: ndcg_at_10
value: 73.13199999999999
- type: ndcg_at_100
value: 75.927
- type: ndcg_at_1000
value: 76.976
- type: ndcg_at_3
value: 68.241
- type: ndcg_at_5
value: 71.071
- type: precision_at_1
value: 80.635
- type: precision_at_10
value: 15.326
- type: precision_at_100
value: 1.7500000000000002
- type: precision_at_1000
value: 0.189
- type: precision_at_3
value: 43.961
- type: precision_at_5
value: 28.599999999999998
- type: recall_at_1
value: 40.317
- type: recall_at_10
value: 76.631
- type: recall_at_100
value: 87.495
- type: recall_at_1000
value: 94.362
- type: recall_at_3
value: 65.94200000000001
- type: recall_at_5
value: 71.499
- task:
type: Classification
dataset:
type: mteb/imdb
name: MTEB ImdbClassification
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 91.686
- type: ap
value: 87.5577120393173
- type: f1
value: 91.6629447355139
- task:
type: Retrieval
dataset:
type: msmarco
name: MTEB MSMARCO
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 23.702
- type: map_at_10
value: 36.414
- type: map_at_100
value: 37.561
- type: map_at_1000
value: 37.605
- type: map_at_3
value: 32.456
- type: map_at_5
value: 34.827000000000005
- type: mrr_at_1
value: 24.355
- type: mrr_at_10
value: 37.01
- type: mrr_at_100
value: 38.085
- type: mrr_at_1000
value: 38.123000000000005
- type: mrr_at_3
value: 33.117999999999995
- type: mrr_at_5
value: 35.452
- type: ndcg_at_1
value: 24.384
- type: ndcg_at_10
value: 43.456
- type: ndcg_at_100
value: 48.892
- type: ndcg_at_1000
value: 49.964
- type: ndcg_at_3
value: 35.475
- type: ndcg_at_5
value: 39.711
- type: precision_at_1
value: 24.384
- type: precision_at_10
value: 6.7940000000000005
- type: precision_at_100
value: 0.951
- type: precision_at_1000
value: 0.104
- type: precision_at_3
value: 15.052999999999999
- type: precision_at_5
value: 11.189
- type: recall_at_1
value: 23.702
- type: recall_at_10
value: 65.057
- type: recall_at_100
value: 90.021
- type: recall_at_1000
value: 98.142
- type: recall_at_3
value: 43.551
- type: recall_at_5
value: 53.738
- task:
type: Classification
dataset:
type: mteb/mtop_domain
name: MTEB MTOPDomainClassification (en)
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 94.62380300957591
- type: f1
value: 94.49871222100734
- task:
type: Classification
dataset:
type: mteb/mtop_intent
name: MTEB MTOPIntentClassification (en)
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 77.14090287277702
- type: f1
value: 60.32101258220515
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (en)
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 73.84330867518494
- type: f1
value: 71.92248688515255
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (en)
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 78.10692669804976
- type: f1
value: 77.9904839122866
- task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-p2p
name: MTEB MedrxivClusteringP2P
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 31.822988923078444
- task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-s2s
name: MTEB MedrxivClusteringS2S
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 30.38394880253403
- task:
type: Reranking
dataset:
type: mteb/mind_small
name: MTEB MindSmallReranking
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 31.82504612539082
- type: mrr
value: 32.84462298174977
- task:
type: Retrieval
dataset:
type: nfcorpus
name: MTEB NFCorpus
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 6.029
- type: map_at_10
value: 14.088999999999999
- type: map_at_100
value: 17.601
- type: map_at_1000
value: 19.144
- type: map_at_3
value: 10.156
- type: map_at_5
value: 11.892
- type: mrr_at_1
value: 46.44
- type: mrr_at_10
value: 56.596999999999994
- type: mrr_at_100
value: 57.11000000000001
- type: mrr_at_1000
value: 57.14
- type: mrr_at_3
value: 54.334
- type: mrr_at_5
value: 55.774
- type: ndcg_at_1
value: 44.891999999999996
- type: ndcg_at_10
value: 37.134
- type: ndcg_at_100
value: 33.652
- type: ndcg_at_1000
value: 42.548
- type: ndcg_at_3
value: 41.851
- type: ndcg_at_5
value: 39.842
- type: precision_at_1
value: 46.44
- type: precision_at_10
value: 27.647
- type: precision_at_100
value: 8.309999999999999
- type: precision_at_1000
value: 2.146
- type: precision_at_3
value: 39.422000000000004
- type: precision_at_5
value: 34.675
- type: recall_at_1
value: 6.029
- type: recall_at_10
value: 18.907
- type: recall_at_100
value: 33.76
- type: recall_at_1000
value: 65.14999999999999
- type: recall_at_3
value: 11.584999999999999
- type: recall_at_5
value: 14.626
- task:
type: Retrieval
dataset:
type: nq
name: MTEB NQ
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 39.373000000000005
- type: map_at_10
value: 55.836
- type: map_at_100
value: 56.611999999999995
- type: map_at_1000
value: 56.63
- type: map_at_3
value: 51.747
- type: map_at_5
value: 54.337999999999994
- type: mrr_at_1
value: 44.147999999999996
- type: mrr_at_10
value: 58.42699999999999
- type: mrr_at_100
value: 58.902
- type: mrr_at_1000
value: 58.914
- type: mrr_at_3
value: 55.156000000000006
- type: mrr_at_5
value: 57.291000000000004
- type: ndcg_at_1
value: 44.119
- type: ndcg_at_10
value: 63.444
- type: ndcg_at_100
value: 66.40599999999999
- type: ndcg_at_1000
value: 66.822
- type: ndcg_at_3
value: 55.962
- type: ndcg_at_5
value: 60.228
- type: precision_at_1
value: 44.119
- type: precision_at_10
value: 10.006
- type: precision_at_100
value: 1.17
- type: precision_at_1000
value: 0.121
- type: precision_at_3
value: 25.135
- type: precision_at_5
value: 17.59
- type: recall_at_1
value: 39.373000000000005
- type: recall_at_10
value: 83.78999999999999
- type: recall_at_100
value: 96.246
- type: recall_at_1000
value: 99.324
- type: recall_at_3
value: 64.71900000000001
- type: recall_at_5
value: 74.508
- task:
type: Retrieval
dataset:
type: quora
name: MTEB QuoraRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 69.199
- type: map_at_10
value: 82.892
- type: map_at_100
value: 83.578
- type: map_at_1000
value: 83.598
- type: map_at_3
value: 79.948
- type: map_at_5
value: 81.779
- type: mrr_at_1
value: 79.67
- type: mrr_at_10
value: 86.115
- type: mrr_at_100
value: 86.249
- type: mrr_at_1000
value: 86.251
- type: mrr_at_3
value: 85.08200000000001
- type: mrr_at_5
value: 85.783
- type: ndcg_at_1
value: 79.67
- type: ndcg_at_10
value: 86.839
- type: ndcg_at_100
value: 88.252
- type: ndcg_at_1000
value: 88.401
- type: ndcg_at_3
value: 83.86200000000001
- type: ndcg_at_5
value: 85.473
- type: precision_at_1
value: 79.67
- type: precision_at_10
value: 13.19
- type: precision_at_100
value: 1.521
- type: precision_at_1000
value: 0.157
- type: precision_at_3
value: 36.677
- type: precision_at_5
value: 24.118000000000002
- type: recall_at_1
value: 69.199
- type: recall_at_10
value: 94.321
- type: recall_at_100
value: 99.20400000000001
- type: recall_at_1000
value: 99.947
- type: recall_at_3
value: 85.787
- type: recall_at_5
value: 90.365
- task:
type: Clustering
dataset:
type: mteb/reddit-clustering
name: MTEB RedditClustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 55.82810046856353
- task:
type: Clustering
dataset:
type: mteb/reddit-clustering-p2p
name: MTEB RedditClusteringP2P
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 63.38132611783628
- task:
type: Retrieval
dataset:
type: scidocs
name: MTEB SCIDOCS
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 5.127000000000001
- type: map_at_10
value: 12.235
- type: map_at_100
value: 14.417
- type: map_at_1000
value: 14.75
- type: map_at_3
value: 8.906
- type: map_at_5
value: 10.591000000000001
- type: mrr_at_1
value: 25.2
- type: mrr_at_10
value: 35.879
- type: mrr_at_100
value: 36.935
- type: mrr_at_1000
value: 36.997
- type: mrr_at_3
value: 32.783
- type: mrr_at_5
value: 34.367999999999995
- type: ndcg_at_1
value: 25.2
- type: ndcg_at_10
value: 20.509
- type: ndcg_at_100
value: 28.67
- type: ndcg_at_1000
value: 34.42
- type: ndcg_at_3
value: 19.948
- type: ndcg_at_5
value: 17.166
- type: precision_at_1
value: 25.2
- type: precision_at_10
value: 10.440000000000001
- type: precision_at_100
value: 2.214
- type: precision_at_1000
value: 0.359
- type: precision_at_3
value: 18.533
- type: precision_at_5
value: 14.860000000000001
- type: recall_at_1
value: 5.127000000000001
- type: recall_at_10
value: 21.147
- type: recall_at_100
value: 44.946999999999996
- type: recall_at_1000
value: 72.89
- type: recall_at_3
value: 11.277
- type: recall_at_5
value: 15.042
- task:
type: STS
dataset:
type: mteb/sickr-sts
name: MTEB SICK-R
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 83.0373011786213
- type: cos_sim_spearman
value: 79.27889560856613
- type: euclidean_pearson
value: 80.31186315495655
- type: euclidean_spearman
value: 79.41630415280811
- type: manhattan_pearson
value: 80.31755140442013
- type: manhattan_spearman
value: 79.43069870027611
- task:
type: STS
dataset:
type: mteb/sts12-sts
name: MTEB STS12
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 84.8659751342045
- type: cos_sim_spearman
value: 76.95377612997667
- type: euclidean_pearson
value: 81.24552945497848
- type: euclidean_spearman
value: 77.18236963555253
- type: manhattan_pearson
value: 81.26477607759037
- type: manhattan_spearman
value: 77.13821753062756
- task:
type: STS
dataset:
type: mteb/sts13-sts
name: MTEB STS13
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 83.34597139044875
- type: cos_sim_spearman
value: 84.124169425592
- type: euclidean_pearson
value: 83.68590721511401
- type: euclidean_spearman
value: 84.18846190846398
- type: manhattan_pearson
value: 83.57630235061498
- type: manhattan_spearman
value: 84.10244043726902
- task:
type: STS
dataset:
type: mteb/sts14-sts
name: MTEB STS14
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 82.67641885599572
- type: cos_sim_spearman
value: 80.46450725650428
- type: euclidean_pearson
value: 81.61645042715865
- type: euclidean_spearman
value: 80.61418394236874
- type: manhattan_pearson
value: 81.55712034928871
- type: manhattan_spearman
value: 80.57905670523951
- task:
type: STS
dataset:
type: mteb/sts15-sts
name: MTEB STS15
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 88.86650310886782
- type: cos_sim_spearman
value: 89.76081629222328
- type: euclidean_pearson
value: 89.1530747029954
- type: euclidean_spearman
value: 89.80990657280248
- type: manhattan_pearson
value: 89.10640563278132
- type: manhattan_spearman
value: 89.76282108434047
- task:
type: STS
dataset:
type: mteb/sts16-sts
name: MTEB STS16
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 83.93864027911118
- type: cos_sim_spearman
value: 85.47096193999023
- type: euclidean_pearson
value: 85.03141840870533
- type: euclidean_spearman
value: 85.43124029598181
- type: manhattan_pearson
value: 84.99002664393512
- type: manhattan_spearman
value: 85.39169195120834
- task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (en-en)
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 88.7045343749832
- type: cos_sim_spearman
value: 89.03262221146677
- type: euclidean_pearson
value: 89.56078218264365
- type: euclidean_spearman
value: 89.17827006466868
- type: manhattan_pearson
value: 89.52717595468582
- type: manhattan_spearman
value: 89.15878115952923
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (en)
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 64.20191302875551
- type: cos_sim_spearman
value: 64.11446552557646
- type: euclidean_pearson
value: 64.6918197393619
- type: euclidean_spearman
value: 63.440182631197764
- type: manhattan_pearson
value: 64.55692904121835
- type: manhattan_spearman
value: 63.424877742756266
- task:
type: STS
dataset:
type: mteb/stsbenchmark-sts
name: MTEB STSBenchmark
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 86.37793104662344
- type: cos_sim_spearman
value: 87.7357802629067
- type: euclidean_pearson
value: 87.4286301545109
- type: euclidean_spearman
value: 87.78452920777421
- type: manhattan_pearson
value: 87.42445169331255
- type: manhattan_spearman
value: 87.78537677249598
- task:
type: Reranking
dataset:
type: mteb/scidocs-reranking
name: MTEB SciDocsRR
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 84.31465405081792
- type: mrr
value: 95.7173781193389
- task:
type: Retrieval
dataset:
type: scifact
name: MTEB SciFact
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 57.760999999999996
- type: map_at_10
value: 67.904
- type: map_at_100
value: 68.539
- type: map_at_1000
value: 68.562
- type: map_at_3
value: 65.415
- type: map_at_5
value: 66.788
- type: mrr_at_1
value: 60.333000000000006
- type: mrr_at_10
value: 68.797
- type: mrr_at_100
value: 69.236
- type: mrr_at_1000
value: 69.257
- type: mrr_at_3
value: 66.667
- type: mrr_at_5
value: 67.967
- type: ndcg_at_1
value: 60.333000000000006
- type: ndcg_at_10
value: 72.24199999999999
- type: ndcg_at_100
value: 74.86
- type: ndcg_at_1000
value: 75.354
- type: ndcg_at_3
value: 67.93400000000001
- type: ndcg_at_5
value: 70.02199999999999
- type: precision_at_1
value: 60.333000000000006
- type: precision_at_10
value: 9.533
- type: precision_at_100
value: 1.09
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 26.778000000000002
- type: precision_at_5
value: 17.467
- type: recall_at_1
value: 57.760999999999996
- type: recall_at_10
value: 84.383
- type: recall_at_100
value: 96.267
- type: recall_at_1000
value: 100
- type: recall_at_3
value: 72.628
- type: recall_at_5
value: 78.094
- task:
type: PairClassification
dataset:
type: mteb/sprintduplicatequestions-pairclassification
name: MTEB SprintDuplicateQuestions
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.8029702970297
- type: cos_sim_ap
value: 94.9210324173411
- type: cos_sim_f1
value: 89.8521162672106
- type: cos_sim_precision
value: 91.67533818938605
- type: cos_sim_recall
value: 88.1
- type: dot_accuracy
value: 99.69504950495049
- type: dot_ap
value: 90.4919719146181
- type: dot_f1
value: 84.72289156626506
- type: dot_precision
value: 81.76744186046511
- type: dot_recall
value: 87.9
- type: euclidean_accuracy
value: 99.79702970297029
- type: euclidean_ap
value: 94.87827463795753
- type: euclidean_f1
value: 89.55680081507896
- type: euclidean_precision
value: 91.27725856697819
- type: euclidean_recall
value: 87.9
- type: manhattan_accuracy
value: 99.7990099009901
- type: manhattan_ap
value: 94.87587025149682
- type: manhattan_f1
value: 89.76298537569339
- type: manhattan_precision
value: 90.53916581892166
- type: manhattan_recall
value: 89
- type: max_accuracy
value: 99.8029702970297
- type: max_ap
value: 94.9210324173411
- type: max_f1
value: 89.8521162672106
- task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering
name: MTEB StackExchangeClustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 65.92385753948724
- task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering-p2p
name: MTEB StackExchangeClusteringP2P
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 33.671756975431144
- task:
type: Reranking
dataset:
type: mteb/stackoverflowdupquestions-reranking
name: MTEB StackOverflowDupQuestions
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 50.677928036739004
- type: mrr
value: 51.56413133435193
- task:
type: Summarization
dataset:
type: mteb/summeval
name: MTEB SummEval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 30.523589340819683
- type: cos_sim_spearman
value: 30.187407518823235
- type: dot_pearson
value: 29.039713969699015
- type: dot_spearman
value: 29.114740651155508
- task:
type: Retrieval
dataset:
type: trec-covid
name: MTEB TRECCOVID
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.211
- type: map_at_10
value: 1.6199999999999999
- type: map_at_100
value: 8.658000000000001
- type: map_at_1000
value: 21.538
- type: map_at_3
value: 0.575
- type: map_at_5
value: 0.919
- type: mrr_at_1
value: 78
- type: mrr_at_10
value: 86.18599999999999
- type: mrr_at_100
value: 86.18599999999999
- type: mrr_at_1000
value: 86.18599999999999
- type: mrr_at_3
value: 85
- type: mrr_at_5
value: 85.9
- type: ndcg_at_1
value: 74
- type: ndcg_at_10
value: 66.542
- type: ndcg_at_100
value: 50.163999999999994
- type: ndcg_at_1000
value: 45.696999999999996
- type: ndcg_at_3
value: 71.531
- type: ndcg_at_5
value: 70.45
- type: precision_at_1
value: 78
- type: precision_at_10
value: 69.39999999999999
- type: precision_at_100
value: 51.06
- type: precision_at_1000
value: 20.022000000000002
- type: precision_at_3
value: 76
- type: precision_at_5
value: 74.8
- type: recall_at_1
value: 0.211
- type: recall_at_10
value: 1.813
- type: recall_at_100
value: 12.098
- type: recall_at_1000
value: 42.618
- type: recall_at_3
value: 0.603
- type: recall_at_5
value: 0.987
- task:
type: Retrieval
dataset:
type: webis-touche2020
name: MTEB Touche2020
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 2.2079999999999997
- type: map_at_10
value: 7.777000000000001
- type: map_at_100
value: 12.825000000000001
- type: map_at_1000
value: 14.196
- type: map_at_3
value: 4.285
- type: map_at_5
value: 6.177
- type: mrr_at_1
value: 30.612000000000002
- type: mrr_at_10
value: 42.635
- type: mrr_at_100
value: 43.955
- type: mrr_at_1000
value: 43.955
- type: mrr_at_3
value: 38.435
- type: mrr_at_5
value: 41.088
- type: ndcg_at_1
value: 28.571
- type: ndcg_at_10
value: 20.666999999999998
- type: ndcg_at_100
value: 31.840000000000003
- type: ndcg_at_1000
value: 43.191
- type: ndcg_at_3
value: 23.45
- type: ndcg_at_5
value: 22.994
- type: precision_at_1
value: 30.612000000000002
- type: precision_at_10
value: 17.959
- type: precision_at_100
value: 6.755
- type: precision_at_1000
value: 1.4200000000000002
- type: precision_at_3
value: 23.810000000000002
- type: precision_at_5
value: 23.673
- type: recall_at_1
value: 2.2079999999999997
- type: recall_at_10
value: 13.144
- type: recall_at_100
value: 42.491
- type: recall_at_1000
value: 77.04299999999999
- type: recall_at_3
value: 5.3469999999999995
- type: recall_at_5
value: 9.139
- task:
type: Classification
dataset:
type: mteb/toxic_conversations_50k
name: MTEB ToxicConversationsClassification
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 70.9044
- type: ap
value: 14.625783489340755
- type: f1
value: 54.814936562590546
- task:
type: Classification
dataset:
type: mteb/tweet_sentiment_extraction
name: MTEB TweetSentimentExtractionClassification
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 60.94227504244483
- type: f1
value: 61.22516038508854
- task:
type: Clustering
dataset:
type: mteb/twentynewsgroups-clustering
name: MTEB TwentyNewsgroupsClustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 49.602409155145864
- task:
type: PairClassification
dataset:
type: mteb/twittersemeval2015-pairclassification
name: MTEB TwitterSemEval2015
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 86.94641473445789
- type: cos_sim_ap
value: 76.91572747061197
- type: cos_sim_f1
value: 70.14348097317529
- type: cos_sim_precision
value: 66.53254437869822
- type: cos_sim_recall
value: 74.1688654353562
- type: dot_accuracy
value: 84.80061989628658
- type: dot_ap
value: 70.7952548895177
- type: dot_f1
value: 65.44780728844965
- type: dot_precision
value: 61.53310104529617
- type: dot_recall
value: 69.89445910290237
- type: euclidean_accuracy
value: 86.94641473445789
- type: euclidean_ap
value: 76.80774009393652
- type: euclidean_f1
value: 70.30522503879979
- type: euclidean_precision
value: 68.94977168949772
- type: euclidean_recall
value: 71.71503957783642
- type: manhattan_accuracy
value: 86.8629671574179
- type: manhattan_ap
value: 76.76518632600317
- type: manhattan_f1
value: 70.16056518946692
- type: manhattan_precision
value: 68.360450563204
- type: manhattan_recall
value: 72.0580474934037
- type: max_accuracy
value: 86.94641473445789
- type: max_ap
value: 76.91572747061197
- type: max_f1
value: 70.30522503879979
- task:
type: PairClassification
dataset:
type: mteb/twitterurlcorpus-pairclassification
name: MTEB TwitterURLCorpus
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 89.10428066907285
- type: cos_sim_ap
value: 86.25114759921435
- type: cos_sim_f1
value: 78.37857884586856
- type: cos_sim_precision
value: 75.60818546078993
- type: cos_sim_recall
value: 81.35971666153372
- type: dot_accuracy
value: 87.41995575736406
- type: dot_ap
value: 81.51838010086782
- type: dot_f1
value: 74.77398015435503
- type: dot_precision
value: 71.53002390662354
- type: dot_recall
value: 78.32614721281182
- type: euclidean_accuracy
value: 89.12368533395428
- type: euclidean_ap
value: 86.33456799874504
- type: euclidean_f1
value: 78.45496750232127
- type: euclidean_precision
value: 75.78388462366364
- type: euclidean_recall
value: 81.32121958731136
- type: manhattan_accuracy
value: 89.10622113556099
- type: manhattan_ap
value: 86.31215061745333
- type: manhattan_f1
value: 78.40684906011539
- type: manhattan_precision
value: 75.89536643366722
- type: manhattan_recall
value: 81.09023714197721
- type: max_accuracy
value: 89.12368533395428
- type: max_ap
value: 86.33456799874504
- type: max_f1
value: 78.45496750232127
language:
- en
license: mit
---
# E5-large-v2
[Text Embeddings by Weakly-Supervised Contrastive Pre-training](https://arxiv.org/pdf/2212.03533.pdf).
Liang Wang, Nan Yang, Xiaolong Huang, Binxing Jiao, Linjun Yang, Daxin Jiang, Rangan Majumder, Furu Wei, arXiv 2022
This model has 24 layers and the embedding size is 1024.
## Usage
Below is an example to encode queries and passages from the MS-MARCO passage ranking dataset.
```python
import torch.nn.functional as F
from torch import Tensor
from transformers import AutoTokenizer, AutoModel
def average_pool(last_hidden_states: Tensor,
attention_mask: Tensor) -> Tensor:
last_hidden = last_hidden_states.masked_fill(~attention_mask[..., None].bool(), 0.0)
return last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None]
# Each input text should start with "query: " or "passage: ".
# For tasks other than retrieval, you can simply use the "query: " prefix.
input_texts = ['query: how much protein should a female eat',
'query: summit define',
"passage: As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.",
"passage: Definition of summit for English Language Learners. : 1 the highest point of a mountain : the top of a mountain. : 2 the highest level. : 3 a meeting or series of meetings between the leaders of two or more governments."]
tokenizer = AutoTokenizer.from_pretrained('intfloat/e5-large-v2')
model = AutoModel.from_pretrained('intfloat/e5-large-v2')
# Tokenize the input texts
batch_dict = tokenizer(input_texts, max_length=512, padding=True, truncation=True, return_tensors='pt')
outputs = model(**batch_dict)
embeddings = average_pool(outputs.last_hidden_state, batch_dict['attention_mask'])
# (Optionally) normalize embeddings
embeddings = F.normalize(embeddings, p=2, dim=1)
scores = (embeddings[:2] @ embeddings[2:].T) * 100
print(scores.tolist())
```
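The printed scores are scaled cosine similarities between each query and each passage, since the embeddings are L2-normalized before the dot product. If the checkpoint is also compatible with the `sentence-transformers` library (an assumption, not stated above), the same embeddings can be produced with less boilerplate; the snippet below is a minimal sketch under that assumption.
```python
# Minimal sketch, assuming sentence-transformers compatibility (not stated in the card above).
from sentence_transformers import SentenceTransformer

model = SentenceTransformer('intfloat/e5-large-v2')
texts = ['query: how much protein should a female eat',
         'passage: As a general guideline, the CDC recommends around 46 grams of protein per day for women ages 19 to 70.']
# normalize_embeddings=True mirrors the F.normalize step in the example above.
embeddings = model.encode(texts, normalize_embeddings=True)
print(embeddings[:1] @ embeddings[1:].T)
```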
## Training Details
Please refer to our paper at [https://arxiv.org/pdf/2212.03533.pdf](https://arxiv.org/pdf/2212.03533.pdf).
## Benchmark Evaluation
Check out [unilm/e5](https://github.com/microsoft/unilm/tree/master/e5) to reproduce evaluation results
on the [BEIR](https://arxiv.org/abs/2104.08663) and [MTEB benchmark](https://arxiv.org/abs/2210.07316).
## Citation
If you find our paper or models helpful, please consider citing them as follows:
```
@article{wang2022text,
title={Text Embeddings by Weakly-Supervised Contrastive Pre-training},
author={Wang, Liang and Yang, Nan and Huang, Xiaolong and Jiao, Binxing and Yang, Linjun and Jiang, Daxin and Majumder, Rangan and Wei, Furu},
journal={arXiv preprint arXiv:2212.03533},
year={2022}
}
```
## Limitations
This model only works for English texts. Long texts will be truncated to at most 512 tokens.
|
Ocelotr/speecht5_tts-sil
|
Ocelotr
| 2023-06-28T20:18:24Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"speecht5",
"text-to-audio",
"ara",
"generated_from_trainer",
"ar",
"dataset:SDA_CLEAN_NAJDI",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-to-audio
| 2023-06-26T16:45:53Z |
---
language:
- ar
license: mit
tags:
- ara
- generated_from_trainer
datasets:
- SDA_CLEAN_NAJDI
model-index:
- name: SpeechT5 TTS
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SpeechT5 TTS
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the SDA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4853
## Model description
More information needed
## Intended uses & limitations
More information needed
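The original card leaves this section open. As a placeholder, here is a minimal inference sketch; it assumes the standard `transformers` SpeechT5 API, and the zero speaker embedding below is only a stand-in for a real 512-dimensional x-vector of the target speaker.
```python
# Minimal inference sketch (not part of the original card); the speaker embedding is a placeholder.
import torch
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

processor = SpeechT5Processor.from_pretrained("Ocelotr/speecht5_tts-sil")
model = SpeechT5ForTextToSpeech.from_pretrained("Ocelotr/speecht5_tts-sil")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="example text to synthesize", return_tensors="pt")
speaker_embeddings = torch.zeros((1, 512))  # placeholder; supply a real x-vector here
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
# `speech` is a 1-D 16 kHz waveform tensor that can be written out with soundfile.
```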
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 40000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.5703 | 1.49 | 1000 | 0.5289 |
| 0.541 | 2.98 | 2000 | 0.5131 |
| 0.5487 | 4.46 | 3000 | 0.5059 |
| 0.5232 | 5.95 | 4000 | 0.5011 |
| 0.5295 | 7.44 | 5000 | 0.4979 |
| 0.5257 | 8.93 | 6000 | 0.4970 |
| 0.5091 | 10.42 | 7000 | 0.4905 |
| 0.5141 | 11.9 | 8000 | 0.4893 |
| 0.5033 | 13.39 | 9000 | 0.4865 |
| 0.507 | 14.88 | 10000 | 0.4850 |
| 0.502 | 16.37 | 11000 | 0.4830 |
| 0.497 | 17.86 | 12000 | 0.4823 |
| 0.4974 | 19.35 | 13000 | 0.4801 |
| 0.4993 | 20.83 | 14000 | 0.4794 |
| 0.496 | 22.32 | 15000 | 0.4814 |
| 0.4845 | 23.81 | 16000 | 0.4780 |
| 0.4977 | 25.3 | 17000 | 0.4775 |
| 0.4888 | 26.79 | 18000 | 0.4780 |
| 0.4773 | 28.27 | 19000 | 0.4792 |
| 0.4914 | 29.76 | 20000 | 0.4817 |
| 0.4864 | 31.25 | 21000 | 0.4775 |
| 0.486 | 32.74 | 22000 | 0.4773 |
| 0.4884 | 34.23 | 23000 | 0.4835 |
| 0.4856 | 35.71 | 24000 | 0.4788 |
| 0.4814 | 37.2 | 25000 | 0.4811 |
| 0.4831 | 38.69 | 26000 | 0.4814 |
| 0.4732 | 40.18 | 27000 | 0.4816 |
| 0.4846 | 41.67 | 28000 | 0.4812 |
| 0.4731 | 43.15 | 29000 | 0.4843 |
| 0.4772 | 44.64 | 30000 | 0.4830 |
| 0.4793 | 46.13 | 31000 | 0.4834 |
| 0.4736 | 47.62 | 32000 | 0.4834 |
| 0.4798 | 49.11 | 33000 | 0.4826 |
| 0.4744 | 50.6 | 34000 | 0.4841 |
| 0.4784 | 52.08 | 35000 | 0.4844 |
| 0.4743 | 53.57 | 36000 | 0.4851 |
| 0.4779 | 55.06 | 37000 | 0.4854 |
| 0.4719 | 56.55 | 38000 | 0.4854 |
| 0.4825 | 58.04 | 39000 | 0.4856 |
| 0.4805 | 59.52 | 40000 | 0.4853 |
### Framework versions
- Transformers 4.30.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.13.0
- Tokenizers 0.13.3
|
finding-fossils/metaextractor
|
finding-fossils
| 2023-06-28T20:05:50Z | 112 | 3 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"token-classification",
"Beta",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-06-15T16:24:37Z |
---
tags:
- Beta
license: mit
thumbnail: >-
https://huggingface.co/finding-fossils/metaextractor/resolve/main/ffossils-logo-text.png
widget:
- text: The core sample was aged at 12300 - 13500 BP and found at 210m a.s.l.
example_title: Age/Alti
- text: In Northern Canada, the BGC site core was primarily made up of Pinus pollen.
example_title: Taxa/Site/Region
metrics:
- precision
- recall
---
<img src="https://huggingface.co/finding-fossils/metaextractor/resolve/main/ffossils-logo-text.png" width="400">
# MetaExtractor
<!-- Provide a quick summary of what the model is/does. -->
This model extracts metadata from research articles related to Paleoecology.
The entities detected by this model are:
- **AGE**: when historical ages are mentioned such as 1234 AD or 4567 BP (before present)
- **TAXA**: plant or animal taxa names indicating what samples contained
- **GEOG**: geographic coordinates indicating where samples were excavated from, e.g. 12'34"N 34'23"W
- **SITE**: site names for where samples were excavated from
- **REGION**: more general regions to provide context for where sites are located
- **EMAIL**: researcher emails in the articles able to be used for follow-up contact
- **ALTI**: altitudes of sites from where samples were excavated, e.g. 123 m a.s.l (above sea level)
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** Ty Andrews, Jenit Jain, Shaun Hutchinson, Kelly Wu, and Simon Goring
- **Shared by:** Neotoma Paleoecology Database
- **Model type:** Token Classification
- **Language(s) (NLP):** English
- **License:** MIT
- **Finetuned from model:** roberta-base
### Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/NeotomaDB/MetaExtractor
- **Paper:** TBD
- **Demo:** TBD
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
This model can be used to extract entities from any text that is related, or tangential, to Paleoecology. Potential uses include identifying unique SITE names in research papers from other domains.
### Direct Use
This model is deployed on the xDD (formerly GeoDeepDive) servers, where it is fed new research articles relevant to Neotoma and returns the extracted data.
This approach could be adapted to other domains by using the training and development code found at [github.com/NeotomaDB/MetaExtractor](https://github.com/NeotomaDB/MetaExtractor) to run similar data extraction.
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
This model was trained entirely on English research articles and will likely not perform well on research in other languages. In addition, the training articles were chosen because they were already present in the Neotoma database, which introduces selection bias: they reflect what is already known to be relevant to Neotoma, so the model may not handle new, previously missed articles correctly.
## How to Get Started with the Model
Use the code below to get started with the model.
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
tokenizer = AutoTokenizer.from_pretrained("finding-fossils/metaextractor")
model = AutoModelForTokenClassification.from_pretrained("finding-fossils/metaextractor")
ner_pipe = pipeline('ner', model=model, tokenizer=tokenizer, aggregation_strategy="simple")
ner_pipe("In Northern Canada, the BGC site core was primarily made up of Pinus pollen.")
# Output
[
{
"entity_group": "REGION",
"score": 0.8088379502296448,
"word": " Northern Canada,",
"start": 3,
"end": 19
},
{
"entity_group": "SITE",
"score": 0.8307041525840759,
"word": " BGC",
"start": 24,
"end": 27
},
{
"entity_group": "TAXA",
"score": 0.9806344509124756,
"word": " Pinus",
"start": 63,
"end": 68
}
]
```
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
The model was trained on a set of 39 research articles deemed relevant to the Neotoma Database, all written in English. The entities were labelled by the project team, with early models used for pre-labelling to speed up the annotation process.
A 70/15/15 train/val/test split was used which had the following breakdown of words and entities.
| | Train | Validation | Test|
|---|:---:|:---:|:---:|
|Articles| 28 | 6 | 6|
| Words | 220857 | 37809 | 36098 |
|TAXA Entities | 3352 | 650 | 570 |
|SITE Entities | 1228 | 177 | 219 |
| REGION Entities | 2314 | 318 | 258 |
|GEOG Entities | 188 | 37 | 8 |
|AGE Entities | 919 | 206 | 153 |
|ALTI Entities | 99 | 24 | 14 |
| EMAIL Entities | 14 | 4 | 11 |
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
For full training details please see the GitHub repository and Wiki: [github.com/NeotomaDB/MetaExtractor](https://github.com/NeotomaDB/MetaExtractor)
## Results & Metrics
For full model results see the report here: [Final Project Report](https://github.com/NeotomaDB/MetaExtractor/blob/main/reports/final/finding-fossils-final.pdf)
|
ahishamm/vit-huge-isic-patch-14
|
ahishamm
| 2023-06-28T20:05:46Z | 198 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-06-28T19:59:05Z |
---
license: apache-2.0
tags:
- image-classification
- generated_from_trainer
metrics:
- accuracy
- recall
- f1
- precision
model-index:
- name: vit-huge-isic-patch-14
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-huge-isic-patch-14
This model is a fine-tuned version of [google/vit-huge-patch14-224-in21k](https://huggingface.co/google/vit-huge-patch14-224-in21k) on the ahishamm/isic_db dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6077
- Accuracy: 0.7917
- Recall: 0.7917
- F1: 0.7917
- Precision: 0.7917
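For reference, a minimal inference sketch (not part of the original card); the image path below is a placeholder.
```python
# Minimal sketch (not part of the original card); the image path is a placeholder.
from transformers import pipeline

classifier = pipeline("image-classification", model="ahishamm/vit-huge-isic-patch-14")
print(classifier("path/to/skin_lesion_image.jpg"))  # list of {label, score} dicts
```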
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
globuslabs/ScholarBERT_10_WB
|
globuslabs
| 2023-06-28T20:00:24Z | 111 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"science",
"multi-displinary",
"en",
"arxiv:2205.11342",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-05-22T22:30:01Z |
---
language: en
tags:
- science
- multi-displinary
license: apache-2.0
---
# ScholarBERT_10_WB Model
This is the **ScholarBERT_10_WB** variant of the ScholarBERT model family.
The model is pretrained on a large collection of scientific research articles (**22.1B tokens**).
The pretraining data also includes Wikipedia+BookCorpus, which were used to pretrain the [BERT-base](https://huggingface.co/bert-base-cased) and [BERT-large](https://huggingface.co/bert-large-cased) models.
This is a **cased** (case-sensitive) model. The tokenizer will not convert all inputs to lower-case by default.
The model is based on the same architecture as [BERT-large](https://huggingface.co/bert-large-cased) and has a total of 340M parameters.
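For illustration, a minimal fill-mask sketch (not part of the original card); the example sentence is made up and only shows the expected input format.
```python
# Minimal fill-mask sketch (not part of the original card); the sentence is illustrative only.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="globuslabs/ScholarBERT_10_WB")
print(unmasker("The enzyme catalyzes the [MASK] of ATP."))
```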
# Model Architecture
| Hyperparameter | Value |
|-----------------|:-------:|
| Layers | 24 |
| Hidden Size | 1024 |
| Attention Heads | 16 |
| Total Parameters | 340M |
# Training Dataset
The vocab and the model are pretrained on **10% of the PRD** scientific literature dataset and Wikipedia+BookCorpus.
The PRD dataset is provided by Public.Resource.Org, Inc. (“Public Resource”),
a nonprofit organization based in California. This dataset was constructed from a corpus
of journal article files, from which we successfully extracted text from 75,496,055 articles across 178,928 journals.
The articles span Arts & Humanities, Life Sciences & Biomedicine, Physical Sciences,
Social Sciences, and Technology. The distribution of articles is shown below.

# BibTeX entry and citation info
If using this model, please cite this paper:
```
@misc{hong2023diminishing,
title={The Diminishing Returns of Masked Language Models to Science},
author={Zhi Hong and Aswathy Ajith and Gregory Pauloski and Eamon Duede and Kyle Chard and Ian Foster},
year={2023},
eprint={2205.11342},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
globuslabs/ScholarBERT_100_WB
|
globuslabs
| 2023-06-28T20:00:01Z | 115 | 1 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"science",
"multi-displinary",
"en",
"arxiv:2205.11342",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-05-22T22:27:22Z |
---
language: en
tags:
- science
- multi-displinary
license: apache-2.0
---
# ScholarBERT_100_WB Model
This is the **ScholarBERT_100_WB** variant of the ScholarBERT model family.
The model is pretrained on a large collection of scientific research articles (**221B tokens**).
The pretraining data also includes Wikipedia+BookCorpus, which were used to pretrain the [BERT-base](https://huggingface.co/bert-base-cased) and [BERT-large](https://huggingface.co/bert-large-cased) models.
This is a **cased** (case-sensitive) model. The tokenizer will not convert all inputs to lower-case by default.
The model is based on the same architecture as [BERT-large](https://huggingface.co/bert-large-cased) and has a total of 340M parameters.
# Model Architecture
| Hyperparameter | Value |
|-----------------|:-------:|
| Layers | 24 |
| Hidden Size | 1024 |
| Attention Heads | 16 |
| Total Parameters | 340M |
# Training Dataset
The vocab and the model are pretrained on **100% of the PRD** scientific literature dataset and Wikipedia+BookCorpus.
The PRD dataset is provided by Public.Resource.Org, Inc. (“Public Resource”),
a nonprofit organization based in California. This dataset was constructed from a corpus
of journal article files, from which we successfully extracted text from 75,496,055 articles across 178,928 journals.
The articles span Arts & Humanities, Life Sciences & Biomedicine, Physical Sciences,
Social Sciences, and Technology. The distribution of articles is shown below.

# BibTeX entry and citation info
If using this model, please cite this paper:
```
@misc{hong2023diminishing,
title={The Diminishing Returns of Masked Language Models to Science},
author={Zhi Hong and Aswathy Ajith and Gregory Pauloski and Eamon Duede and Kyle Chard and Ian Foster},
year={2023},
eprint={2205.11342},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
globuslabs/ScholarBERT
|
globuslabs
| 2023-06-28T19:59:26Z | 115 | 9 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"bert",
"fill-mask",
"science",
"multi-displinary",
"en",
"arxiv:2205.11342",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-05-22T22:15:16Z |
---
language: en
tags:
- science
- multi-displinary
license: apache-2.0
---
# ScholarBERT_100 Model
This is the **ScholarBERT_100** variant of the ScholarBERT model family.
The model is pretrained on a large collection of scientific research articles (**221B tokens**).
This is a **cased** (case-sensitive) model. The tokenizer will not convert all inputs to lower-case by default.
The model is based on the same architecture as [BERT-large](https://huggingface.co/bert-large-cased) and has a total of 340M parameters.
# Model Architecture
| Hyperparameter | Value |
|-----------------|:-------:|
| Layers | 24 |
| Hidden Size | 1024 |
| Attention Heads | 16 |
| Total Parameters | 340M |
# Training Dataset
The vocab and the model are pretrained on **100% of the PRD** scientific literature dataset.
The PRD dataset is provided by Public.Resource.Org, Inc. (“Public Resource”),
a nonprofit organization based in California. This dataset was constructed from a corpus
of journal article files, from which we successfully extracted the text of 75,496,055 articles from 178,928 journals.
The articles span Arts & Humanities, Life Sciences & Biomedicine, Physical Sciences,
Social Sciences, and Technology. The distribution of articles is shown below.

# BibTeX entry and citation info
If using this model, please cite this paper:
```
@misc{hong2023diminishing,
title={The Diminishing Returns of Masked Language Models to Science},
author={Zhi Hong and Aswathy Ajith and Gregory Pauloski and Eamon Duede and Kyle Chard and Ian Foster},
year={2023},
eprint={2205.11342},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
rttl-ai/bert-base-uncased-yelp-polarity
|
rttl-ai
| 2023-06-28T19:58:10Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"en",
"dataset:yelp_polarity",
"arxiv:2004.10964",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-26T14:16:21Z |
---
license: apache-2.0
datasets:
- yelp_polarity
language:
- en
model-index:
- name: rttl-ai/bert-base-uncased-polarity
results:
- task:
type: text classification
name: binary_classification
dataset:
type: yelp_polarity
name: yelp_polarity
config: default
split: test
metrics:
- type: accuracy
value: 0.98
name: Accuracy
---
## Model Details
**Model Description:** This model is a fine-tune checkpoint of [bert-base-uncased](https://huggingface.co/bert-base-uncased), fine-tuned on yelp reviews.
- **Developed by:** rttl-ai
- **Model Type:** Text Classification
- **Language(s):** English
- **License:** Apache-2.0
- **Resources for more information:**
- The model was pre-trained with task-adaptive pre-training [TAPT](https://arxiv.org/pdf/2004.10964.pdf) with an increased masking rate, no corruption strategy, and using WWM, following [this paper](https://aclanthology.org/2023.eacl-main.217.pdf)
- pre-trained on yelp reviews
- fine-tuned on yelp reviews for binary class text classification
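A minimal inference sketch (hedged; the pipeline call and example review below are inferred from the card's text-classification tag rather than taken from an official example):
```python
from transformers import pipeline

# Binary sentiment (polarity) classification of Yelp-style review text.
classifier = pipeline("text-classification", model="rttl-ai/bert-base-uncased-yelp-polarity")
print(classifier("The food was great and the staff were friendly."))
```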
|
YakovElm/MariaDB_5_BERT_Over_Sampling
|
YakovElm
| 2023-06-28T19:55:08Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-28T19:54:31Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: MariaDB_5_BERT_Over_Sampling
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# MariaDB_5_BERT_Over_Sampling
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0696
- Train Accuracy: 0.9771
- Validation Loss: 0.4323
- Validation Accuracy: 0.9221
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.4972 | 0.7607 | 0.2633 | 0.9121 | 0 |
| 0.1913 | 0.9294 | 0.3861 | 0.9121 | 1 |
| 0.0696 | 0.9771 | 0.4323 | 0.9221 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
ahishamm/vit-large-isic-patch-16
|
ahishamm
| 2023-06-28T19:52:45Z | 191 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-06-28T19:47:01Z |
---
license: apache-2.0
tags:
- image-classification
- generated_from_trainer
metrics:
- accuracy
- recall
- f1
- precision
model-index:
- name: vit-large-isic-patch-16
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-large-isic-patch-16
This model is a fine-tuned version of [google/vit-large-patch16-224-in21k](https://huggingface.co/google/vit-large-patch16-224-in21k) on the ahishamm/isic_db dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7317
- Accuracy: 0.75
- Recall: 0.75
- F1: 0.75
- Precision: 0.75
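A hedged usage sketch (the pipeline call and the local image path are illustrative assumptions, not part of the original card):
```python
from transformers import pipeline

# Classify a dermoscopic image with the fine-tuned ViT checkpoint.
classifier = pipeline("image-classification", model="ahishamm/vit-large-isic-patch-16")
print(classifier("path/to/skin_lesion.jpg"))  # hypothetical local image file
```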
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
shirsh10mall/Helsinki-shirsh-finetuned-translation-english-to-hindi
|
shirsh10mall
| 2023-06-28T19:52:16Z | 63 | 0 |
transformers
|
[
"transformers",
"tf",
"marian",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-27T03:08:56Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: shirsh10mall/Helsinki-shirsh-finetuned-translation-english-to-hindi
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# shirsh10mall/Helsinki-shirsh-finetuned-translation-english-to-hindi
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-hi](https://huggingface.co/Helsinki-NLP/opus-mt-en-hi) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 4.5746
- Validation Loss: 4.4287
- Epoch: 9
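A hedged usage sketch for this TensorFlow checkpoint (the generation call and the example sentence are assumptions based on the card's tags, not an official example):
```python
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

model_id = "shirsh10mall/Helsinki-shirsh-finetuned-translation-english-to-hindi"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForSeq2SeqLM.from_pretrained(model_id)

# Translate a short English sentence to Hindi.
inputs = tokenizer("How are you today?", return_tensors="tf")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```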
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 10, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 4.5743 | 4.4287 | 0 |
| 4.5751 | 4.4287 | 1 |
| 4.5730 | 4.4287 | 2 |
| 4.5752 | 4.4287 | 3 |
| 4.5757 | 4.4287 | 4 |
| 4.5753 | 4.4287 | 5 |
| 4.5729 | 4.4287 | 6 |
| 4.5759 | 4.4287 | 7 |
| 4.5749 | 4.4287 | 8 |
| 4.5746 | 4.4287 | 9 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
rttl-ai/bert-base-uncased-yelp-reviews
|
rttl-ai
| 2023-06-28T19:48:37Z | 104 | 2 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"en",
"dataset:yelp_review_full",
"arxiv:2004.10964",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-26T14:07:48Z |
---
license: apache-2.0
datasets:
- yelp_review_full
language:
- en
model-index:
- name: rttl-ai/bert-base-uncased-reviews
results:
- task:
type: text classification
name: multi_class_classification
dataset:
type: yelp_review_full
name: yelp_review_full
config: default
split: test
metrics:
- type: accuracy
value: 0.7037
name: Accuracy
---
## Model Details
**Model Description:** This model is a fine-tune checkpoint of [bert-base-uncased](https://huggingface.co/bert-base-uncased), fine-tuned on yelp reviews.
- **Developed by:** rttl-ai
- **Model Type:** Text Classification
- **Language(s):** English
- **License:** Apache-2.0
- **Resources for more information:**
- The model was pre-trained with task-adaptive pre-training [TAPT](https://arxiv.org/pdf/2004.10964.pdf) with an increased masking rate, no corruption strategy, and using WWM, following [this paper](https://aclanthology.org/2023.eacl-main.217.pdf)
- pre-trained on yelp reviews
- fine-tuned on yelp reviews for multi-class text classification
|
ahishamm/vit-base-isic-patch-32
|
ahishamm
| 2023-06-28T19:46:46Z | 191 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-06-28T19:41:16Z |
---
license: apache-2.0
tags:
- image-classification
- generated_from_trainer
metrics:
- accuracy
- recall
- f1
- precision
model-index:
- name: vit-base-isic-patch-32
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-isic-patch-32
This model is a fine-tuned version of [google/vit-base-patch32-224-in21k](https://huggingface.co/google/vit-base-patch32-224-in21k) on the ahishamm/isic_db dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5791
- Accuracy: 0.7778
- Recall: 0.7778
- F1: 0.7778
- Precision: 0.7778
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Jumartineze/xlm-roberta-base-finetuned-MeIA-AnalisisDeSentimientos
|
Jumartineze
| 2023-06-28T19:45:04Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-28T17:15:13Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-MeIA-AnalisisDeSentimientos
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-MeIA-AnalisisDeSentimientos
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9935
- F1: 0.5903
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.005 | 1.0 | 766 | 0.9719 | 0.5623 |
| 0.8756 | 2.0 | 1532 | 0.9842 | 0.5655 |
| 0.753 | 3.0 | 2298 | 0.9935 | 0.5903 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
S3S3/ppo-Pyramids_Training1
|
S3S3
| 2023-06-28T19:42:01Z | 1 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2023-06-28T19:41:53Z |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: S3S3/ppo-Pyramids_Training1
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
amittian/setfit_ds_version_0_0_3
|
amittian
| 2023-06-28T19:41:36Z | 3 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2023-06-28T19:41:15Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# amittian/setfit_ds_version_0_0_3
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("amittian/setfit_ds_version_0_0_3")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
richardr1126/guanaco-13b-merged
|
richardr1126
| 2023-06-28T19:23:40Z | 10 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"guanaco",
"13b",
"dataset:timdettmers/openassistant-guanaco",
"arxiv:2305.14314",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-15T01:04:14Z |
---
datasets:
- timdettmers/openassistant-guanaco
tags:
- llama
- guanaco
- 13b
---
This model was created by merging the [timdettmers/guanaco-13b](https://huggingface.co/timdettmers/guanaco-13b) QLoRA adapter with the base LLaMA-13b model.
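As a hedged sketch of how such a merge can be reproduced with `peft` (the base-model ID below is an assumption; substitute your own copy of the LLaMA-13b weights):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "huggyllama/llama-13b"  # assumed base weights; replace with your local LLaMA-13b copy

base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained(base_id)

# Apply the Guanaco QLoRA adapter, then fold its weights into the base model.
model = PeftModel.from_pretrained(base, "timdettmers/guanaco-13b")
merged = model.merge_and_unload()

merged.save_pretrained("guanaco-13b-merged")
tokenizer.save_pretrained("guanaco-13b-merged")
```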
## Citation
```bibtex
@article{dettmers2023qlora,
title={QLoRA: Efficient Finetuning of Quantized LLMs},
author={Dettmers, Tim and Pagnoni, Artidoro and Holtzman, Ari and Zettlemoyer, Luke},
journal={arXiv preprint arXiv:2305.14314},
year={2023}
}
```
|
magnustragardh/Reinforce-Pixelcopter-PLE-v0
|
magnustragardh
| 2023-06-28T19:22:59Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-28T19:22:55Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 49.30 +/- 29.27
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
vlkn/falcon_finetuned3
|
vlkn
| 2023-06-28T19:15:44Z | 0 | 0 | null |
[
"tensorboard",
"generated_from_trainer",
"region:us"
] | null | 2023-06-28T18:43:45Z |
---
tags:
- generated_from_trainer
model-index:
- name: falcon_finetuned3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# falcon_finetuned3
This model is a fine-tuned version of [ybelkada/falcon-7b-sharded-bf16](https://huggingface.co/ybelkada/falcon-7b-sharded-bf16) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 60
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
DarkAirforce/q-FrozenLake-v1-4x4-noSlippery
|
DarkAirforce
| 2023-06-28T19:14:56Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-28T19:14:54Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="DarkAirforce/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
0x05a4/DeepRL-PPO-Huggy
|
0x05a4
| 2023-06-28T19:13:40Z | 17 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-06-28T19:13:36Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: 0x05a4/DeepRL-PPO-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
cleanrl/Pusher-v4-ddpg_continuous_action_jax-seed1
|
cleanrl
| 2023-06-28T19:12:37Z | 0 | 0 |
cleanrl
|
[
"cleanrl",
"tensorboard",
"Pusher-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-28T19:12:22Z |
---
tags:
- Pusher-v4
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: DDPG
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pusher-v4
type: Pusher-v4
metrics:
- type: mean_reward
value: -28.99 +/- 2.80
name: mean_reward
verified: false
---
# (CleanRL) **DDPG** Agent Playing **Pusher-v4**
This is a trained model of a DDPG agent playing Pusher-v4.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/ddpg_continuous_action_jax.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[ddpg_continuous_action_jax]"
python -m cleanrl_utils.enjoy --exp-name ddpg_continuous_action_jax --env-id Pusher-v4
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Pusher-v4-ddpg_continuous_action_jax-seed1/raw/main/ddpg_continuous_action_jax.py
curl -OL https://huggingface.co/cleanrl/Pusher-v4-ddpg_continuous_action_jax-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Pusher-v4-ddpg_continuous_action_jax-seed1/raw/main/poetry.lock
poetry install --all-extras
python ddpg_continuous_action_jax.py --track --capture-video --save-model --hf-entity cleanrl --upload-model --env-id Pusher-v4 --seed 1
```
# Hyperparameters
```python
{'batch_size': 256,
'buffer_size': 1000000,
'capture_video': True,
'env_id': 'Pusher-v4',
'exp_name': 'ddpg_continuous_action_jax',
'exploration_noise': 0.1,
'gamma': 0.99,
'hf_entity': 'cleanrl',
'learning_rate': 0.0003,
'learning_starts': 25000.0,
'noise_clip': 0.5,
'policy_frequency': 2,
'save_model': True,
'seed': 1,
'tau': 0.005,
'total_timesteps': 1000000,
'track': True,
'upload_model': True,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
YakovElm/Jira_20_BERT_Over_Sampling
|
YakovElm
| 2023-06-28T19:05:58Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-28T19:05:01Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Jira_20_BERT_Over_Sampling
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Jira_20_BERT_Over_Sampling
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0521
- Train Accuracy: 0.9856
- Validation Loss: 0.4925
- Validation Accuracy: 0.8612
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.4491 | 0.7989 | 0.6370 | 0.6656 | 0 |
| 0.1533 | 0.9514 | 0.3511 | 0.9211 | 1 |
| 0.0521 | 0.9856 | 0.4925 | 0.8612 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Hyeonseo/ko_roberta_small_model
|
Hyeonseo
| 2023-06-28T19:04:11Z | 121 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"token-classification",
"generated_from_trainer",
"dataset:klue",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-06-28T15:00:39Z |
---
tags:
- generated_from_trainer
datasets:
- klue
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: ko_roberta_small_model
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: klue
type: klue
config: ner
split: validation
args: ner
metrics:
- name: Precision
type: precision
value: 0.6827303934512807
- name: Recall
type: recall
value: 0.7253980500806622
- name: F1
type: f1
value: 0.703417786090801
- name: Accuracy
type: accuracy
value: 0.9403969397937687
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ko_roberta_small_model
This model is a fine-tuned version of [hyeonseo/ko_roberta_small_model](https://huggingface.co/hyeonseo/ko_roberta_small_model) on the klue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1864
- Precision: 0.6827
- Recall: 0.7254
- F1: 0.7034
- Accuracy: 0.9404
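A minimal inference sketch (hedged; the pipeline call and the Korean example sentence are assumptions based on the card's token-classification tag):
```python
from transformers import pipeline

# Korean NER with the fine-tuned checkpoint; "simple" aggregation groups word pieces into entities.
ner = pipeline(
    "token-classification",
    model="Hyeonseo/ko_roberta_small_model",
    aggregation_strategy="simple",
)
print(ner("이순신은 조선 중기의 무신이다."))
```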
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2341 | 1.0 | 1313 | 0.2069 | 0.6516 | 0.6999 | 0.6749 | 0.9336 |
| 0.16 | 2.0 | 2626 | 0.1864 | 0.6827 | 0.7254 | 0.7034 | 0.9404 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
S3S3/ppo-SnowballTarget
|
S3S3
| 2023-06-28T19:02:06Z | 9 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-06-28T18:39:43Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: S3S3/ppo-SnowballTarget
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Niyazi/distilbert-base-uncased-finetuned-emotion
|
Niyazi
| 2023-06-28T18:44:47Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-28T18:23:45Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.8975
- name: F1
type: f1
value: 0.8933293061586592
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3575
- Accuracy: 0.8975
- F1: 0.8933
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 125 | 0.5497 | 0.826 | 0.8003 |
| 0.7506 | 2.0 | 250 | 0.3575 | 0.8975 | 0.8933 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.10.1
- Tokenizers 0.13.2
|
anwarrehman/simply-law-classify-v1.1
|
anwarrehman
| 2023-06-28T18:41:16Z | 0 | 0 |
keras
|
[
"keras",
"tf-keras",
"region:us"
] | null | 2023-06-28T18:40:09Z |
---
library_name: keras
---
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
| Hyperparameters | Value |
| :-- | :-- |
| name | Adam |
| weight_decay | None |
| clipnorm | None |
| global_clipnorm | None |
| clipvalue | None |
| use_ema | False |
| ema_momentum | 0.99 |
| ema_overwrite_frequency | None |
| jit_compile | True |
| is_legacy_optimizer | False |
| learning_rate | 0.0010000000474974513 |
| beta_1 | 0.9 |
| beta_2 | 0.999 |
| epsilon | 1e-07 |
| amsgrad | False |
| training_precision | float32 |
## Model Plot
<details>
<summary>View Model Plot</summary>

</details>
|
hugo1499/bert-base-multilingual-cased-finetuned-MeIA-AnalisisDeSentimientos
|
hugo1499
| 2023-06-28T18:36:02Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-28T00:10:48Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert-base-multilingual-cased-finetuned-MeIA-AnalisisDeSentimientos
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-multilingual-cased-finetuned-MeIA-AnalisisDeSentimientos
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1233
- Accuracy: 0.5208
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.1308 | 1.0 | 383 | 1.1263 | 0.4914 |
| 0.9642 | 2.0 | 766 | 1.0872 | 0.5150 |
| 0.813 | 3.0 | 1149 | 1.1233 | 0.5208 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
pedrogengo/listwise_longt5_1k_msmarco
|
pedrogengo
| 2023-06-28T18:30:52Z | 166 | 0 |
transformers
|
[
"transformers",
"pytorch",
"longt5",
"text2text-generation",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-28T18:07:44Z |
---
language:
- en
library_name: transformers
---
|
PickleYard/Elysian-Fields
|
PickleYard
| 2023-06-28T18:22:47Z | 13 | 0 |
diffusers
|
[
"diffusers",
"artificial-intelligence",
"ai-art",
"anime-style",
"content-creation",
"animation",
"dreamshaper",
"text-to-image",
"en",
"dataset:unknown",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-28T03:54:18Z |
---
language: en
library: diffusers
tags:
- artificial-intelligence
- ai-art
- anime-style
- content-creation
- animation
- dreamshaper
- text-to-image
license: other
datasets:
- unknown
library_name: diffusers
---
## Model Description
"Elysian Fields" is a model based on the original Dreamshaper model. The primary objective of this model is to generate artificial intelligence (AI)-driven art and animations that mimic the style of traditional paintings. It excels in creating lifelike portraits, intricate backgrounds, and anime-style characters.
Originally designed for creating unique portraits that transcend the boundaries of computer graphics and heavily-filtered photographs, this model has evolved to become an integral part of content creation and independent animation production. Leveraging the power of LoRA networks, it also supports the generation of anime-style images.
This model is hosted on the Hugging Face Model Hub and can be used for a wide range of creative applications, from content creation to animation and beyond.
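A minimal generation sketch (hedged; the `diffusers` call, dtype, and prompt below are assumptions based on the card's StableDiffusionPipeline tag, not an official example):
```python
import torch
from diffusers import StableDiffusionPipeline

# Load the checkpoint; float16 assumes a CUDA GPU is available.
pipe = StableDiffusionPipeline.from_pretrained(
    "PickleYard/Elysian-Fields", torch_dtype=torch.float16
).to("cuda")

prompt = "a painterly anime-style portrait, intricate background, soft lighting"
image = pipe(prompt).images[0]
image.save("elysian_fields_sample.png")
```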
|
Cordelia/frozenLake
|
Cordelia
| 2023-06-28T18:19:02Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-28T18:18:59Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: frozenLake
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="Cordelia/frozenLake", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
cleanrl/InvertedPendulum-v4-ddpg_continuous_action_jax-seed1
|
cleanrl
| 2023-06-28T18:18:47Z | 0 | 0 |
cleanrl
|
[
"cleanrl",
"tensorboard",
"InvertedPendulum-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-28T18:18:33Z |
---
tags:
- InvertedPendulum-v4
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: DDPG
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: InvertedPendulum-v4
type: InvertedPendulum-v4
metrics:
- type: mean_reward
value: 988.90 +/- 33.30
name: mean_reward
verified: false
---
# (CleanRL) **DDPG** Agent Playing **InvertedPendulum-v4**
This is a trained model of a DDPG agent playing InvertedPendulum-v4.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/ddpg_continuous_action_jax.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[ddpg_continuous_action_jax]"
python -m cleanrl_utils.enjoy --exp-name ddpg_continuous_action_jax --env-id InvertedPendulum-v4
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/InvertedPendulum-v4-ddpg_continuous_action_jax-seed1/raw/main/ddpg_continuous_action_jax.py
curl -OL https://huggingface.co/cleanrl/InvertedPendulum-v4-ddpg_continuous_action_jax-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/InvertedPendulum-v4-ddpg_continuous_action_jax-seed1/raw/main/poetry.lock
poetry install --all-extras
python ddpg_continuous_action_jax.py --track --capture-video --save-model --hf-entity cleanrl --upload-model --env-id InvertedPendulum-v4 --seed 1
```
# Hyperparameters
```python
{'batch_size': 256,
'buffer_size': 1000000,
'capture_video': True,
'env_id': 'InvertedPendulum-v4',
'exp_name': 'ddpg_continuous_action_jax',
'exploration_noise': 0.1,
'gamma': 0.99,
'hf_entity': 'cleanrl',
'learning_rate': 0.0003,
'learning_starts': 25000.0,
'noise_clip': 0.5,
'policy_frequency': 2,
'save_model': True,
'seed': 1,
'tau': 0.005,
'total_timesteps': 1000000,
'track': True,
'upload_model': True,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
cleanrl/Walker2d-v4-ddpg_continuous_action-seed1
|
cleanrl
| 2023-06-28T18:16:57Z | 0 | 0 |
cleanrl
|
[
"cleanrl",
"tensorboard",
"Walker2d-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-28T18:16:49Z |
---
tags:
- Walker2d-v4
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: DDPG
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Walker2d-v4
type: Walker2d-v4
metrics:
- type: mean_reward
value: 1129.42 +/- 1251.17
name: mean_reward
verified: false
---
# (CleanRL) **DDPG** Agent Playing **Walker2d-v4**
This is a trained model of a DDPG agent playing Walker2d-v4.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/ddpg_continuous_action.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[ddpg_continuous_action]"
python -m cleanrl_utils.enjoy --exp-name ddpg_continuous_action --env-id Walker2d-v4
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Walker2d-v4-ddpg_continuous_action-seed1/raw/main/ddpg_continuous_action.py
curl -OL https://huggingface.co/cleanrl/Walker2d-v4-ddpg_continuous_action-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Walker2d-v4-ddpg_continuous_action-seed1/raw/main/poetry.lock
poetry install --all-extras
python ddpg_continuous_action.py --track --capture-video --save-model --hf-entity cleanrl --upload-model --env-id Walker2d-v4 --seed 1
```
# Hyperparameters
```python
{'batch_size': 256,
'buffer_size': 1000000,
'capture_video': True,
'cuda': True,
'env_id': 'Walker2d-v4',
'exp_name': 'ddpg_continuous_action',
'exploration_noise': 0.1,
'gamma': 0.99,
'hf_entity': 'cleanrl',
'learning_rate': 0.0003,
'learning_starts': 25000.0,
'noise_clip': 0.5,
'policy_frequency': 2,
'save_model': True,
'seed': 1,
'tau': 0.005,
'torch_deterministic': True,
'total_timesteps': 1000000,
'track': True,
'upload_model': True,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
nbiish/dqn-SpaceInvadersNoFrameskip-v4
|
nbiish
| 2023-06-28T18:06:47Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-28T18:06:09Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 556.00 +/- 103.89
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga nbiish -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga nbiish -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga nbiish
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
cleanrl/Hopper-v4-ddpg_continuous_action_jax-seed1
|
cleanrl
| 2023-06-28T18:01:28Z | 0 | 0 |
cleanrl
|
[
"cleanrl",
"tensorboard",
"Hopper-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-28T18:00:09Z |
---
tags:
- Hopper-v4
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: DDPG
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Hopper-v4
type: Hopper-v4
metrics:
- type: mean_reward
value: 1696.13 +/- 547.20
name: mean_reward
verified: false
---
# (CleanRL) **DDPG** Agent Playing **Hopper-v4**
This is a trained model of a DDPG agent playing Hopper-v4.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/ddpg_continuous_action_jax.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[ddpg_continuous_action_jax]"
python -m cleanrl_utils.enjoy --exp-name ddpg_continuous_action_jax --env-id Hopper-v4
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Hopper-v4-ddpg_continuous_action_jax-seed1/raw/main/ddpg_continuous_action_jax.py
curl -OL https://huggingface.co/cleanrl/Hopper-v4-ddpg_continuous_action_jax-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Hopper-v4-ddpg_continuous_action_jax-seed1/raw/main/poetry.lock
poetry install --all-extras
python ddpg_continuous_action_jax.py --track --capture-video --save-model --hf-entity cleanrl --upload-model --env-id Hopper-v4 --seed 1
```
# Hyperparameters
```python
{'batch_size': 256,
'buffer_size': 1000000,
'capture_video': True,
'env_id': 'Hopper-v4',
'exp_name': 'ddpg_continuous_action_jax',
'exploration_noise': 0.1,
'gamma': 0.99,
'hf_entity': 'cleanrl',
'learning_rate': 0.0003,
'learning_starts': 25000.0,
'noise_clip': 0.5,
'policy_frequency': 2,
'save_model': True,
'seed': 1,
'tau': 0.005,
'total_timesteps': 1000000,
'track': True,
'upload_model': True,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
Sympan/q-FrozenLake-v1-4x4-noSlippery
|
Sympan
| 2023-06-28T18:01:15Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-28T18:01:12Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="Sympan/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
andrewrreed/falcon-7b-guanaco-qlora-arr
|
andrewrreed
| 2023-06-28T18:00:02Z | 0 | 0 | null |
[
"tensorboard",
"generated_from_trainer",
"license:apache-2.0",
"region:us"
] | null | 2023-06-28T15:27:57Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: falcon-7b-guanaco-qlora-arr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# falcon-7b-guanaco-qlora-arr
This model is a fine-tuned version of [tiiuae/falcon-7b](https://huggingface.co/tiiuae/falcon-7b) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 500
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|