modelId (string, 5–139 chars) | author (string, 2–42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 – 2025-06-27 18:27:39) | downloads (int64, 0–223M) | likes (int64, 0–11.7k) | library_name (string, 500 classes) | tags (sequence, 1–4.05k items) | pipeline_tag (string, 54 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 – 2025-06-27 18:23:41) | card (string, 11 – 1.01M chars) |
---|---|---|---|---|---|---|---|---|---|
SRDdev/QABERT-small | SRDdev | 2023-06-21T15:00:00Z | 70 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"distilbert",
"question-answering",
"en",
"dataset:squad_v2",
"endpoints_compatible",
"region:us"
] | question-answering | 2023-02-08T12:40:31Z | ---
datasets:
- squad_v2
language:
- en
metrics:
- accuracy
library_name: transformers
pipeline_tag: question-answering
tags:
- question-answering
---
# QA-BERT
QA-BERT is an extractive question-answering model. It is a lightweight, DistilBERT-based alternative to larger question-answering models.
## Dataset
The Stanford Question Answering Dataset (SQuAD) is a widely used benchmark dataset for the task of machine reading comprehension. It consists of over 100,000 question-answer pairs based on a set of Wikipedia articles. The goal is to train models that can answer questions based on their understanding of the given text passages. SQuAD has played a significant role in advancing the state-of-the-art in this field and remains a popular choice for researchers and practitioners alike.
Due to GPU limitations, this version is trained on `30k samples` from the Stanford Question Answering Dataset.
<details>
<summary><i>Structure of the Data Dictionary</i></summary>
<!--All you need is a blank line-->

    {
      "data": [
        {
          "title": "Article Title",
          "paragraphs": [
            {
              "context": "The context text of the paragraph",
              "qas": [
                {
                  "question": "The question asked about the context",
                  "id": "A unique identifier for the question",
                  "answers": [
                    {
                      "text": "The answer to the question",
                      "answer_start": "The starting index of the answer in the context"
                    }
                  ]
                }
              ]
            }
          ]
        }
      ],
      "version": "The version of the SQuAD dataset"
    }
</details>
## Model
BERT (Bidirectional Encoder Representations from Transformers) is a pre-trained transformer-based model for natural language processing tasks such as question answering. BERT is fine-tuned for question answering by adding a linear layer on top of the pre-trained BERT representations to predict the start and end of the answer in the input context. BERT has achieved state-of-the-art results on multiple benchmark datasets, including the Stanford Question Answering Dataset (SQuAD). The fine-tuning process allows BERT to effectively capture the relationships between questions and answers and generate accurate answers.
<img src="https://imgs.search.brave.com/F8m-nwp6EIG5vq--OmJLrCDpIkuX6tEQ_kyFKQjlUTs/rs:fit:1200:1200:1/g:ce/aHR0cHM6Ly9ibG9n/LmdyaWRkeW5hbWlj/cy5jb20vY29udGVu/dC9pbWFnZXMvMjAy/MC8xMC9TbGljZS0x/OC5wbmc">
For more details, read [Understanding QABERT](https://github.com/SRDdev/AnswerMind)
## Inference
_Load model_
```python
from transformers import AutoTokenizer, AutoModelForQuestionAnswering
QAtokenizer = AutoTokenizer.from_pretrained("SRDdev/QABERT-small")
QAmodel = AutoModelForQuestionAnswering.from_pretrained("SRDdev/QABERT-small")
```
_context_
```text
Extractive Question Answering is the task of extracting an answer from a text given a question. An example of a
question-answering dataset is the SQuAD dataset, which is entirely based on that task. If you would like to fine-tune
a model on a SQuAD task, you may leverage the examples/pytorch/question-answering/run_squad.py script.
```
_Build Pipeline_
```python
from transformers import pipeline
# assign the passage shown above to `context` before running this
ask = pipeline("question-answering", model=QAmodel, tokenizer=QAtokenizer)
result = ask(question="What is a good example of a question answering dataset?", context=context)
print(f"Answer: '{result['answer']}'")
```
## Contributing
Pull requests are welcome. For major changes, please open an issue first
to discuss what you would like to change.
Please make sure to update tests as appropriate.
## Citations
```
@misc{QA-BERT-small,
author = {Shreyas Dixit},
year = {2023},
url = {https://huggingface.co/SRDdev/QA-BERT-small}
}
```
|
34ronker/Dreamshaper-V5-baked-vae | 34ronker | 2023-06-21T14:45:25Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-04-25T08:58:52Z | ---
license: creativeml-openrail-m
---
|
swl-models/KawAICE-ICE | swl-models | 2023-06-21T14:39:33Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-21T14:21:51Z | ---
license: creativeml-openrail-m
---
|
LOGQS/a2c-PandaReachDense-v2 | LOGQS | 2023-06-21T14:37:20Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-06-21T14:34:32Z | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -1.73 +/- 0.82
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
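The block above is the auto-generated placeholder; a minimal sketch of what loading and evaluating this checkpoint could look like (assuming `stable-baselines3`, `huggingface-sb3`, and `panda-gym` are installed; the checkpoint filename follows the usual `<algo>-<env>.zip` convention and is an assumption):
```python
import gym
import panda_gym  # noqa: F401 -- registers the PandaReachDense environments

from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub (filename is an assumption; check the repo's file list)
checkpoint = load_from_hub(
    repo_id="LOGQS/a2c-PandaReachDense-v2",
    filename="a2c-PandaReachDense-v2.zip",
)
model = A2C.load(checkpoint)

# Roll out one episode (classic gym API, as used by panda-gym 2.x)
env = gym.make("PandaReachDense-v2")
obs = env.reset()
done = False
while not done:
    action, _states = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
```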
|
joncam14/LunarLander-v2 | joncam14 | 2023-06-21T14:34:49Z | 0 | 0 | null | [
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] | reinforcement-learning | 2023-06-21T14:34:43Z | ---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -161.75 +/- 91.71
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo',
 'seed': 1,
 'torch_deterministic': True,
 'cuda': True,
 'track': False,
 'wandb_project_name': 'cleanRL',
 'wandb_entity': None,
 'capture_video': False,
 'env_id': 'LunarLander-v2',
 'total_timesteps': 50000,
 'learning_rate': 0.00025,
 'num_envs': 4,
 'num_steps': 128,
 'anneal_lr': True,
 'gae': True,
 'gamma': 0.99,
 'gae_lambda': 0.95,
 'num_minibatches': 4,
 'update_epochs': 4,
 'norm_adv': True,
 'clip_coef': 0.2,
 'clip_vloss': True,
 'ent_coef': 0.01,
 'vf_coef': 0.5,
 'max_grad_norm': 0.5,
 'target_kl': None,
 'repo_id': 'joncam14/LunarLander-v2',
 'batch_size': 512,
 'minibatch_size': 128}
```
|
EleutherAI/gpt-j-6b | EleutherAI | 2023-06-21T14:33:36Z | 256,005 | 1,477 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"gptj",
"text-generation",
"causal-lm",
"en",
"dataset:EleutherAI/pile",
"arxiv:2104.09864",
"arxiv:2101.00027",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:04Z | ---
language:
- en
tags:
- pytorch
- causal-lm
license: apache-2.0
datasets:
- EleutherAI/pile
---
# GPT-J 6B
## Model Description
GPT-J 6B is a transformer model trained using Ben Wang's [Mesh Transformer JAX](https://github.com/kingoflolz/mesh-transformer-jax/). "GPT-J" refers to the class of model, while "6B" represents the number of trainable parameters.
<figure>
| Hyperparameter | Value |
|----------------------|------------|
| \\(n_{parameters}\\) | 6053381344 |
| \\(n_{layers}\\) | 28* |
| \\(d_{model}\\) | 4096 |
| \\(d_{ff}\\) | 16384 |
| \\(n_{heads}\\) | 16 |
| \\(d_{head}\\) | 256 |
| \\(n_{ctx}\\) | 2048 |
| \\(n_{vocab}\\) | 50257/50400† (same tokenizer as GPT-2/3) |
| Positional Encoding | [Rotary Position Embedding (RoPE)](https://arxiv.org/abs/2104.09864) |
| RoPE Dimensions | [64](https://github.com/kingoflolz/mesh-transformer-jax/blob/f2aa66e0925de6593dcbb70e72399b97b4130482/mesh_transformer/layers.py#L223) |
<figcaption><p><strong>*</strong> Each layer consists of one feedforward block and one self attention block.</p>
<p><strong>†</strong> Although the embedding matrix has a size of 50400, only 50257 entries are used by the GPT-2 tokenizer.</p></figcaption></figure>
The model consists of 28 layers with a model dimension of 4096, and a feedforward dimension of 16384. The model
dimension is split into 16 heads, each with a dimension of 256. Rotary Position Embedding (RoPE) is applied to 64
dimensions of each head. The model is trained with a tokenization vocabulary of 50257, using the same set of BPEs as
GPT-2/GPT-3.
## Intended Use and Limitations
GPT-J learns an inner representation of the English language that can be used to
extract features useful for downstream tasks. However, the model is best at what it was
pretrained for, which is generating text from a prompt.
### Out-of-scope use
GPT-J-6B is **not** intended for deployment without fine-tuning, supervision,
and/or moderation. It is not in itself a product and cannot be used for
human-facing interactions. For example, the model may generate harmful or
offensive text. Please evaluate the risks associated with your particular use case.
GPT-J-6B was trained on an English-language only dataset, and is thus **not**
suitable for translation or generating text in other languages.
GPT-J-6B has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means GPT-J-6B will **not**
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “follow” human instructions.
### Limitations and Biases
The core functionality of GPT-J is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work. When prompting GPT-J it is important to remember that the statistically most likely next token is often not the token that produces the most "accurate" text. Never depend upon GPT-J to produce factually accurate output.
GPT-J was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending upon use case GPT-J may produce socially unacceptable text. See [Sections 5 and 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a more detailed analysis of the biases in the Pile.
As with all language models, it is hard to predict in advance how GPT-J will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results.
### How to use
This model can be easily loaded using the `AutoModelForCausalLM` functionality:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6B")
```
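Generation then follows the standard `generate` API; a minimal sketch continuing from the snippet above (the prompt and sampling settings are illustrative, not recommendations from the authors):
```python
prompt = "In a shocking finding, scientists discovered a herd of unicorns living in"
inputs = tokenizer(prompt, return_tensors="pt")

# Sample a continuation; temperature and length are illustrative defaults
output_ids = model.generate(
    **inputs,
    do_sample=True,
    temperature=0.9,
    max_new_tokens=100,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```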
## Training data
GPT-J 6B was trained on [the Pile](https://pile.eleuther.ai), a large-scale curated dataset created by [EleutherAI](https://www.eleuther.ai).
## Training procedure
This model was trained for 402 billion tokens over 383,500 steps on TPU v3-256 pod. It was trained as an autoregressive language model, using cross-entropy loss to maximize the likelihood of predicting the next token correctly.
## Evaluation results
<figure>
| Model | Public | Training FLOPs | LAMBADA PPL ↓ | LAMBADA Acc ↑ | Winogrande ↑ | Hellaswag ↑ | PIQA ↑ | Dataset Size (GB) |
|--------------------------|-------------|----------------|--- |--- |--- |--- |--- |-------------------|
| Random Chance | ✓ | 0 | ~a lot | ~0% | 50% | 25% | 25% | 0 |
| GPT-3 Ada‡ | ✗ | ----- | 9.95 | 51.6% | 52.9% | 43.4% | 70.5% | ----- |
| GPT-2 1.5B | ✓ | ----- | 10.63 | 51.21% | 59.4% | 50.9% | 70.8% | 40 |
| GPT-Neo 1.3B‡ | ✓ | 3.0e21 | 7.50 | 57.2% | 55.0% | 48.9% | 71.1% | 825 |
| Megatron-2.5B* | ✗ | 2.4e21 | ----- | 61.7% | ----- | ----- | ----- | 174 |
| GPT-Neo 2.7B‡ | ✓ | 6.8e21 | 5.63 | 62.2% | 56.5% | 55.8% | 73.0% | 825 |
| GPT-3 1.3B*‡ | ✗ | 2.4e21 | 5.44 | 63.6% | 58.7% | 54.7% | 75.1% | ~800 |
| GPT-3 Babbage‡ | ✗ | ----- | 5.58 | 62.4% | 59.0% | 54.5% | 75.5% | ----- |
| Megatron-8.3B* | ✗ | 7.8e21 | ----- | 66.5% | ----- | ----- | ----- | 174 |
| GPT-3 2.7B*‡ | ✗ | 4.8e21 | 4.60 | 67.1% | 62.3% | 62.8% | 75.6% | ~800 |
| Megatron-11B† | ✓ | 1.0e22 | ----- | ----- | ----- | ----- | ----- | 161 |
| **GPT-J 6B‡** | **✓** | **1.5e22** | **3.99** | **69.7%** | **65.3%** | **66.1%** | **76.5%** | **825** |
| GPT-3 6.7B*‡ | ✗ | 1.2e22 | 4.00 | 70.3% | 64.5% | 67.4% | 78.0% | ~800 |
| GPT-3 Curie‡ | ✗ | ----- | 4.00 | 69.3% | 65.6% | 68.5% | 77.9% | ----- |
| GPT-3 13B*‡ | ✗ | 2.3e22 | 3.56 | 72.5% | 67.9% | 70.9% | 78.5% | ~800 |
| GPT-3 175B*‡ | ✗ | 3.1e23 | 3.00 | 76.2% | 70.2% | 78.9% | 81.0% | ~800 |
| GPT-3 Davinci‡ | ✗ | ----- | 3.0 | 75% | 72% | 78% | 80% | ----- |
<figcaption><p>Models roughly sorted by performance, or by FLOPs if not available.</p>
<p><strong>*</strong> Evaluation numbers reported by their respective authors. All other numbers are provided by
running <a href="https://github.com/EleutherAI/lm-evaluation-harness/"><code>lm-evaluation-harness</code></a> either with released
weights or with API access. Due to subtle implementation differences as well as different zero shot task framing, these
might not be directly comparable. See <a href="https://blog.eleuther.ai/gpt3-model-sizes/">this blog post</a> for more
details.</p>
<p><strong>†</strong> Megatron-11B provides no comparable metrics, and several implementations using the released weights do not
reproduce the generation quality and evaluations. (see <a href="https://github.com/huggingface/transformers/pull/10301">1</a>
<a href="https://github.com/pytorch/fairseq/issues/2358">2</a> <a href="https://github.com/pytorch/fairseq/issues/2719">3</a>)
Thus, evaluation was not attempted.</p>
<p><strong>‡</strong> These models have been trained with data which contains possible test set contamination. The OpenAI GPT-3 models
failed to deduplicate training data for certain test sets, while the GPT-Neo models, as well as this one, are
trained on the Pile, which has not been deduplicated against any test sets.</p></figcaption></figure>
## Citation and Related Information
### BibTeX entry
To cite this model:
```bibtex
@misc{gpt-j,
author = {Wang, Ben and Komatsuzaki, Aran},
title = {{GPT-J-6B: A 6 Billion Parameter Autoregressive Language Model}},
howpublished = {\url{https://github.com/kingoflolz/mesh-transformer-jax}},
year = 2021,
month = May
}
```
To cite the codebase that trained this model:
```bibtex
@misc{mesh-transformer-jax,
author = {Wang, Ben},
title = {{Mesh-Transformer-JAX: Model-Parallel Implementation of Transformer Language Model with JAX}},
howpublished = {\url{https://github.com/kingoflolz/mesh-transformer-jax}},
year = 2021,
month = May
}
```
If you use this model, we would love to hear about it! Reach out on [GitHub](https://github.com/kingoflolz/mesh-transformer-jax), Discord, or shoot Ben an email.
## Acknowledgements
This project would not have been possible without compute generously provided by Google through the
[TPU Research Cloud](https://sites.research.google/trc/), as well as the Cloud TPU team for providing early access to the [Cloud TPU VM](https://cloud.google.com/blog/products/compute/introducing-cloud-tpu-vms) Alpha.
Thanks to everyone who has helped out in one way or another (listed alphabetically):
- [James Bradbury](https://twitter.com/jekbradbury) for valuable assistance with debugging JAX issues.
- [Stella Biderman](https://www.stellabiderman.com), [Eric Hallahan](https://twitter.com/erichallahan), [Kurumuz](https://github.com/kurumuz/), and [Finetune](https://github.com/finetuneanon/) for converting the model to be compatible with the `transformers` package.
- [Leo Gao](https://twitter.com/nabla_theta) for running zero shot evaluations for the baseline models for the table.
- [Laurence Golding](https://github.com/researcher2/) for adding some features to the web demo.
- [Aran Komatsuzaki](https://twitter.com/arankomatsuzaki) for advice with experiment design and writing the blog posts.
- [Janko Prester](https://github.com/jprester/) for creating the web demo frontend. |
orkg/orkgnlp-research-fields-classification | orkg | 2023-06-21T14:20:50Z | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | 2023-06-07T13:53:34Z | ---
license: mit
---
This repository includes the files required to run the `Research Fields Classification` ORKG-NLP service.
Please check [this article](https://orkg-nlp-pypi.readthedocs.io/en/latest/services/services.html) for more details about the service.
This model is converted into a [TorchScript](https://pytorch.org/docs/stable/jit.html) (ScriptModule) using [torch.jit.trace](https://pytorch.org/tutorials/beginner/Intro_to_TorchScript_tutorial.html).
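A traced ScriptModule is normally loaded back with `torch.jit.load`; a minimal sketch (the filename and input shape are assumptions, not taken from this repository):
```python
import torch

# Load the traced module (filename is an assumption; check the repository's file list)
scripted_model = torch.jit.load("research_fields_classifier.pt")
scripted_model.eval()

# A traced module is called like any nn.Module; the expected input format
# depends on how the ORKG-NLP service prepares its features
with torch.no_grad():
    example_input = torch.zeros(1, 512, dtype=torch.long)  # placeholder input; shape is an assumption
    logits = scripted_model(example_input)
```
|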
ziruo/vit-base-patch16-224-in21k-finetuned-lora-food101 | ziruo | 2023-06-21T14:08:36Z | 2 | 0 | peft | [
"peft",
"pytorch",
"tensorboard",
"vit",
"region:us"
] | null | 2023-06-21T14:02:20Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
Voryoji/Shuichi | Voryoji | 2023-06-21T14:05:05Z | 2 | 0 | fairseq | [
"fairseq",
"deberta-v2",
"art",
"audio-to-audio",
"jv",
"ja",
"zh",
"dataset:QingyiSi/Alpaca-CoT",
"doi:10.57967/hf/0791",
"license:creativeml-openrail-m",
"region:us"
] | audio-to-audio | 2023-06-21T12:30:26Z | ---
license: creativeml-openrail-m
datasets:
- QingyiSi/Alpaca-CoT
language:
- jv
- ja
- zh
metrics:
- bleurt
library_name: fairseq
pipeline_tag: audio-to-audio
tags:
- art
--- |
arcane-impact/gpt_bigcode-santacoder-ggml | arcane-impact | 2023-06-21T13:50:23Z | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | 2023-06-21T13:40:47Z | ---
license: openrail
---
GGML format of [bigcode/gpt_bigcode-santacoder](https://huggingface.co/bigcode/gpt_bigcode-santacoder)
|
reinforceYrWay/ppo-Huggy | reinforceYrWay | 2023-06-21T13:49:25Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2023-06-21T13:49:21Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: reinforceYrWay/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Bodolaz/Unit-6.2 | Bodolaz | 2023-06-21T13:46:24Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-06-21T13:43:43Z | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -3.56 +/- 1.43
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
Malaika/poca-SoccerTwos | Malaika | 2023-06-21T13:42:12Z | 0 | 0 | ml-agents | [
"ml-agents",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] | reinforcement-learning | 2023-06-21T13:42:12Z | ---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: Malaika/poca-SoccerTwos
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Doyle26/finetuning-emotion-model | Doyle26 | 2023-06-21T13:37:39Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-06-21T07:05:15Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuning-emotion-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-emotion-model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2281
- Accuracy: 0.9225
- F1: 0.9226
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 250 | 0.3451 | 0.894 | 0.8888 |
| 0.5542 | 2.0 | 500 | 0.2281 | 0.9225 | 0.9226 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
Nonenakahaku/Vilosarah2301 | Nonenakahaku | 2023-06-21T13:33:23Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-21T13:28:43Z | ---
license: creativeml-openrail-m
---
|
chocky18/alpaca7B-lora | chocky18 | 2023-06-21T13:24:29Z | 1 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-06-21T13:24:28Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
chjooon/distilgpt2_fiscal | chjooon | 2023-06-21T13:21:55Z | 159 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-06-21T13:09:39Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilgpt2_fiscal
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2_fiscal
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1431
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.9252 | 1.0 | 816 | 1.3275 |
| 1.3825 | 2.0 | 1632 | 1.1800 |
| 1.3156 | 3.0 | 2448 | 1.1431 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
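As a `distilgpt2`-based generator, the model can presumably be used with the standard `pipeline` API; a minimal sketch (the prompt is illustrative):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="chjooon/distilgpt2_fiscal")

# The prompt below is only an illustration of how the pipeline is called
print(generator("The fiscal outlook for next year", max_new_tokens=40)[0]["generated_text"])
```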
|
rodrigoclira/q-Taxi-v3 | rodrigoclira | 2023-06-21T13:20:15Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-06-21T13:20:13Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.38 +/- 2.45
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="rodrigoclira/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
rodrigoclira/q-FrozenLake-v1-4x4-noSlippery | rodrigoclira | 2023-06-21T13:13:56Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-06-21T13:13:54Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="rodrigoclira/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
maksim2000153/bert-base-uncased-finetuned-ChemProt-corpus-re | maksim2000153 | 2023-06-21T13:09:45Z | 12 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"en",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-04-30T10:38:45Z | ---
language:
- en
widget:
- text: "The functional protein contains 1160 << amino acids >> with a large central [[ mucin domain ]], three consensus sites for glycosaminoglycan attachment, two epidermal growth factor-like repeats, a putative hyaluronan-binding motif, and a potential transmembrane domain near the C-terminal."
example_title: "PART-OF"
- text: "<< Theophylline >> exposure resulted in a sustained increase in mRNA expression for CysS and [[ PDE3A ]], but PDE4D gene expression was unchanged."
example_title: "REG-POS"
- text: "These results suggested that << DMBT >> could inhibit invasion and angiogenesis by downregulation of [[ VEGF ]]and MMP-9, resulting from the inhibition of Akt pathway."
example_title: "REG-NEG"
- text: "Colonic cyclooxygenase-2 and << interkeukin-1beta >> mRNA and spinal c-FOS mRNA expression were significantly down-regulated by ATB-429, but not by [[ mesalamine ]]."
example_title: "NOT"
---
# Model Card
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the [ChemProt corpus: BioCreative VI](https://biocreative.bioinformatics.udel.edu/news/corpora/chemprot-corpus-biocreative-vi/) dataset.
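A minimal sketch of querying the model with one of the widget examples above (the label set comes from the ChemProt relation types and is not documented here, so treat the output labels accordingly):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="maksim2000153/bert-base-uncased-finetuned-ChemProt-corpus-re",
)

# One of the widget examples from the front matter; entities are marked with << >> and [[ ]]
text = ("<< Theophylline >> exposure resulted in a sustained increase in mRNA expression "
        "for CysS and [[ PDE3A ]], but PDE4D gene expression was unchanged.")
print(classifier(text))  # e.g. [{'label': ..., 'score': ...}]
```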
<!--
## Model Details
### Model Description
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
### Direct Use
[More Information Needed]
### Downstream Use [optional]
[More Information Needed]
### Out-of-Scope Use
[More Information Needed]
## Bias, Risks, and Limitations
[More Information Needed]
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
[More Information Needed]
### Training Procedure
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed]
#### Speeds, Sizes, Times [optional]
[More Information Needed]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
[More Information Needed]
#### Factors
[More Information Needed]
#### Metrics
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
[More Information Needed]
## Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
-->
|
nikolajking/Low_resource_translator_en_vt | nikolajking | 2023-06-21T13:06:10Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"m2m_100",
"text2text-generation",
"translation",
"en",
"vi",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-06-20T07:19:19Z | ---
language:
- en
- vi
metrics:
- bleu
pipeline_tag: translation
--- |
AravindVadlapudi02/swinv2-base-patch4-window12to16-192to256-22kto1k-ft-finetuned-swinv2-final-augmented-73 | AravindVadlapudi02 | 2023-06-21T13:05:23Z | 1 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-06-21T13:05:22Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
hassansoliman/falcon-40b-qlora-utterance-adaptations_v5 | hassansoliman | 2023-06-21T13:03:56Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-06-21T13:03:09Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
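For reference, an equivalent config would typically be expressed with `transformers`' `BitsAndBytesConfig`; a minimal sketch (the base model name is inferred from the repository name and is an assumption):
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Mirrors the bitsandbytes settings listed above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Base model name is an assumption inferred from the repository name
base_model = AutoModelForCausalLM.from_pretrained(
    "tiiuae/falcon-40b",
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,
)
```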
### Framework versions
- PEFT 0.4.0.dev0
|
mediattack/elbio | mediattack | 2023-06-21T12:54:36Z | 0 | 0 | null | [
"arxiv:1910.09700",
"license:openrail",
"region:us"
] | null | 2023-06-21T12:50:00Z | ---
license: openrail
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Shubham09/falcon-7b | Shubham09 | 2023-06-21T12:48:54Z | 5 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-06-21T12:48:08Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.4.0.dev0
|
keremnazliel/distilbert_squad_for_musique | keremnazliel | 2023-06-21T12:47:25Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2023-06-20T08:29:07Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert_squad_for_musique
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert_squad_for_musique
This model is a fine-tuned version of [distilbert-base-cased-distilled-squad](https://huggingface.co/distilbert-base-cased-distilled-squad) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.5452
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
pollner/distilhubert-finetuned-ravdess | pollner | 2023-06-21T12:36:48Z | 68 | 2 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:xbgoose/ravdess",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | audio-classification | 2023-06-21T10:33:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- xbgoose/ravdess
metrics:
- accuracy
model-index:
- name: distilhubert-finetuned-ravdess
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilhubert-finetuned-ravdess
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the RAVDESS dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2810
- Accuracy: 0.9236
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.7599 | 1.0 | 162 | 1.7350 | 0.3264 |
| 1.3271 | 2.0 | 324 | 1.1987 | 0.5972 |
| 0.8845 | 3.0 | 486 | 0.8824 | 0.7639 |
| 0.6083 | 4.0 | 648 | 0.5919 | 0.8403 |
| 0.4952 | 5.0 | 810 | 0.4469 | 0.8611 |
| 0.1386 | 6.0 | 972 | 0.3736 | 0.8681 |
| 0.1028 | 7.0 | 1134 | 0.3645 | 0.8819 |
| 0.053 | 8.0 | 1296 | 0.3079 | 0.9028 |
| 0.0149 | 9.0 | 1458 | 0.2723 | 0.9236 |
| 0.0154 | 10.0 | 1620 | 0.2810 | 0.9236 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.0
- Datasets 2.13.0
- Tokenizers 0.13.3
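A minimal sketch of running inference with the `audio-classification` pipeline (the file path is a placeholder):
```python
from transformers import pipeline

classifier = pipeline("audio-classification", model="pollner/distilhubert-finetuned-ravdess")

# Path is a placeholder; any mono speech clip that librosa/soundfile can read should work
predictions = classifier("path/to/speech_sample.wav")
print(predictions)  # list of {'label': ..., 'score': ...} dicts
```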
|
Boristss/dqn-SpaceInvadersNoFrameskip-v4 | Boristss | 2023-06-21T12:36:24Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-06-21T12:35:47Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 613.00 +/- 161.71
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Boristss -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Boristss -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Boristss
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 10000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
ISYS/Zachetnoe_Zadanie | ISYS | 2023-06-21T12:29:35Z | 0 | 0 | keras | [
"keras",
"region:us"
] | null | 2023-06-20T17:07:34Z | ---
library_name: keras
---
# NEURAL NETWORK CARD
# 1. Task description
Given the MNIST dataset, determine from an input image the remainder when the depicted digit is divided by 3.
# 2. Layer-by-layer architecture of the network

# 3. Total number of trainable parameters

# 4. Optimization algorithm and loss function
1. **Loss function**: **categorical cross-entropy**, used to improve the quality of the neural network
2. **Optimization algorithm**: **Adam** from Keras

# 5. Sizes of the training, validation, and test datasets:
1. Training set size: **60,000** images of 28×28
2. Validation set size: 10% of the training set = **6,000** images of 28×28
3. Test set size: **10,000** images of 28×28
# 6. Model training results
 |
soyisauce/distilbert-base-uncased-finetuned-amazon_review | soyisauce | 2023-06-21T12:28:22Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"fill-mask",
"generated_from_trainer",
"dataset:amazon_reviews_multi",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2023-06-21T12:18:18Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
model-index:
- name: distilbert-base-uncased-finetuned-amazon_review
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-amazon_review
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the amazon_reviews_multi dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
kchen621/Reinforce-CartPole-v1 | kchen621 | 2023-06-21T12:28:16Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2023-06-21T12:28:07Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
soyisauce/output | soyisauce | 2023-06-21T12:26:29Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"fill-mask",
"generated_from_trainer",
"dataset:amazon_reviews_multi",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2023-06-21T12:25:27Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
model-index:
- name: output
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# output
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the amazon_reviews_multi dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
optimum/roberta-large-finetuned-clinc | optimum | 2023-06-21T12:23:17Z | 13 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:clinc_oos",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-04-11T09:53:27Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: roberta-large-finetuned-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9729032258064516
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-large-finetuned-clinc
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1574
- Accuracy: 0.9729
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 239 | 0.8113 | 0.9035 |
| No log | 2.0 | 478 | 0.2364 | 0.9548 |
| 1.7328 | 3.0 | 717 | 0.1760 | 0.9684 |
| 1.7328 | 4.0 | 956 | 0.1565 | 0.9723 |
| 0.0976 | 5.0 | 1195 | 0.1574 | 0.9729 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0
- Datasets 1.16.1
- Tokenizers 0.10.3
|
antuuuu/seonn | antuuuu | 2023-06-21T12:21:11Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-21T12:20:22Z | ---
license: creativeml-openrail-m
---
|
NickyNicky/rwkv-4-169m-pile-oasst1-QLora-v3 | NickyNicky | 2023-06-21T12:17:19Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-06-21T12:17:14Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: True
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0.dev0
|
joncam14/a2c-PandaReachDense-v2 | joncam14 | 2023-06-21T12:10:55Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-06-21T12:08:10Z | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -2.03 +/- 0.64
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
TheBloke/PMC_LLAMA-7B-GGML | TheBloke | 2023-06-21T12:08:38Z | 0 | 10 | null | [
"medical",
"dataset:allenai/s2orc",
"license:other",
"region:us"
] | null | 2023-06-03T00:46:05Z | ---
inference: false
license: other
tags:
- medical
datasets:
- allenai/s2orc
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/Jq4vkcDakD">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# Chaoyi Wi's PMC_LLAMA 7B GGML
These files are GGML format model files for [Chaoyi Wi's PMC_LLAMA 7B](https://huggingface.co/chaoyi-wu/PMC_LLAMA_7B).
GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp) and libraries and UIs which support this format, such as:
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
* [KoboldCpp](https://github.com/LostRuins/koboldcpp)
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui)
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python)
* [ctransformers](https://github.com/marella/ctransformers)
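For example, a minimal sketch using `llama-cpp-python` (the chosen file, prompt, and settings are illustrative; any of the quantisation files listed under Provided files below should work):
```python
from llama_cpp import Llama

# Any of the GGML files from the table below can be used; q4_K_M is a common balance of size and quality
llm = Llama(model_path="PMC_LLAMA-7B.ggmlv3.q4_K_M.bin", n_ctx=2048)

output = llm(
    "Question: What are the common symptoms of iron-deficiency anemia?\nAnswer:",
    max_tokens=128,
    stop=["Question:"],
)
print(output["choices"][0]["text"])
```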
## Repositories available
* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/PMC_LLAMA-7B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/PMC_LLAMA-7B-GGML)
* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/chaoyi-wu/PMC_LLAMA_7B)
<!-- compatibility_ggml start -->
## Compatibility
### Original llama.cpp quant methods: `q4_0, q4_1, q5_0, q5_1, q8_0`
I have quantized these 'original' quantisation methods using an older version of llama.cpp so that they remain compatible with llama.cpp as of May 19th, commit `2d5db48`.
These are guaranteed to be compatible with any UIs, tools and libraries released since late May.
### New k-quant methods: `q2_K, q3_K_S, q3_K_M, q3_K_L, q4_K_S, q4_K_M, q5_K_S, q6_K`
These new quantisation methods are compatible with llama.cpp as of June 6th, commit `2d43387`.
They are now also compatible with recent releases of text-generation-webui, KoboldCpp, llama-cpp-python and ctransformers. Other tools and libraries may or may not be compatible - check their documentation if in doubt.
## Explanation of the new k-quant methods
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
* GGML_TYPE_Q8_K - "type-0" 8-bit quantization. Only used for quantizing intermediate results. The difference to the existing Q8_0 is that the block size is 256. All 2-6 bit dot products are implemented for this quantization type.
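For intuition, the bits-per-weight figures quoted above can be reproduced with some quick arithmetic over the super-block layout. The sketch below is a back-of-the-envelope approximation (the exact on-disk structs in llama.cpp pack a few extra bytes differently), not authoritative documentation of the format:
```python
# Back-of-the-envelope bpw for a 256-weight super-block:
# quant bits per weight, plus per-block scale/min bits, plus fp16 super-block scale(s).
def approx_bpw(quant_bits, n_blocks, scale_bits_per_block, n_superblock_fp16):
    total_bits = 256 * quant_bits + n_blocks * scale_bits_per_block + n_superblock_fp16 * 16
    return total_bits / 256

print(approx_bpw(2, 16, 4 + 4, 1))  # Q2_K -> 2.5625
print(approx_bpw(3, 16, 6, 1))      # Q3_K -> 3.4375
print(approx_bpw(4, 8, 6 + 6, 2))   # Q4_K -> 4.5
print(approx_bpw(5, 8, 6 + 6, 2))   # Q5_K -> 5.5
print(approx_bpw(6, 16, 8, 1))      # Q6_K -> 6.5625
```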
Refer to the Provided Files table below to see what files use which methods, and how.
<!-- compatibility_ggml end -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| PMC_LLAMA-7B.ggmlv3.q2_K.bin | q2_K | 2 | 2.87 GB | 5.37 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
| PMC_LLAMA-7B.ggmlv3.q3_K_L.bin | q3_K_L | 3 | 3.60 GB | 6.10 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| PMC_LLAMA-7B.ggmlv3.q3_K_M.bin | q3_K_M | 3 | 3.28 GB | 5.78 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| PMC_LLAMA-7B.ggmlv3.q3_K_S.bin | q3_K_S | 3 | 2.95 GB | 5.45 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors |
| PMC_LLAMA-7B.ggmlv3.q4_0.bin | q4_0 | 4 | 3.79 GB | 6.29 GB | Original llama.cpp quant method, 4-bit. |
| PMC_LLAMA-7B.ggmlv3.q4_1.bin | q4_1 | 4 | 4.21 GB | 6.71 GB | Original llama.cpp quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. |
| PMC_LLAMA-7B.ggmlv3.q4_K_M.bin | q4_K_M | 4 | 4.08 GB | 6.58 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K |
| PMC_LLAMA-7B.ggmlv3.q4_K_S.bin | q4_K_S | 4 | 3.83 GB | 6.33 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors |
| PMC_LLAMA-7B.ggmlv3.q5_0.bin | q5_0 | 5 | 4.63 GB | 7.13 GB | Original llama.cpp quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
| PMC_LLAMA-7B.ggmlv3.q5_1.bin | q5_1 | 5 | 5.06 GB | 7.56 GB | Original llama.cpp quant method, 5-bit. Even higher accuracy, resource usage and slower inference. |
| PMC_LLAMA-7B.ggmlv3.q5_K_M.bin | q5_K_M | 5 | 4.78 GB | 7.28 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K |
| PMC_LLAMA-7B.ggmlv3.q5_K_S.bin | q5_K_S | 5 | 4.65 GB | 7.15 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors |
| PMC_LLAMA-7B.ggmlv3.q6_K.bin | q6_K | 6 | 5.53 GB | 8.03 GB | New k-quant method. Uses GGML_TYPE_Q8_K - 6-bit quantization - for all tensors |
| PMC_LLAMA-7B.ggmlv3.q8_0.bin | q8_0 | 8 | 7.16 GB | 9.66 GB | Original llama.cpp quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
## How to run in `llama.cpp`
I use the following command line; adjust for your tastes and needs:
```
./main -t 10 -ngl 32 -m PMC_LLAMA-7B.ggmlv3.q5_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### Instruction: Write a story about llamas\n### Response:"
```
If you're able to use full GPU offloading, you should use `-t 1` to get best performance.
If not able to fully offload to GPU, you should use more cores. Change `-t 10` to the number of physical CPU cores you have, or a lower number depending on what gives best performance.
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/Jq4vkcDakD)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov.
**Patreon special mentions**: Mano Prime, Fen Risland, Derek Yates, Preetika Verma, webtim, Sean Connelly, Alps Aficionado, Karl Bernard, Junyu Yang, Nathan LeClaire, Chris McCloskey, Lone Striker, Asp the Wyvern, Eugene Pentland, Imad Khwaja, trip7s trip, WelcomeToTheClub, John Detwiler, Artur Olbinski, Khalefa Al-Ahmad, Trenton Dambrowitz, Talal Aujan, Kevin Schuppel, Luke Pendergrass, Pyrater, Joseph William Delisle, terasurfer , vamX, Gabriel Puliatti, David Flickinger, Jonathan Leane, Iucharbius , Luke, Deep Realms, Cory Kujawski, ya boyyy, Illia Dulskyi, senxiiz, Johann-Peter Hartmann, John Villwock, K, Ghost , Spiking Neurons AB, Nikolai Manek, Rainer Wilmers, Pierre Kircher, biorpg, Space Cruiser, Ai Maven, subjectnull, Willem Michiel, Ajan Kanaga, Kalila, chris gileta, Oscar Rangel.
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card: Chaoyi Wu's PMC_LLAMA 7B
This repo contains PMC_LLaMA_7B, which is LLaMA-7B fine-tuned on the PMC papers in the S2ORC dataset.
The model was trained with the following hyperparameters:
* Epochs: 5
* Batch size: 128
* Cutoff length: 512
* Learning rate: 2e-5
In each epoch we sample 512 tokens per paper for training.
The model can be loaded as follows:
```python
import transformers
import torch
tokenizer = transformers.LlamaTokenizer.from_pretrained('chaoyi-wu/PMC_LLAMA_7B')
model = transformers.LlamaForCausalLM.from_pretrained('chaoyi-wu/PMC_LLAMA_7B')
sentence = 'Hello, doctor'
batch = tokenizer(
sentence,
return_tensors="pt",
add_special_tokens=False
)
with torch.no_grad():
generated = model.generate(inputs = batch["input_ids"], max_length=200, do_sample=True, top_k=50)
print('model predict: ',tokenizer.decode(generated[0]))
```
|
LearnItAnyway/llava-polyglot-ko-1.3b-hf | LearnItAnyway | 2023-06-21T11:51:23Z | 14 | 4 | transformers | [
"transformers",
"pytorch",
"llava",
"text-generation",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-06-18T06:32:22Z | ---
license: other
---
# Model Card for llava-polyglot-ko-1.3b-hf
## Model Description
`llava-polyglot-ko-1.3b-hf` is a model based on polyglot-ko-1.3b.
We use LLaVA for visual question answering.
You can see [`demo.py`](https://github.com/LearnItAnyway/llava_gpt_neox/blob/main/demo.py) and [`llava_gpt_neox.py`](https://github.com/LearnItAnyway/llava_gpt_neox/blob/main/llava_gpt_neox.py).
Currently, the model has been trained on a small visual question answering dataset (approx. 10k samples) with the 1.3b (small) model.
## TODO
- Multi-turn chat based on the image
- Larger LLM
- More pretraining for the vision-text adapter
## References
- [`LLaVA`](https://github.com/haotian-liu/LLaVA)
- [`polyglot korean`](https://huggingface.co/EleutherAI/polyglot-ko-1.3b)
|
agostini/ppo-LunarLander-v2 | agostini | 2023-06-21T11:49:01Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-06-19T21:29:46Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 292.63 +/- 21.83
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
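As a minimal loading sketch (not code provided by the author), the checkpoint can be pulled from the Hub with `huggingface_sb3` and evaluated with a recent stable-baselines3 (2.x) and gymnasium; the checkpoint filename below is an assumption based on the usual naming convention, so check the repository's file list:
```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Download the checkpoint; the filename is assumed, check the repo's file list.
checkpoint = load_from_hub(
    repo_id="agostini/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)

# LunarLander-v2 needs the Box2D extra: pip install "gymnasium[box2d]"
env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```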
|
Rajaganapathy/my_awesome_qa_model | Rajaganapathy | 2023-06-21T11:37:52Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2023-06-21T11:28:01Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: my_awesome_qa_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_qa_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6019
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 250 | 2.1796 |
| 2.6572 | 2.0 | 500 | 1.6555 |
| 2.6572 | 3.0 | 750 | 1.6019 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
chayan75/QA-bloom-560m | chayan75 | 2023-06-21T11:37:05Z | 12 | 0 | transformers | [
"transformers",
"pytorch",
"bloom",
"text-generation",
"generated_from_trainer",
"license:bigscience-bloom-rail-1.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-06-21T10:49:28Z | ---
license: bigscience-bloom-rail-1.0
tags:
- generated_from_trainer
model-index:
- name: QA-bloom-560m
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# QA-bloom-560m
This model is a fine-tuned version of [bigscience/bloomz-560m](https://huggingface.co/bigscience/bloomz-560m) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
kkkkkgsp/my_awesome_fashion_model | kkkkkgsp | 2023-06-21T11:21:54Z | 42 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:fashion_mnist",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-06-19T11:26:01Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- fashion_mnist
metrics:
- accuracy
model-index:
- name: my_awesome_fashion_model
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: fashion_mnist
type: fashion_mnist
config: fashion_mnist
split: train[:5000]
args: fashion_mnist
metrics:
- name: Accuracy
type: accuracy
value: 0.785
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_fashion_model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the fashion_mnist dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8596
- Accuracy: 0.785
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.3171 | 0.99 | 62 | 1.1960 | 0.755 |
| 0.947 | 2.0 | 125 | 0.8729 | 0.801 |
| 0.8346 | 2.98 | 186 | 0.8596 | 0.785 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
hassansoliman/falcon-40b-qlora-utterance-adaptations_v4 | hassansoliman | 2023-06-21T11:19:56Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-06-21T11:19:08Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
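Expressed as code, the 4-bit settings above correspond roughly to the following `transformers` `BitsAndBytesConfig`; this is an illustrative sketch, not the exact training script used for this adapter:
```python
import torch
from transformers import BitsAndBytesConfig

# Sketch of the 4-bit (QLoRA-style) quantization config listed above; it is
# typically passed to from_pretrained(..., quantization_config=bnb_config).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```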
### Framework versions
- PEFT 0.4.0.dev0
|
furusu/wd-v1-4-tagger-pytorch | furusu | 2023-06-21T11:18:50Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2023-06-21T09:07:44Z | ---
license: apache-2.0
---
wd-v1-4-tagger for PyTorch.
# models
* wd-v1-4-vit-tagger-v2.ckpt:
converted from https://huggingface.co/SmilingWolf/wd-v1-4-vit-tagger-v2 |
reinforceYrWay/ppo-LunarLander-v2 | reinforceYrWay | 2023-06-21T11:17:01Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-06-20T13:33:10Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 251.50 +/- 21.64
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
joncam14/a2c-AntBulletEnv-v0 | joncam14 | 2023-06-21T11:16:04Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-06-21T11:14:50Z | ---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1721.08 +/- 51.98
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
Mu-mj/my_awesome_wnut_model | Mu-mj | 2023-06-21T11:05:54Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"dataset:wnut_17",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2023-06-21T11:01:21Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wnut_17
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: my_awesome_wnut_model
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: wnut_17
type: wnut_17
config: wnut_17
split: test
args: wnut_17
metrics:
- name: Precision
type: precision
value: 0.5353846153846153
- name: Recall
type: recall
value: 0.32252085264133457
- name: F1
type: f1
value: 0.4025448235974552
- name: Accuracy
type: accuracy
value: 0.9434397845325125
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_wnut_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the wnut_17 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2749
- Precision: 0.5354
- Recall: 0.3225
- F1: 0.4025
- Accuracy: 0.9434
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 213 | 0.2606 | 0.5326 | 0.2956 | 0.3802 | 0.9413 |
| No log | 2.0 | 426 | 0.2749 | 0.5354 | 0.3225 | 0.4025 | 0.9434 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
tianzhihui-isc/vit-base-patch16-224-in21k-finetuned-pokemon-classification | tianzhihui-isc | 2023-06-21T10:38:17Z | 9 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:pokemon-classification",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-06-21T10:19:55Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- pokemon-classification
metrics:
- accuracy
model-index:
- name: vit-base-patch16-224-in21k-finetuned-pokemon-classification
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: pokemon-classification
type: pokemon-classification
config: full
split: train
args: full
metrics:
- name: Accuracy
type: accuracy
value: 0.8028747433264887
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-in21k-finetuned-pokemon-classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the pokemon-classification dataset.
It achieves the following results on the evaluation set:
- Loss: 3.8734
- Accuracy: 0.8029
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 4.9109 | 0.99 | 34 | 4.8193 | 0.1499 |
| 4.6433 | 1.99 | 68 | 4.5361 | 0.4661 |
| 4.2878 | 2.98 | 102 | 4.2546 | 0.6838 |
| 4.0792 | 4.0 | 137 | 4.0353 | 0.7680 |
| 3.8934 | 4.99 | 171 | 3.9120 | 0.7926 |
| 3.8294 | 5.96 | 204 | 3.8734 | 0.8029 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
Varun1808/my_awesome_qa_model | Varun1808 | 2023-06-21T10:36:44Z | 62 | 0 | transformers | [
"transformers",
"tf",
"distilbert",
"question-answering",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2023-06-21T08:09:46Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Varun1808/my_awesome_qa_model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Varun1808/my_awesome_qa_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 3.0493
- Validation Loss: 3.1955
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 200, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 4.4426 | 3.5441 | 0 |
| 3.2901 | 3.1955 | 1 |
| 3.0493 | 3.1955 | 2 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.13.0
- Tokenizers 0.13.3
|
Curiolearner/q-Taxi-v3 | Curiolearner | 2023-06-21T10:34:44Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-06-21T09:06:41Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="Curiolearner/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
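Note that `load_from_hub` above is not a standard library function; a minimal helper along these lines (an assumption, not the author's exact code) makes the snippet runnable:
```python
import pickle
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id, filename):
    """Download a pickled Q-learning model dict from the Hugging Face Hub."""
    local_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(local_path, "rb") as f:
        return pickle.load(f)
```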
|
jiaheillu/dongwu | jiaheillu | 2023-06-21T10:27:46Z | 0 | 0 | null | [
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2023-06-21T10:26:54Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### dongwu Dreambooth model trained by jiaheillu
Sample pictures of this concept:





|
abhishek-ignite/gpt-neo-1.3b-ignite-2 | abhishek-ignite | 2023-06-21T10:25:04Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-06-21T10:24:59Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0.dev0
|
ckpt/controlnet_qrcode-control_v1p_sd15 | ckpt | 2023-06-21T10:21:43Z | 13 | 1 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"controlnet",
"image-to-image",
"en",
"license:openrail++",
"region:us"
] | image-to-image | 2023-06-21T10:19:31Z | ---
tags:
- stable-diffusion
- controlnet
- image-to-image
license: openrail++
language:
- en
library_name: diffusers
pipeline_tag: image-to-image
---
# QR Code Conditioned ControlNet Models for Stable Diffusion 1.5

## Model Description
This repo holds the safetensors & diffusers versions of the QR code conditioned ControlNet for Stable Diffusion v1.5.
The Stable Diffusion 2.1 version is marginally more effective, as it was developed to address my specific needs. However, this 1.5 version model was also trained on the same dataset for those who are using the older version.
## How to use with Diffusers
```bash
pip -q install diffusers transformers accelerate torch xformers
```
```python
import torch
from PIL import Image
from diffusers import StableDiffusionControlNetImg2ImgPipeline, ControlNetModel, DDIMScheduler
from diffusers.utils import load_image
controlnet = ControlNetModel.from_pretrained("DionTimmer/controlnet_qrcode-control_v1p_sd15",
torch_dtype=torch.float16)
pipe = StableDiffusionControlNetImg2ImgPipeline.from_pretrained(
"runwayml/stable-diffusion-v1-5",
controlnet=controlnet,
safety_checker=None,
torch_dtype=torch.float16
)
pipe.enable_xformers_memory_efficient_attention()
pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)
pipe.enable_model_cpu_offload()
def resize_for_condition_image(input_image: Image, resolution: int):
input_image = input_image.convert("RGB")
W, H = input_image.size
k = float(resolution) / min(H, W)
H *= k
W *= k
H = int(round(H / 64.0)) * 64
W = int(round(W / 64.0)) * 64
img = input_image.resize((W, H), resample=Image.LANCZOS)
return img
# play with guidance_scale, controlnet_conditioning_scale and strength to make a valid QR Code Image
# qr code image
source_image = load_image("https://s3.amazonaws.com/moonup/production/uploads/6064e095abd8d3692e3e2ed6/A_RqHaAM6YHBodPLwqtjn.png")
# initial image, anything
init_image = load_image("https://s3.amazonaws.com/moonup/production/uploads/noauth/KfMBABpOwIuNolv1pe3qX.jpeg")
condition_image = resize_for_condition_image(source_image, 768)
init_image = resize_for_condition_image(init_image, 768)
generator = torch.manual_seed(123121231)
image = pipe(prompt="a bilboard in NYC with a qrcode",
negative_prompt="ugly, disfigured, low quality, blurry, nsfw",
image=init_image,
control_image=condition_image,
width=768,
height=768,
guidance_scale=20,
controlnet_conditioning_scale=1.5,
generator=generator,
strength=0.9,
num_inference_steps=150,
)
image.images[0]
```
## Performance and Limitations
These models perform quite well in most cases, but please note that they are not 100% accurate. In some instances, the QR code shape might not come through as expected. You can increase the ControlNet weight to emphasize the QR code shape. However, be cautious as this might negatively impact the style of your output. **To optimize for scanning, please generate your QR codes with correction mode 'H' (30%).**
To balance between style and shape, a gentle fine-tuning of the control weight might be required based on the individual input and the desired output, as well as the correct prompt. Some prompts do not work until you increase the weight by a lot. The process of finding the right balance between these factors is part art and part science. For the best results, it is recommended to generate your artwork at a resolution of 768. This allows for a higher level of detail in the final product, enhancing the quality and effectiveness of the QR code-based artwork.
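For the conditioning input itself, any QR generator that supports error-correction level H will do; for example, a quick sketch with the Python `qrcode` package (illustrative only, not part of this repo):
```python
import qrcode

# Error-correction level H (~30%) leaves the most headroom for stylization.
qr = qrcode.QRCode(
    error_correction=qrcode.constants.ERROR_CORRECT_H,
    box_size=16,
    border=4,
)
qr.add_data("https://huggingface.co")
qr.make(fit=True)
qr.make_image(fill_color="black", back_color="white").save("qr_condition.png")
```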
## Installation
The simplest way to use this is to place the .safetensors model and its .yaml config file in the folder where your other controlnet models are installed, which varies per application.
For usage in auto1111 they can be placed in the webui/models/ControlNet folder. They can be loaded using the controlnet webui extension which you can install through the extensions tab in the webui (https://github.com/Mikubill/sd-webui-controlnet). Make sure to enable your controlnet unit and set your input image as the QR code. Set the model to either the SD2.1 or 1.5 version depending on your base stable diffusion model, or it will error. No pre-processor is needed, though you can use the invert pre-processor for a different variation of results. 768 is the preferred resolution for generation since it allows for more detail.
Make sure to look up additional info on how to use controlnet if you get stuck; once you have the webui up and running, it's really easy to install the controlnet extension as well. |
readerbench/RoBERT-small | readerbench | 2023-06-21T10:10:03Z | 116 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z | Model card for RoBERT-small
---
language:
- ro
---
# RoBERT-small
## Pretrained BERT model for Romanian
Pretrained model on the Romanian language using masked language modeling (MLM) and next sentence prediction (NSP) objectives.
It was introduced in this [paper](https://www.aclweb.org/anthology/2020.coling-main.581/). Three BERT models were released: **RoBERT-small**, RoBERT-base and RoBERT-large, all versions uncased.
| Model | Weights | L | H | A | MLM accuracy | NSP accuracy |
|----------------|:---------:|:------:|:------:|:------:|:--------------:|:--------------:|
| *RoBERT-small* | *19M* | *12* | *256* | *8* | *0.5363* | *0.9687* |
| RoBERT-base | 114M | 12 | 768 | 12 | 0.6511 | 0.9802 |
| RoBERT-large | 341M | 24 | 1024 | 24 | 0.6929 | 0.9843 |
All models are available:
* [RoBERT-small](https://huggingface.co/readerbench/RoBERT-small)
* [RoBERT-base](https://huggingface.co/readerbench/RoBERT-base)
* [RoBERT-large](https://huggingface.co/readerbench/RoBERT-large)
#### How to use
```python
# tensorflow
from transformers import AutoModel, AutoTokenizer, TFAutoModel
tokenizer = AutoTokenizer.from_pretrained("readerbench/RoBERT-small")
model = TFAutoModel.from_pretrained("readerbench/RoBERT-small")
inputs = tokenizer("exemplu de propoziție", return_tensors="tf")
outputs = model(inputs)
# pytorch
from transformers import AutoModel, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("readerbench/RoBERT-small")
model = AutoModel.from_pretrained("readerbench/RoBERT-small")
inputs = tokenizer("exemplu de propoziție", return_tensors="pt")
outputs = model(**inputs)
```
## Training data
The model is trained on the following compilation of corpora. Note that we present the statistics after the cleaning process.
| Corpus | Words | Sentences | Size (GB)|
|-----------|:---------:|:---------:|:--------:|
| Oscar | 1.78B | 87M | 10.8 |
| RoTex | 240M | 14M | 1.5 |
| RoWiki | 50M | 2M | 0.3 |
| **Total** | **2.07B** | **103M** | **12.6** |
## Downstream performance
### Sentiment analysis
We report Macro-averaged F1 score (in %)
| Model | Dev | Test |
|------------------|:--------:|:--------:|
| multilingual-BERT| 68.96 | 69.57 |
| XLM-R-base | 71.26 | 71.71 |
| BERT-base-ro | 70.49 | 71.02 |
| *RoBERT-small* | *66.32* | *66.37* |
| RoBERT-base | 70.89 | 71.61 |
| RoBERT-large | **72.48**| **72.11**|
### Moldavian vs. Romanian Dialect and Cross-dialect Topic identification
We report results on [VarDial 2019](https://sites.google.com/view/vardial2019/campaign) Moldavian vs. Romanian Cross-dialect Topic identification Challenge, as Macro-averaged F1 score (in %).
| Model | Dialect Classification | MD to RO | RO to MD |
|-------------------|:----------------------:|:--------:|:--------:|
| 2-CNN + SVM | 93.40 | 65.09 | 75.21 |
| Char+Word SVM | 96.20 | 69.08 | 81.93 |
| BiGRU | 93.30 | **70.10**| 80.30 |
| multilingual-BERT | 95.34 | 68.76 | 78.24 |
| XLM-R-base | 96.28 | 69.93 | 82.28 |
| BERT-base-ro | 96.20 | 69.93 | 78.79 |
| *RoBERT-small* | *95.67* | *69.01* | *80.40* |
| RoBERT-base | 97.39 | 68.30 | 81.09 |
| RoBERT-large | **97.78** | 69.91 | **83.65**|
### Diacritics Restoration
Challenge can be found [here](https://diacritics-challenge.speed.pub.ro/). We report results on the official test set, as accuracies in %.
| Model | word level | char level |
|-----------------------------|:----------:|:----------:|
| BiLSTM | 99.42 | - |
| CharCNN | 98.40 | 99.65 |
| CharCNN + multilingual-BERT | 99.72 | 99.94 |
| CharCNN + XLM-R-base | 99.76 | **99.95** |
| CharCNN + BERT-base-ro | **99.79** | **99.95** |
| *CharCNN + RoBERT-small* | *99.73* | *99.94* |
| CharCNN + RoBERT-base | 99.78 | **99.95** |
| CharCNN + RoBERT-large | 99.76 | **99.95** |
### BibTeX entry and citation info
```bibtex
@inproceedings{masala2020robert,
title={RoBERT--A Romanian BERT Model},
author={Masala, Mihai and Ruseti, Stefan and Dascalu, Mihai},
booktitle={Proceedings of the 28th International Conference on Computational Linguistics},
pages={6626--6637},
year={2020}
}
```
|
readerbench/RoBERT-base | readerbench | 2023-06-21T10:08:18Z | 5,254 | 10 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z | Model card for RoBERT-base
---
language:
- ro
---
# RoBERT-base
## Pretrained BERT model for Romanian
Pretrained model on Romanian language using a masked language modeling (MLM) and next sentence prediction (NSP) objective.
It was introduced in this [paper](https://www.aclweb.org/anthology/2020.coling-main.581/). Three BERT models were released: RoBERT-small, **RoBERT-base** and RoBERT-large, all versions uncased.
| Model | Weights | L | H | A | MLM accuracy | NSP accuracy |
|----------------|:---------:|:------:|:------:|:------:|:--------------:|:--------------:|
| RoBERT-small | 19M | 12 | 256 | 8 | 0.5363 | 0.9687 |
| *RoBERT-base* | *114M* | *12* | *768* | *12* | *0.6511* | *0.9802* |
| RoBERT-large | 341M | 24 | 1024 | 24 | 0.6929 | 0.9843 |
All models are available:
* [RoBERT-small](https://huggingface.co/readerbench/RoBERT-small)
* [RoBERT-base](https://huggingface.co/readerbench/RoBERT-base)
* [RoBERT-large](https://huggingface.co/readerbench/RoBERT-large)
#### How to use
```python
# tensorflow
from transformers import AutoModel, AutoTokenizer, TFAutoModel
tokenizer = AutoTokenizer.from_pretrained("readerbench/RoBERT-base")
model = TFAutoModel.from_pretrained("readerbench/RoBERT-base")
inputs = tokenizer("exemplu de propoziție", return_tensors="tf")
outputs = model(inputs)
# pytorch
from transformers import AutoModel, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("readerbench/RoBERT-base")
model = AutoModel.from_pretrained("readerbench/RoBERT-base")
inputs = tokenizer("exemplu de propoziție", return_tensors="pt")
outputs = model(**inputs)
```
## Training data
The model is trained on the following compilation of corpora. Note that we present the statistics after the cleaning process.
| Corpus | Words | Sentences | Size (GB)|
|-----------|:---------:|:---------:|:--------:|
| Oscar | 1.78B | 87M | 10.8 |
| RoTex | 240M | 14M | 1.5 |
| RoWiki | 50M | 2M | 0.3 |
| **Total** | **2.07B** | **103M** | **12.6** |
## Downstream performance
### Sentiment analysis
We report Macro-averaged F1 score (in %)
| Model | Dev | Test |
|------------------|:--------:|:--------:|
| multilingual-BERT| 68.96 | 69.57 |
| XLM-R-base | 71.26 | 71.71 |
| BERT-base-ro | 70.49 | 71.02 |
| RoBERT-small | 66.32 | 66.37 |
| *RoBERT-base* | *70.89* | *71.61* |
| RoBERT-large | **72.48**| **72.11**|
### Moldavian vs. Romanian Dialect and Cross-dialect Topic identification
We report results on [VarDial 2019](https://sites.google.com/view/vardial2019/campaign) Moldavian vs. Romanian Cross-dialect Topic identification Challenge, as Macro-averaged F1 score (in %).
| Model | Dialect Classification | MD to RO | RO to MD |
|-------------------|:----------------------:|:--------:|:--------:|
| 2-CNN + SVM | 93.40 | 65.09 | 75.21 |
| Char+Word SVM | 96.20 | 69.08 | 81.93 |
| BiGRU | 93.30 | **70.10**| 80.30 |
| multilingual-BERT | 95.34 | 68.76 | 78.24 |
| XLM-R-base | 96.28 | 69.93 | 82.28 |
| BERT-base-ro | 96.20 | 69.93 | 78.79 |
| RoBERT-small | 95.67 | 69.01 | 80.40 |
| *RoBERT-base* | *97.39* | *68.30* | *81.09* |
| RoBERT-large | **97.78** | 69.91 | **83.65**|
### Diacritics Restoration
Challenge can be found [here](https://diacritics-challenge.speed.pub.ro/). We report results on the official test set, as accuracies in %.
| Model | word level | char level |
|-----------------------------|:----------:|:----------:|
| BiLSTM | 99.42 | - |
| CharCNN | 98.40 | 99.65 |
| CharCNN + multilingual-BERT | 99.72 | 99.94 |
| CharCNN + XLM-R-base | 99.76 | **99.95** |
| CharCNN + BERT-base-ro | **99.79** | **99.95** |
| CharCNN + RoBERT-small | 99.73 | 99.94 |
| *CharCNN + RoBERT-base* | *99.78* | **99.95** |
| CharCNN + RoBERT-large | 99.76 | **99.95** |
### BibTeX entry and citation info
```bibtex
@inproceedings{masala2020robert,
title={RoBERT--A Romanian BERT Model},
author={Masala, Mihai and Ruseti, Stefan and Dascalu, Mihai},
booktitle={Proceedings of the 28th International Conference on Computational Linguistics},
pages={6626--6637},
year={2020}
}
```
|
SMKim/final_project | SMKim | 2023-06-21T09:48:47Z | 61 | 0 | transformers | [
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-06-21T09:39:39Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: SMKim/final_project
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# SMKim/final_project
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.4170
- Validation Loss: 0.3828
- Train Accuracy: 0.8571
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 9425, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.4170 | 0.3828 | 0.8571 | 0 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.13.0
- Tokenizers 0.13.3
|
anhnd16/simple_lora | anhnd16 | 2023-06-21T09:38:52Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-06-21T09:38:42Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
sim11som11/t5_results1 | sim11som11 | 2023-06-21T09:35:59Z | 161 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-06-20T10:08:52Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: t5_results1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5_results1
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2797
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 57 | 0.2983 |
| No log | 2.0 | 114 | 0.2797 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
joncam14/ppo-PyramidsRND | joncam14 | 2023-06-21T09:34:26Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] | reinforcement-learning | 2023-06-21T09:34:22Z | ---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: joncam14/ppo-PyramidsRND
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Boristss/taxi_tutut | Boristss | 2023-06-21T09:29:52Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-06-21T09:29:48Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: taxi_tutut
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="Boristss/taxi_tutut", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
JANCARLOS/XALIC | JANCARLOS | 2023-06-21T09:27:30Z | 0 | 0 | adapter-transformers | [
"adapter-transformers",
"finance",
"fill-mask",
"license:openrail",
"region:us"
] | fill-mask | 2023-06-21T09:22:20Z | ---
license: openrail
metrics:
- bertscore
library_name: adapter-transformers
pipeline_tag: fill-mask
tags:
- finance
--- |
Boristss/q-FrozenLake-v1-8x8-Slippery | Boristss | 2023-06-21T09:26:14Z | 0 | 0 | null | [
"FrozenLake-v1-8x8",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-06-21T09:24:55Z | ---
tags:
- FrozenLake-v1-8x8
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-8x8-Slippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-8x8
type: FrozenLake-v1-8x8
metrics:
- type: mean_reward
value: 0.14 +/- 0.35
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="Boristss/q-FrozenLake-v1-8x8-Slippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
findnitai/t5-hinglish-translator | findnitai | 2023-06-21T09:22:21Z | 15 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"en",
"hig",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-06-21T08:17:42Z | ---
language:
- en
- hig
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: t5-hinglish-translator
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-hinglish-translator
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the [findnitai/english-to-hinglish](https://huggingface.co/datasets/findnitai/english-to-hinglish) dataset.
Sample use:
1. "Translate English to Hinglish : How is the weather?"
2. "Translate English to Hinglish : Turn the alarm off"
3. "Translate English to Hinglish : Set a timer for 5 minutes"
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
- Loss: 2.71
|
KPrashanth/ppo-LunarLander-v2 | KPrashanth | 2023-06-21T09:20:36Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-06-21T09:20:19Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 286.77 +/- 22.81
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
kavlata/bert-base-uncased-finetuned-swag | kavlata | 2023-06-21T09:17:14Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"multiple-choice",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | multiple-choice | 2023-06-21T09:15:04Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert-base-uncased-finetuned-swag
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-swag
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7193
- Accuracy: 0.1400
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 38 | 1.7806 | 0.1867 |
| No log | 2.0 | 76 | 1.7043 | 0.2467 |
| No log | 3.0 | 114 | 1.7193 | 0.1400 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
gegre/q-FrozenLake-v1-4x4-noSlippery | gegre | 2023-06-21T09:16:38Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-06-21T09:16:36Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="gegre/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
ellemac/ppo-Huggy | ellemac | 2023-06-21T09:15:00Z | 12 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2023-06-21T09:14:50Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: ellemac/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
gokuls/bert-base-emotion_48 | gokuls | 2023-06-21T09:02:56Z | 115 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-06-21T08:47:10Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
model-index:
- name: bert-base-emotion_48
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.8903
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-emotion_48
This model is a fine-tuned version of [gokuls/bert_base_48](https://huggingface.co/gokuls/bert_base_48) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3294
- Accuracy: 0.8903
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 33
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7185 | 1.0 | 250 | 0.3644 | 0.8735 |
| 0.2792 | 2.0 | 500 | 0.3189 | 0.887 |
| 0.1922 | 3.0 | 750 | 0.3294 | 0.893 |
| 0.1311 | 4.0 | 1000 | 0.3895 | 0.891 |
| 0.098 | 5.0 | 1250 | 0.3870 | 0.8905 |
| 0.0702 | 6.0 | 1500 | 0.4275 | 0.8895 |
| 0.05 | 7.0 | 1750 | 0.5372 | 0.886 |
| 0.0332 | 8.0 | 2000 | 0.5702 | 0.8905 |
| 0.023 | 9.0 | 2250 | 0.5759 | 0.8915 |
| 0.0176 | 10.0 | 2500 | 0.5959 | 0.891 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.13.0
- Tokenizers 0.13.3
|
j5ng/et5-formal-convertor | j5ng | 2023-06-21T09:02:42Z | 16 | 1 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"ko",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-06-21T08:39:24Z | ---
license: apache-2.0
language:
- ko
pipeline_tag: text2text-generation
---
# Korean Formal Converter Using Deep Learning
Formal (존댓말) and informal (반말) speech levels are specific to Korean; this model is a converter that turns informal Korean into formal Korean. <br>
*The formal-speech data we collected contained two styles, "haeyo-che" (해요체) and "hapsyo-che" (합쇼체), but this model converts everything into "haeyo-che".
|Hapsyo-che (합쇼체)|*Haeyo-che (해요체)|
|------|---|
|안녕하십니까.|안녕하세요.|
|좋은 아침입니다.|좋은 아침이에요.|
|바쁘시지 않았으면 좋겠습니다.|바쁘시지 않았으면 좋겠어요.|
## Background
- We previously trained a classifier that separates formal from informal Korean (https://github.com/jongmin-oh/korean-formal-classifier).<br>
We intended to split data by speech style with that classifier, but formal speech was relatively under-represented, so this converter was built to turn informal sentences into formal ones and grow the formal-speech portion of the data.
## Korean Formal Speech Converter
- The converter performs a text2text generation task based on the T5 architecture, converting informal Korean into formal Korean.
- To use it right away, refer to the example code below and download the Hugging Face model ('j5ng/et5-formal-convertor').
## Base on PLM model(ET5)
- ETRI(https://aiopen.etri.re.kr/et5Model)
## Base on Dataset
- AI허브(https://www.aihub.or.kr/) : 한국어 어체 변환 코퍼스
1. KETI 일상오피스 대화 1,254 문장
2. 수동태깅 병렬데이터
- 스마일게이트 말투 데이터 셋(korean SmileStyle Dataset)
### Preprocessing
1. Separate informal/formal data (keep only the "haeyo-che" style)
   - From the Smilegate data, use only the (['formal','informal']) columns
   - From the manually tagged parallel data, use only the ["*.ban", "*.yo"] txt files
   - From the KETI daily office data, use only the (["반말","해요체"]) columns
2. Merge the three datasets
3. Remove periods (.) and commas (,)
4. Deduplicate the informal column: 1,632 duplicate rows removed (a rough sketch of steps 3-4 follows)
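
The punctuation-removal and deduplication steps above can be sketched roughly as follows. This is illustrative only; the column names and toy rows are assumptions, not the original preprocessing script.

```python
import pandas as pd

# Toy stand-in for the merged corpus built from the three source datasets.
df = pd.DataFrame({
    "informal": ["응, 고마워.", "응, 고마워.", "나도 그 책 읽었어."],
    "formal":   ["네, 감사해요.", "네, 감사해요.", "저도 그 책 읽었습니다."],
})

# Step 3: remove periods and commas.
for col in ["informal", "formal"]:
    df[col] = df[col].str.replace(r"[.,]", "", regex=True)

# Step 4: drop duplicate informal sentences (1,632 duplicates were removed in the real corpus).
df = df.drop_duplicates(subset="informal").reset_index(drop=True)
print(df)
```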
### Final training data examples
|informal|formal|
|------|---|
|응 고마워|네 감사해요|
|나도 그 책 읽었어 굉장히 웃긴 책이였어|저도 그 책 읽었습니다 굉장히 웃긴 책이였어요|
|미세먼지가 많은 날이야|미세먼지가 많은 날이네요|
|괜찮겠어?|괜찮으실까요?|
|아니야 회의가 잠시 뒤에 있어 준비해줘|아니에요 회의가 잠시 뒤에 있어요 준비해주세요|
#### Total: 14,992 pairs
***
## How to use
```python
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer
# Load the T5 model
model = T5ForConditionalGeneration.from_pretrained("j5ng/et5-formal-convertor")
tokenizer = T5Tokenizer.from_pretrained("j5ng/et5-formal-convertor")
device = "cuda:0" if torch.cuda.is_available() else "cpu"
# device = "mps:0" if torch.backends.mps.is_available() else "cpu" # for Mac M1 (Apple Silicon)
model = model.to(device)
# Example input sentence
input_text = "나 진짜 화났어 지금"
# Encode the input sentence
input_encoding = tokenizer("존댓말로 바꿔주세요: " + input_text, return_tensors="pt")
input_ids = input_encoding.input_ids.to(device)
attention_mask = input_encoding.attention_mask.to(device)
# Generate the T5 model output
output_encoding = model.generate(
input_ids=input_ids,
attention_mask=attention_mask,
max_length=128,
num_beams=5,
early_stopping=True,
)
# Decode the output sentence
output_text = tokenizer.decode(output_encoding[0], skip_special_tokens=True)
# Print the result
print(output_text) # 저 진짜 화났습니다 지금.
```
***
## With Transformer Pipeline
```python
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer, pipeline
model = T5ForConditionalGeneration.from_pretrained('j5ng/et5-formal-convertor')
tokenizer = T5Tokenizer.from_pretrained('j5ng/et5-formal-convertor')
formality_converter = pipeline(
"text2text-generation",
model=model,
tokenizer=tokenizer,
device=0 if torch.cuda.is_available() else -1,
framework="pt",
)
input_text = "널 가질 수 있을거라 생각했어"
output_text = formality_converter("존댓말로 바꿔주세요: " + input_text,
max_length=128,
num_beams=5,
early_stopping=True)[0]['generated_text']
print(output_text) # 당신을 가질 수 있을거라 생각했습니다.
```
## Thanks to
The training of this converter was supported with GPU resources provided by the AI Industry Convergence Agency (AICA).
|
IDEA-CCNL/Ziya-LLaMA-13B-Pretrain-v1 | IDEA-CCNL | 2023-06-21T09:01:46Z | 1,495 | 19 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"zh",
"arxiv:2210.08590",
"license:gpl-3.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-06-01T02:43:52Z | ---
license: gpl-3.0
language:
- en
- zh
inference: false
---
# Ziya-LLaMA-13B-Pretrain-v1
- Main Page:[Fengshenbang](https://fengshenbang-lm.com/)
- Github: [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM)
(Due to the license restrictions on the LLaMA weights, we cannot release the full model weights directly; users need to follow the [usage instructions](#-使用-usage-) to merge them.)
# Ziya (姜子牙) Model Series
- [Ziya-LLaMA-13B-v1](https://huggingface.co/IDEA-CCNL/Ziya-LLaMA-13B-v1)
- [Ziya-LLaMA-7B-Reward](https://huggingface.co/IDEA-CCNL/Ziya-LLaMA-7B-Reward)
- [Ziya-LLaMA-13B-Pretrain-v1](https://huggingface.co/IDEA-CCNL/Ziya-LLaMA-13B-Pretrain-v1)
- [Ziya-BLIP2-14B-Visual-v1](https://huggingface.co/IDEA-CCNL/Ziya-BLIP2-14B-Visual-v1)
## 简介 Brief Introduction
Ziya-LLaMA-13B-Pretrain-v1 是基于LLaMa的130亿参数大规模预训练模型,针对中文分词优化,并完成了中英文 110B tokens 的增量预训练,进一步提升了中文生成和理解能力。目前姜子牙通用大模型 [Ziya-LLaMA-13B-v1](https://huggingface.co/IDEA-CCNL/Ziya-LLaMA-13B-v1) 在本模型上,进一步完成了多任务有监督微调和人类反馈学习阶段的训练过程,具备翻译,编程,文本分类,信息抽取,摘要,文案生成,常识问答和数学计算等能力。
**用户须知**:为了遵循 Meta 发布的 LLaMA 模型许可,本模型发布的是训练前后的权重增量,最终模型可方便地通过脚本获得(参考 Usage 中的步骤)。
Ziya-LLaMA-13B-Pretrain-v1 is a large-scale pre-trained model based on LLaMA with 13 billion parameters. We optimized the LLaMA tokenizer for Chinese and incrementally trained on 110 billion tokens of data on top of the LLaMA-13B model, which significantly improved its Chinese understanding and generation ability. Based on Ziya-LLaMA-13B-Pretrain-v1, the [Ziya-LLaMA-13B-v1](https://huggingface.co/IDEA-CCNL/Ziya-LLaMA-13B-v1) model is further trained in two stages: multi-task supervised fine-tuning (SFT) and human feedback learning (RM, PPO). Ziya-LLaMA-13B-v1 can perform tasks such as translation, programming, text classification, information extraction, summarization, copywriting, common-sense Q&A, and mathematical calculation.
**Note for users**: To comply with the LLaMA license released by Meta, we only release the incremental weights after continual pretraining. The final Ziya-LLaMA-13B-Pretrain-v1 model can easily be obtained via the script (refer to Usage).
## 模型分类 Model Taxonomy
| 需求 Demand | 任务 Task | 系列 Series | 模型 Model | 参数 Parameter | 额外 Extra |
| :----: | :----: | :----: | :----: | :----: | :----: |
| 通用 General | AGI模型 | 姜子牙 Ziya | LLaMA | 13B | English&Chinese |
## 模型信息 Model Information
### 继续预训练 Continual Pretraining
原始数据包含英文和中文,其中英文数据来自openwebtext、Books、Wikipedia和Code,中文数据来自清洗后的悟道数据集、自建的中文数据集。在对原始数据进行去重、模型打分、数据分桶、规则过滤、敏感主题过滤和数据评估后,最终得到125B tokens的有效数据。
为了解决LLaMA原生分词对中文编解码效率低下的问题,我们在LLaMA词表的基础上增加了7k+个常见中文字,通过和LLaMA原生的词表去重,最终得到一个39410大小的词表,并通过复用Transformers里LlamaTokenizer来实现了这一效果。
在增量训练过程中,我们使用了160张40GB的A100,采用2.6M tokens的训练集样本数量和FP 16的混合精度,吞吐量达到118 TFLOP per GPU per second。因此我们能够在8天的时间里在原生的LLaMA-13B模型基础上,增量训练110B tokens的数据。据我们所知,这也是至今为止LLaMA-13B上最大规模增量训练。
训练期间,虽然遇到了机器宕机、底层框架bug、loss spike等各种问题,但我们通过快速调整,保证了增量训练的稳定性。我们也放出训练过程的loss曲线,让大家了解可能出现的问题。
The original data contains both English and Chinese, with English data from openwebtext, Books, Wikipedia, and Code, and Chinese data from the cleaned Wudao dataset and self-built Chinese dataset. After deduplication, model scoring, data bucketing, rule filtering, sensitive topic filtering, and data evaluation, we finally obtained 125 billion tokens of data.
To address the issue of low efficiency in Chinese encoding and decoding caused by the tokenizer of LLaMa, we added 8,000 commonly used Chinese characters to the LLaMa SentencePiece vocabulary. Deduplicating with the original LLaMa vocabulary, we finally obtained a vocabulary of size 39,410. We achieved this by reusing the LlamaTokenizer in Transformers.
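
The exact vocabulary-extension script is not released here, but the general idea of adding common Chinese characters to the LLaMA tokenizer and resizing the embedding matrix can be sketched with the Transformers API. The paths and example characters below are placeholders, not the authors' actual procedure.

```python
from transformers import LlamaTokenizer, LlamaForCausalLM

# Illustrative sketch only.
tokenizer = LlamaTokenizer.from_pretrained("path/to/llama-13b-hf")   # placeholder path
new_chars = ["的", "是", "在"]                                         # in practice, several thousand characters
num_added = tokenizer.add_tokens(new_chars)

model = LlamaForCausalLM.from_pretrained("path/to/llama-13b-hf")
model.resize_token_embeddings(len(tokenizer))                         # grow the embedding matrix for the new tokens
print(f"added {num_added} tokens, new vocab size {len(tokenizer)}")
```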
During the incremental training process, we used 160 A100 GPUs with 40GB of memory each, a training sample size of 2.6M tokens, and FP16 mixed precision. The throughput reached 118 TFLOPS per GPU. As a result, we were able to incrementally train on 110 billion tokens of data on top of the LLaMA-13B model in just 8 days. As far as we know, this is the largest incremental training run on LLaMA-13B so far.
Throughout the training process, we encountered various issues such as machine crashes, underlying framework bugs, and loss spikes. However, we ensured the stability of the incremental training by making rapid adjustments. We have also released the loss curve during the training process to help everyone understand the potential issues that may arise.
<img src="https://huggingface.co/IDEA-CCNL/Ziya-LLaMA-13B-Pretrain-v1/resolve/main/loss.png" width=1000 height=600>
### 效果评估 Performance
以下是 Ziya-LLaMA-13B-Pertrain-v1 和继续训练前的LLaMA 模型在英文公开评测 [HeLM](https://crfm.stanford.edu/helm/latest/) 和中文多项选择评测集上的评估效果对比。
Here are comparisons of the Ziya-LLaMA-13B-Pretrain-v1 model and the LLaMA model before continual pre-training, evaluated on the English benchmark (HeLM), and our Chinese multiple-choice evaluation datasets.
<img src="https://huggingface.co/IDEA-CCNL/Ziya-LLaMA-13B-Pretrain-v1/resolve/main/ziya_en_eval.png" width=2542 height=1045>
| Model | Meanwin_rate | MMLU | BoolQ | NarrativeQA | NaturalQuestion(closed-book) | NaturalQuestion(open-book) | QuAC | TruthfulQA | IMDB |
| -------------------------- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- |
| LLaMA-13B | 0.500 | 0.424 | 0.718 | 0.440 | 0.349 | 0.591 | 0.318 | 0.326 | 0.487 |
| Ziya-LLaMA-13B-Pretrain-v1 | 0.650 | 0.433 | 0.753 | 0.445 | 0.348 | 0.528 | 0.335 | 0.249 | 0.497 |
<img src="https://huggingface.co/IDEA-CCNL/Ziya-LLaMA-13B-Pretrain-v1/resolve/main/ziya_zh_eval.png" width=2340 height=1523>
| Model | in-context | c3 | Common sense | Chinese | Math | English | Physics | Chemistry | Biology | History | Politics | Geography |
|-------------------------|------------|--------|--------|--------|--------|--------|--------|--------|--------|--------|--------|--------|
| LLaMA-13B | 0-shot | 0.4817 | 0.3088 | 0.2674 | 0.2882 | 0.3399 | 0.2581 | 0.2478 | 0.2271 | 0.3380 | 0.3275 | 0.296 |
| Ziya-LLaMA-13B-Pretrain-v1 | 0-shot | 0.5354 | 0.3373 | 0.2925 | 0.3059 | 0.3428 | 0.2903 | 0.2655 | 0.3215 | 0.4190 | 0.4123 | 0.4425 |
| LLaMA-13B | 5-shot | 0.5314 | 0.3586 | 0.2813 | 0.2912 | 0.4476 | 0.2939 | 0.2301 | 0.2330 | 0.3268 | 0.3187 | 0.3103 |
| Ziya-LLaMA-13B-Pretrain-v1 | 5-shot | 0.6037 | 0.4330 | 0.2802 | 0.2912 | 0.4363 | 0.2975 | 0.2802 | 0.3422 | 0.4358 | 0.4357 | 0.4540 |
## <span id="jump"> 使用 Usage </span>
Due to the license restrictions on the LLaMA weights, this model cannot be used for commercial purposes; please strictly follow LLaMA's usage policy. Because of those restrictions, we cannot release the full model weights directly. We therefore build on the [FastChat open-source tool](https://github.com/lm-sys/FastChat/blob/main/fastchat/model/apply_delta.py), with further optimizations, and release the delta between the Ziya-LLaMA-13B-v1 weights and the original LLaMA weights. Users can follow the steps below to obtain the full Ziya-LLaMA-13B-v1 weights:
Step 1:获取[LLaMA](https://docs.google.com/forms/d/e/1FAIpQLSfqNECQnMkycAp2jP4Z9TFX0cGR4uf7b_fBxjY_OjhJILlKGA/viewform)权重并转成Hugging Face Transformers模型格式,可参考转换[脚本](https://github.com/huggingface/transformers/blob/main/src/transformers/models/llama/convert_llama_weights_to_hf.py)(若已经有huggingface权重则跳过)
```
python src/transformers/models/llama/convert_llama_weights_to_hf.py \
--input_dir /path/to/downloaded/llama/weights --model_size 13B --output_dir /output/path
```
Step 2:下载Ziya-LLaMA-13B-v1的delta权重以及step 1中转换好的原始LLaMA权重,使用如下脚本转换:https://github.com/IDEA-CCNL/Fengshenbang-LM/blob/main/fengshen/utils/apply_delta.py.
```
python3 -m apply_delta --base ~/model_weights/llama-13b --target ~/model_weights/Ziya-LLaMA-13B --delta ~/model_weights/Ziya-LLaMA-13B-v1
```
Step 3: 加载step 2得到的模型推理
```python3
from transformers import AutoTokenizer
from transformers import LlamaForCausalLM
import torch
device = torch.device("cuda")
query="帮我写一份去西安的旅游计划"
model = LlamaForCausalLM.from_pretrained(ckpt, torch_dtype=torch.float16, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(ckpt)
inputs = query.strip()
input_ids = tokenizer(inputs, return_tensors="pt").input_ids.to(device)
generate_ids = model.generate(
input_ids,
max_new_tokens=1024,
do_sample = True,
top_p = 0.85,
temperature = 1.0,
repetition_penalty=1.,
eos_token_id=2,
bos_token_id=1,
pad_token_id=0)
output = tokenizer.batch_decode(generate_ids)[0]
print(output)
```
Step 1: Obtain the [LLaMA](https://huggingface.co/docs/transformers/main/en/model_doc/llama#overview) weights and convert them into the Hugging Face Transformers format. You can refer to the [script](https://github.com/huggingface/transformers/blob/main/src/transformers/models/llama/convert_llama_weights_to_hf.py) (skip this step if you already have the Hugging Face weights).
```
python src/transformers/models/llama/convert_llama_weights_to_hf.py \
--input_dir /path/to/downloaded/llama/weights --model_size 13B --output_dir /output/path
```
Step 2: Download the delta weights for Ziya-LLaMA-13B-v1 and the pre-converted original LLaMA weights from step 1. Use the following script for conversion: https://github.com/IDEA-CCNL/Fengshenbang-LM/blob/main/fengshen/utils/apply_delta.py
```
python3 -m apply_delta --base ~/model_weights/llama-13b --target ~/model_weights/Ziya-LLaMA-13B --delta ~/model_weights/Ziya-LLaMA-13B-v1 # delta downloaded from Hugging Face
```
Step 3: Load the model obtained in Step 2 for inference.
## 微调示例 Finetune Example
Refer to [ziya_finetune](https://github.com/IDEA-CCNL/Fengshenbang-LM/tree/main/fengshen/examples/ziya_llama)
## 推理量化示例 Inference & Quantization Example
Refer to [ziya_inference](https://github.com/IDEA-CCNL/Fengshenbang-LM/tree/main/fengshen/examples/ziya_inference)
## 引用 Citation
如果您在您的工作中使用了我们的模型,可以引用我们的[论文](https://arxiv.org/abs/2210.08590):
If you are using the resource for your work, please cite our [paper](https://arxiv.org/abs/2210.08590):
```text
@article{fengshenbang,
author = {Jiaxing Zhang and Ruyi Gan and Junjie Wang and Yuxiang Zhang and Lin Zhang and Ping Yang and Xinyu Gao and Ziwei Wu and Xiaoqun Dong and Junqing He and Jianheng Zhuo and Qi Yang and Yongfeng Huang and Xiayu Li and Yanghan Wu and Junyu Lu and Xinyu Zhu and Weifeng Chen and Ting Han and Kunhao Pan and Rui Wang and Hao Wang and Xiaojun Wu and Zhongshen Zeng and Chongpei Chen},
title = {Fengshenbang 1.0: Being the Foundation of Chinese Cognitive Intelligence},
journal = {CoRR},
volume = {abs/2209.02970},
year = {2022}
}
```
You can also cite our [website](https://github.com/IDEA-CCNL/Fengshenbang-LM/):
欢迎引用我们的[网站](https://github.com/IDEA-CCNL/Fengshenbang-LM/):
```text
@misc{Fengshenbang-LM,
title={Fengshenbang-LM},
author={IDEA-CCNL},
year={2021},
howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}},
}
```
|
IDEA-CCNL/Ziya-LLaMA-13B-v1.1 | IDEA-CCNL | 2023-06-21T09:00:14Z | 18 | 53 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"zh",
"arxiv:2210.08590",
"license:gpl-3.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-06-07T02:32:30Z | ---
license: gpl-3.0
language:
- en
- zh
inference: false
---
# Ziya-LLaMA-13B-v1.1
- Main Page:[Fengshenbang](https://fengshenbang-lm.com/)
- Github: [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM)
(Due to the license restrictions on the LLaMA weights, we cannot release the full model weights directly; users need to follow the [usage instructions](https://huggingface.co/IDEA-CCNL/Ziya-LLaMA-13B-v1) to merge them.)
# Ziya (姜子牙) Model Series
- [Ziya-LLaMA-13B-v1.1](https://huggingface.co/IDEA-CCNL/Ziya-LLaMA-13B-v1.1)
- [Ziya-LLaMA-13B-v1](https://huggingface.co/IDEA-CCNL/Ziya-LLaMA-13B-v1)
- [Ziya-LLaMA-7B-Reward](https://huggingface.co/IDEA-CCNL/Ziya-LLaMA-7B-Reward)
- [Ziya-LLaMA-13B-Pretrain-v1](https://huggingface.co/IDEA-CCNL/Ziya-LLaMA-13B-Pretrain-v1)
- [Ziya-BLIP2-14B-Visual-v1](https://huggingface.co/IDEA-CCNL/Ziya-BLIP2-14B-Visual-v1)
## 简介 Brief Introduction
我们对Ziya-LLaMA-13B-v1模型进行继续优化,推出开源版本Ziya-LLaMA-13B-v1.1。通过调整微调数据的比例和采用更优的强化学习策略,本版本在问答准确性、数学能力以及安全性等方面得到了提升,详细能力分析如下图所示。
We have further optimized the Ziya-LLaMA-13B-v1 model and released the open-source version Ziya-LLaMA-13B-v1.1. By adjusting the proportion of fine-tuning data and adopting a better reinforcement learning strategy, this version has achieved improvements in question-answering accuracy, mathematical ability, and safety, as shown in the following figure in detail.
<img src="https://huggingface.co/IDEA-CCNL/Ziya-LLaMA-13B-v1.1/resolve/main/pk.png" width=1000 height=600>
## 软件依赖 Dependencies
```
pip install torch==1.12.1 tokenizers==0.13.3 git+https://github.com/huggingface/transformers
```
## <span id="jump"> 使用 Usage </span>
请参考[Ziya-LLaMA-13B-v1](https://huggingface.co/IDEA-CCNL/Ziya-LLaMA-13B-v1)的使用说明。
注意:合并后默认会生成3个.bin文件,md5值依次为**59194d10b1553d66131d8717c9ef03d6、cc14eebe2408ddfe06b727b4a76e86bb、4a8495d64aa06aee96b5a1cc8cc55fa7**。
Please refer to the usage for [Ziya-LLaMA-13B-v1](https://huggingface.co/IDEA-CCNL/Ziya-LLaMA-13B-v1).
Note: After merging, three .bin files will be generated by default, with MD5 values of **59194d10b1553d66131d8717c9ef03d6, cc14eebe2408ddfe06b727b4a76e86bb, and 4a8495d64aa06aee96b5a1cc8cc55fa7**, respectively.
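
A small sketch for checking the merged shards against these MD5 values; the shard filenames below are assumptions, and the hashes are matched in the order listed above.

```python
import hashlib

expected = {
    "pytorch_model-00001-of-00003.bin": "59194d10b1553d66131d8717c9ef03d6",  # filenames are placeholders
    "pytorch_model-00002-of-00003.bin": "cc14eebe2408ddfe06b727b4a76e86bb",
    "pytorch_model-00003-of-00003.bin": "4a8495d64aa06aee96b5a1cc8cc55fa7",
}
for name, md5 in expected.items():
    h = hashlib.md5()
    with open(name, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # hash in 1 MiB chunks
            h.update(chunk)
    print(name, "OK" if h.hexdigest() == md5 else f"MISMATCH ({h.hexdigest()})")
```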
## 微调示例 Finetune Example
Refer to [ziya_finetune](https://github.com/IDEA-CCNL/Fengshenbang-LM/tree/main/fengshen/examples/ziya_llama)
## 推理量化示例 Inference & Quantization Example
Refer to [ziya_inference](https://github.com/IDEA-CCNL/Fengshenbang-LM/tree/main/fengshen/examples/ziya_inference)
## 引用 Citation
如果您在您的工作中使用了我们的模型,可以引用我们的[论文](https://arxiv.org/abs/2210.08590):
If you are using the resource for your work, please cite our [paper](https://arxiv.org/abs/2210.08590):
```text
@article{fengshenbang,
author = {Jiaxing Zhang and Ruyi Gan and Junjie Wang and Yuxiang Zhang and Lin Zhang and Ping Yang and Xinyu Gao and Ziwei Wu and Xiaoqun Dong and Junqing He and Jianheng Zhuo and Qi Yang and Yongfeng Huang and Xiayu Li and Yanghan Wu and Junyu Lu and Xinyu Zhu and Weifeng Chen and Ting Han and Kunhao Pan and Rui Wang and Hao Wang and Xiaojun Wu and Zhongshen Zeng and Chongpei Chen},
title = {Fengshenbang 1.0: Being the Foundation of Chinese Cognitive Intelligence},
journal = {CoRR},
volume = {abs/2209.02970},
year = {2022}
}
```
You can also cite our [website](https://github.com/IDEA-CCNL/Fengshenbang-LM/):
欢迎引用我们的[网站](https://github.com/IDEA-CCNL/Fengshenbang-LM/):
```text
@misc{Fengshenbang-LM,
title={Fengshenbang-LM},
author={IDEA-CCNL},
year={2021},
howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}},
}
``` |
gokuls/bert-base-emotion_24 | gokuls | 2023-06-21T08:57:27Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-06-21T08:33:25Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
model-index:
- name: bert-base-emotion_24
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.8611
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-emotion_24
This model is a fine-tuned version of [gokuls/bert_base_24](https://huggingface.co/gokuls/bert_base_24) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5612
- Accuracy: 0.8611
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 33
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.736 | 1.0 | 250 | 0.4242 | 0.8565 |
| 0.3013 | 2.0 | 500 | 0.3314 | 0.8845 |
| 0.2014 | 3.0 | 750 | 0.3442 | 0.8905 |
| 0.1392 | 4.0 | 1000 | 0.3276 | 0.8915 |
| 0.1072 | 5.0 | 1250 | 0.3833 | 0.89 |
| 0.0783 | 6.0 | 1500 | 0.4205 | 0.8895 |
| 0.0559 | 7.0 | 1750 | 0.5287 | 0.8865 |
| 0.0378 | 8.0 | 2000 | 0.5459 | 0.8865 |
| 0.027 | 9.0 | 2250 | 0.5612 | 0.8925 |
| 0.02 | 10.0 | 2500 | 0.5601 | 0.8915 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.13.0
- Tokenizers 0.13.3
|
msy127/distilbert-base-uncased-finetuned-emotion-th-2 | msy127 | 2023-06-21T08:54:47Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-06-21T07:10:08Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion-th-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion-th-2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2177
- Accuracy: 0.921
- F1: 0.9213
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8194 | 1.0 | 250 | 0.3085 | 0.909 | 0.9068 |
| 0.2471 | 2.0 | 500 | 0.2177 | 0.921 | 0.9213 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1
- Datasets 2.13.0
- Tokenizers 0.13.3
|
joncam14/ppo-SnowballTarget | joncam14 | 2023-06-21T08:53:13Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] | reinforcement-learning | 2023-06-21T08:53:11Z | ---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: joncam14/ppo-SnowballTarget
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
yuvalkirstain/sd15_dog | yuvalkirstain | 2023-06-21T08:49:03Z | 3 | 0 | diffusers | [
"diffusers",
"if",
"if-diffusers",
"text-to-image",
"dreambooth",
"base_model:runwayml/stable-diffusion-inpainting",
"base_model:finetune:runwayml/stable-diffusion-inpainting",
"license:creativeml-openrail-m",
"diffusers:StableDiffusionInpaintPipeline",
"region:us"
] | text-to-image | 2023-06-20T05:46:20Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-inpainting
instance_prompt: a photo of sks dog
tags:
- if
- if-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - yuvalkirstain/sd15_dog
This is a dreambooth model derived from runwayml/stable-diffusion-inpainting. The weights were trained on a photo of sks dog using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.










DreamBooth for the text encoder was enabled: True.
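
A minimal inference sketch, assuming the repository loads with the inpainting pipeline of its base model; the input image and mask paths are placeholders.

```python
import torch
from diffusers import StableDiffusionInpaintPipeline
from PIL import Image

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "yuvalkirstain/sd15_dog", torch_dtype=torch.float16
).to("cuda")

init_image = Image.open("dog_scene.png").convert("RGB")  # placeholder input image
mask_image = Image.open("dog_mask.png").convert("RGB")   # white pixels mark the region to repaint

result = pipe(prompt="a photo of sks dog", image=init_image, mask_image=mask_image).images[0]
result.save("sks_dog_inpainted.png")
```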
|
wordcab/whisper-large-int8-he | wordcab | 2023-06-21T08:47:20Z | 1 | 0 | transformers | [
"transformers",
"he",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2023-06-20T15:35:55Z | ---
license: apache-2.0
language:
- he
---
This is a ctranslate2 `int8` version of the [Shiry/whisper-large-v2-he](https://huggingface.co/Shiry/whisper-large-v2-he) model. |
akira225/deberta-v3-base-ECE | akira225 | 2023-06-21T08:46:41Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"deberta-v2",
"text-classification",
"deberta-v3-base",
"deberta-v3",
"deberta",
"token-classification",
"emotion",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2023-06-21T02:18:24Z | ---
license: apache-2.0
language: en
tags:
- deberta-v3-base
- deberta-v3
- deberta
- token-classification
- emotion
library_name: transformers
pipeline_tag: token-classification
---
# Model Card for DeBERTa-v3-base-ECE
This is [DeBERTa-v3](https://huggingface.co/sileod/deberta-v3-base-tasksource-nli) fine-tuned for Emotion Cause Extraction (ECE) task.
Given an input text, i.e. a sequence of tokens describing a situation with emotional coloring, the task is to determine which subset of tokens justifies the speaker's emotional state. Formally, it is convenient to view the problem as binary token classification, where a label of one means that the corresponding token belongs to the desired subset.
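
A minimal inference sketch using the standard token-classification pipeline follows; the label names produced depend on this checkpoint's config and are not documented in this card.

```python
from transformers import pipeline

ece = pipeline(
    "token-classification",
    model="akira225/deberta-v3-base-ECE",
    aggregation_strategy="simple",  # merge word pieces into spans
)

utterance = "I finally passed my driving test after failing it twice."
for span in ece(utterance):
    print(span["word"], span["entity_group"], round(span["score"], 3))
```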
## Training
The code used to train this model is available on my [GitHub](https://github.com/akira225/emotion-cause-detection).
## Evaluation
It achieves the following results on [EmoCause](https://github.com/skywalker023/focused-empathy) and [EmpatheticDialogues](https://github.com/facebookresearch/EmpatheticDialogues):
| Accuracy | Top-1 Recall | Top-3 Recall | Top-5 Recall |
| ------------- | ------------- | ------------- | ------------- |
| 0.59 | 0.249 | 0.623 | 0.806 |
|
trustvare/trustvare-email-duplicate-remover-tool | trustvare | 2023-06-21T08:46:23Z | 0 | 0 | null | [
"region:us"
] | null | 2023-06-21T07:59:14Z | A robust application called TrustVare Email Duplicate Remover is made to effectively remove duplicate emails from your mailbox. It offers a full-featured approach to clearing out the clutter in your email database and streamlining your communication procedures. This software is appropriate for both individual users and companies of all sizes because to its user-friendly interface and sophisticated algorithms. To find and delete duplicate emails, the tool scans your whole email account or a selected folder. parameters,analyze to find identical or similar emails, it uses advanced algorithms to analyse different parameters including the sender, topic, date, and content. The precision of TrustVare Email Duplicate Remover's duplicate detection reduces the chance of removing crucial or unique communications. It's simple to erase duplicate files using this utility without losing any data. Users do not require the installation of additional third-party tools in order to eliminate duplicate emails from the file. This application may be used to delete duplicate emails by both technical and non-technical users. With this advance tool, Users may eliminate many kinds of duplicate email, including OST, PST, MBOX, MSG, EML, EMLX, and more. Use the app's free sample version to learn more about its features and capabilities.
Read More About the Software:- https://www.trustvare.com/duplicate-remover/ |
wordcab/whisper-large-int8-fp16-he | wordcab | 2023-06-21T08:44:20Z | 2 | 0 | transformers | [
"transformers",
"he",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2023-06-20T15:59:26Z | ---
license: apache-2.0
language:
- he
---
This is a ctranslate2 `int8_float16` version of the [Shiry/whisper-large-v2-he](https://huggingface.co/Shiry/whisper-large-v2-he) model. |
piotrtrochim/my_awesome_qa_model | piotrtrochim | 2023-06-21T08:43:48Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2023-06-21T08:38:20Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: my_awesome_qa_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_qa_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7193
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 250 | 2.4635 |
| 2.7755 | 2.0 | 500 | 1.8102 |
| 2.7755 | 3.0 | 750 | 1.7193 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
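
A minimal usage sketch with the question-answering pipeline; the question and context below are placeholders.

```python
from transformers import pipeline

qa = pipeline("question-answering", model="piotrtrochim/my_awesome_qa_model")

result = qa(
    question="What dataset was the model fine-tuned on?",
    context="This model is a fine-tuned version of distilbert-base-uncased on the SQuAD dataset.",
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```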
|
wordcab/whisper-large-fp16-he | wordcab | 2023-06-21T08:43:07Z | 1 | 0 | transformers | [
"transformers",
"he",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2023-06-20T15:22:14Z | ---
license: apache-2.0
language:
- he
---
This is a ctranslate2 `float16` version of the [Shiry/whisper-large-v2-he](https://huggingface.co/Shiry/whisper-large-v2-he) model. |
nolanaatama/frddmrcry19861988r750pchsrvcv2bwl | nolanaatama | 2023-06-21T08:36:47Z | 0 | 1 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-21T08:12:48Z | ---
license: creativeml-openrail-m
---
|
gokuls/add-bert-emotion_24 | gokuls | 2023-06-21T08:30:02Z | 48 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-06-21T07:38:08Z | ---
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
model-index:
- name: add-bert-emotion_24
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.359
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# add-bert-emotion_24
This model is a fine-tuned version of [gokuls/add_bert_12_layer_model_complete_training_new](https://huggingface.co/gokuls/add_bert_12_layer_model_complete_training_new) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5744
- Accuracy: 0.8053
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 33
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.6013 | 1.0 | 250 | 1.5840 | 0.3435 |
| 1.5884 | 2.0 | 500 | 1.5993 | 0.275 |
| 1.5783 | 3.0 | 750 | 1.5744 | 0.359 |
| 1.5814 | 4.0 | 1000 | 1.5805 | 0.3345 |
| 1.5804 | 5.0 | 1250 | 1.5820 | 0.352 |
| 1.5762 | 6.0 | 1500 | 1.5846 | 0.352 |
| 1.5781 | 7.0 | 1750 | 1.5829 | 0.352 |
| 1.5759 | 8.0 | 2000 | 1.5813 | 0.352 |
| 1.5741 | 9.0 | 2250 | 1.5835 | 0.33 |
| 1.573 | 10.0 | 2500 | 1.5795 | 0.334 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.13.0
- Tokenizers 0.13.3
|
wytrnyte/ppo-Huggy | wytrnyte | 2023-06-21T08:07:58Z | 18 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2023-06-21T08:07:52Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: wytrnyte/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
sxandie/san_BERT_newData-oldData-combo_20may | sxandie | 2023-06-21T07:43:17Z | 61 | 0 | transformers | [
"transformers",
"tf",
"bert",
"token-classification",
"generated_from_keras_callback",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2023-06-21T02:52:26Z | ---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: sxandie/san_BERT1_newData-oldData
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# sxandie/san_BERT1_newData-oldData
This model is a fine-tuned version of [deepset/gbert-base](https://huggingface.co/deepset/gbert-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0885
- Validation Loss: 0.1540
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a rough reconstruction in code follows the list):
- optimizer: {'inner_optimizer': {'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 35640, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000}
- training_precision: mixed_float16
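
The serialized optimizer settings above correspond roughly to the sketch below. This is not the original training script, and the warmup step count is an assumption since it is not listed.

```python
import tensorflow as tf
from transformers import create_optimizer

tf.keras.mixed_precision.set_global_policy("mixed_float16")

optimizer, lr_schedule = create_optimizer(
    init_lr=2e-05,
    num_train_steps=35640,
    num_warmup_steps=0,        # assumption: no warmup is listed in the config above
    weight_decay_rate=0.01,
)
```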
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.2797 | 0.1903 | 0 |
| 0.1599 | 0.1649 | 1 |
| 0.1224 | 0.1574 | 2 |
| 0.1009 | 0.1533 | 3 |
| 0.0885 | 0.1540 | 4 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.2.2
- Tokenizers 0.13.3
|
HasinMDG/RXsent_Deberta_v0 | HasinMDG | 2023-06-21T07:42:19Z | 3 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"deberta-v2",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] | text-classification | 2023-06-21T07:42:00Z | ---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# HasinMDG/RXsent_Deberta_v0
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("HasinMDG/RXsent_Deberta_v0")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
jisung289/qlora-koalpaca-polyglot-12.8b-50step | jisung289 | 2023-06-21T07:34:12Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-06-21T05:53:52Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training (an equivalent `BitsAndBytesConfig` is sketched after the list):
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
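
The flags above correspond roughly to the following `BitsAndBytesConfig`. This is a sketch rather than the original training script, and the base model id is left as a placeholder.

```python
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
# e.g. AutoModelForCausalLM.from_pretrained("<base-model-id>", quantization_config=bnb_config)
```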
### Framework versions
- PEFT 0.4.0.dev0
|
gokuls/hbertv2-emotion_48_KD_w_in | gokuls | 2023-06-21T07:34:06Z | 47 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-06-21T07:20:59Z | ---
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
model-index:
- name: hbertv2-emotion_48_KD_w_in
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.8775
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hbertv2-emotion_48_KD_w_in
This model is a fine-tuned version of [gokuls/bert_12_layer_model_v2_complete_training_new_48_KD_wt_init](https://huggingface.co/gokuls/bert_12_layer_model_v2_complete_training_new_48_KD_wt_init) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4188
- Accuracy: 0.8775
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 33
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.5095 | 1.0 | 250 | 1.1917 | 0.577 |
| 1.1765 | 2.0 | 500 | 1.1315 | 0.5935 |
| 1.0597 | 3.0 | 750 | 1.0012 | 0.644 |
| 0.94 | 4.0 | 1000 | 0.8924 | 0.669 |
| 0.8338 | 5.0 | 1250 | 0.8369 | 0.6795 |
| 0.8026 | 6.0 | 1500 | 0.8184 | 0.695 |
| 0.7102 | 7.0 | 1750 | 0.6761 | 0.7755 |
| 0.5025 | 8.0 | 2000 | 0.4890 | 0.8665 |
| 0.3924 | 9.0 | 2250 | 0.4188 | 0.8775 |
| 0.36 | 10.0 | 2500 | 0.4127 | 0.8765 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.13.0
- Tokenizers 0.13.3
|
rithwik-db/embeddings-genq-db-small | rithwik-db | 2023-06-21T07:25:25Z | 4 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2023-06-21T07:25:20Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# rithwik-db/embeddings-genq-db-small
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('rithwik-db/embeddings-genq-db-small')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('rithwik-db/embeddings-genq-db-small')
model = AutoModel.from_pretrained('rithwik-db/embeddings-genq-db-small')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=rithwik-db/embeddings-genq-db-small)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 623 with parameters:
```
{'batch_size': 4, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 3,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 10000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
gsarti/it5-large-wiki-summarization | gsarti | 2023-06-21T07:22:58Z | 19 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"italian",
"sequence-to-sequence",
"wikipedia",
"summarization",
"wits",
"it",
"dataset:wits",
"arxiv:2203.03759",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | summarization | 2022-03-02T23:29:05Z | ---
language:
- it
license: apache-2.0
datasets:
- wits
tags:
- italian
- sequence-to-sequence
- wikipedia
- summarization
- wits
widget:
- text: "La 5ª Commissione ha competenza per i disegni di legge riguardanti le specifiche materie del bilancio, del personale e dei servizi del Ministero dell'economia, nonché per i disegni di legge riguardanti la materia finanziaria. La Commissione è composta da 26 senatori (di cui 2 segretari, 2 vicepresidenti di cui 1 componente esterno, e un presidente) scelti in modo omogeneo tra i componenti di quel ramo del Parlamento, in modo da rispecchiarne le forze politiche presenti. Essi sono scelti dai gruppi parlamentari (e non dal Presidente, come invece accade per l'organismo della Giunta parlamentare): per la nomina dei membri ciascun Gruppo, entro cinque giorni dalla propria costituzione, procede, dandone comunicazione alla Presidenza del Senato, alla designazione dei propri rappresentanti nelle singole Commissioni permanenti. Ogni senatore chiamato a far parte del governo o eletto presidente della Commissione è, per la durata della carica, sostituito dal suo gruppo nella Commissione con un altro senatore, che continuerà ad appartenere anche alla Commissione di provenienza. Tranne in rari casi nessun Senatore può essere assegnato a più di una Commissione permanente. Le Commissioni permanenti sono rinnovate dopo il primo biennio della legislatura ed i loro componenti possono essere confermati."
- text: "Interni della chiesa Si pensa che già ai tempi di Gediminas vi fosse una piccola chiesa, probabilmente in legno. Nel 1408 circa Vitoldo costruì la chiesa dello Spirito Santo che andò in seguito ampliata. Nel 1501 Alessandro Jagellone lo donò al monastero domenicano, il più antico della Lituania, che nel 1679-88 fu ampliato e ricostruito. Di quel periodo sopravvivono le mura della chiesa, mentre l'arredamento interno fu realizzato nel 1749-1770 e la cupola affrontò dei lavori di restauro nel 1752-1760. Nel 1844 le autorità zariste chiusero il monastero e la chiesa divenne parrocchiale. Oggi serve la comunità polacca di Vilnius. Su via Šv. Ignoto fu fondato un monastero domenicano nel 1501. Come molti altri edifici, questo monastero fu convertito in una prigione dalle autorità zariste nel 1807. Costituì un luogo di prigionia per molti patrioti lituani, nello specifico i Filareti, i quali parteciparono alle rivolte del 1831 e del 1863. Organo La chiesa si trova lateralmente rispetto alla strada e non ha una facciata principale ben disegnata. L'altezza, inclusa la cupola, è di 51 m. La parte inferiore della facciata (con piccole torri gemelle) è ricoperta da edifici conventuali e l'esterno presenta caratteristiche architettoniche tipiche del tardo barocco. Celebre per i fantasiosi ornamenti rococò, l'interno della chiesa è tra i più celebri della Lituania per via dei cartigli con vari stemmi e affreschi lungo la navata: vi sono 16 altari nella chiesa. Gli altari e il pulpito sono assai decorati con sculture e ornamenti rotondi e in rilievo. Tra gli affreschi barocchi, si pensi alla composizione multi-figurale intitolata ''Apoteosi dello Spirito Santo'' (neobarocco, XIX secolo) nella cupola, 45 dipinti nella chiesa (tra cui un'immagine di Santa Barbara con un'ambientazione del XVII o XVIII secolo, una di Santa Caterina da Siena in stile rococò di Szymon Czechowicz, un ritratto di Alessandro Jagellone di un artista sconosciuto della seconda metà del XVIII secolo). Un ingresso sotto l'altare conduce alle grandi volte, labirintiche, con molte stanze e cripte: i sotterranei ospitano i resti di centinaia di residenti di Vilnius, alcuni dei quali mummificatisi naturalmente, e sono circondati da leggende metropolitane. Sebbene l'esistenza dei sotterranei fosse nota, i primi sforzi per esplorare e mappare le cripte furono abbandonate nonostante lo sforzo degli studenti dell'Università di Vilnius negli anni '30. Tuttavia, questi ultimi non avevano osservato le corrette procedure archeologiche e causarono infatti molti danni: il modus operandi prevedeva lo smistamento delle ossa ponendo tutti i teschi sugli scaffali e rimuovendoli le tombe. Da allora, i resti sono stati spostati molte volte lasciandoli in uno stato casuale e disorganizzato. Stando alle leggende che aleggiano sul luogo, i resti sarebbero di soldati francesi recatisi in città nel corso della campagna di Russia del 1812 avviata da Napoleone Bonaparte, di vittime dell'Inquisizione o della peste nera. Più romantiche risultano le affermazioni di chi sostiene che i corridoi sotterranei facevano parte di una rete di passaggi più ampia che consentiva agli amanti leggendari Barbara Radziwiłł e Sigismondo II Augusto di incontrarsi in segreto. Nel 2011, gli antropologi dell'Università di Vilnius, guidati da Rimantas Jankauskas, avviarono uno studio sui corpi mummificati, stimando settimane dopo che le volte conservassero i resti di circa 600 persone, tra cui molte donne e bambini dalla metà del XVIII secolo all'inizio del XIX secolo. 
Il team ha selezionato i cadaveri meglio conservati e ha eseguito la loro tomografia. I risultati mostrano che molte persone erano in sovrappeso e avevano l'alluce valgo, il che ha portato alla conclusione che si trattava di alti borghesi o comunque di cittadini abbienti. "
- text: "Le dimensioni dell'isola sono di 8 km di lunghezza e di 3,2 km di larghezza. Si trova a 1,6 km a sud-est dell'isola di Renaud, dalla quale è separata dal passaggio Rodman. La sua altezza è di 100 m. Fu scoperta dall'esploratore e baleniere britannico John Biscoe nel 1832 e venne mappata durante una spedizione antartica francese realizzata nel primo decennio del XX secolo. Al comando della spedizione era Jean-Baptiste Charcot e il nome fu scelto per onorare l'esploratore e geografo francese Charles Rabot. === Rivendicazioni territoriali === * Secondo l'Argentina appartiene al dipartimento dell'Antartide Argentina nella provincia della Terra del Fuoco. * Secondo il Cile appartiene al comune antartico della provincia cilena antartica nella regione di Magallanes e dell'Antartico cileno. * Secondo il Regno Unito fa parte del territorio antartico britannico. Per il Trattato Antartico tali rivendicazioni sono sospese. Sull'isola è presente il rifugio Guillochon, sito storico antartico. "
- text: "Vanni ha la sua prima mostra personale nel 1948, alla Galleria Margherita di Roma. Nel 1949 vince una borsa di studio che lo porterà a studiare ad Amsterdam sotto la guida del pittore neoplastico Friedrich Vordemberge-Gildewart. Nel 1952 vince una Fulbright Scholarship che lo porterà a studiare in America, alla Yale University, sotto la guida di Josef Albers. Dal 1953 al 1960 si stabilisce a Parigi, dove illustra alcuni libri per bambini che in seguito vinceranno il premio del Club des Editeurs. Nel 1954 lavora come consulente del colore per il documentario su Picasso di Luciano Emmer, e nel 1955 comincia la sua lunga collaborazione con la Galleria Schneider, affiancando artisti come Corrado Cagli. Dal 1969 al 1974 lavora su dei bassorilievi in vetro resina sui quali vengono proiettati dei film astratti da lui creati, per creare dei quadri che si trasformino continuamente nel tempo. Nel 1979 lascia Roma per stabilirsi a New York, dove alla carriera di pittore affiancherà quella di professore per la prestigiosa Cooper Union School of Art, dove insegnerà ininterrottamente dal 1984 al 2014. L'opera pittorica di Vanni è segnata da una visione estremamente personale, lontana dalle correnti e dai movimenti che hanno caratterizzato la seconda metà del XX secolo. Memore delle lunghe conversazioni avute da Vanni nella sua primissima gioventù, con il filosofo e pittore futurista Alberto Bragaglia, le sue opere sono contrassegnate da un “eclettismo” formale programmatico, alla base del quale resta costante una conoscenza profonda delle molteplici tecniche artistiche utilizzate (tra cui il mosaico, l’affresco e la tempera ad uovo). Pur esprimendosi per lo più in cicli di opere dove l’astrazione formale è la principale componente figurativa, sono da sottolineare alcune opere dove Vanni ha dato prova di una importante padronanza dell’arte figurativa. Importanti e numerose sono le sue realizzazioni anche nel campo dell’illustrazione. Sue sono le illustrazioni per la novella ''Agostino'' di Alberto Moravia, per il libro ''Love'' di Lowell A. Siff e delle ''Contes de Cristal'' di Alice Coléno. Ha tenuto mostre personali in Italia e all’estero ed esposto in mostre collettive di rappresentanza italiana nei musei e nelle gallerie di ogni parte del mondo. "
metrics:
- rouge
- bertscore
model-index:
- name: it5-large-wiki-summarization
results:
- task:
type: wiki-summarization
name: "Wikipedia Summarization"
dataset:
type: wits
name: "WITS"
metrics:
- type: rouge1
value: 0.335
name: "Test Rouge1"
- type: rouge2
value: 0.191
name: "Test Rouge2"
- type: rougeL
value: 0.301
name: "Test RougeL"
- type: bertscore
value: 0.508
name: "Test BERTScore"
args:
- model_type: "dbmdz/bert-base-italian-xxl-uncased"
- lang: "it"
- num_layers: 10
- rescale_with_baseline: True
- baseline_path: "bertscore_baseline_ita.tsv"
co2_eq_emissions:
emissions: "51g"
source: "Google Cloud Platform Carbon Footprint"
training_type: "fine-tuning"
geographical_location: "Eemshaven, Netherlands, Europe"
hardware_used: "1 TPU v3-8 VM"
thumbnail: https://gsarti.com/publication/it5/featured.png
---
# IT5 Large for Wikipedia Summarization ✂️📑 🇮🇹
This repository contains the checkpoint for the [IT5 Large](https://huggingface.co/gsarti/it5-large) model fine-tuned on Wikipedia summarization on the [WITS](https://www.semanticscholar.org/paper/WITS%3A-Wikipedia-for-Italian-Text-Summarization-Casola-Lavelli/ad6c83122e721c7c0db4a40727dac3b4762cd2b1) dataset as part of the experiments of the paper [IT5: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation](https://arxiv.org/abs/2203.03759) by [Gabriele Sarti](https://gsarti.com) and [Malvina Nissim](https://malvinanissim.github.io).
A comprehensive overview of other released materials is provided in the [gsarti/it5](https://github.com/gsarti/it5) repository. Refer to the paper for additional details concerning the reported scores and the evaluation approach.
## Using the model
Model checkpoints are available for usage in Tensorflow, Pytorch and JAX. They can be used directly with pipelines as:
```python
from transformers import pipelines
wikisum = pipeline("summarization", model='it5/it5-large-wiki-summarization')
wikisum("Le dimensioni dell'isola sono di 8 km di lunghezza e di 3,2 km di larghezza. Si trova a 1,6 km a sud-est dell'isola di Renaud, dalla quale è separata dal passaggio Rodman. La sua altezza è di 100 m. Fu scoperta dall'esploratore e baleniere britannico John Biscoe nel 1832 e venne mappata durante una spedizione antartica francese realizzata nel primo decennio del XX secolo. Al comando della spedizione era Jean-Baptiste Charcot e il nome fu scelto per onorare l'esploratore e geografo francese Charles Rabot. === Rivendicazioni territoriali === * Secondo l'Argentina appartiene al dipartimento dell'Antartide Argentina nella provincia della Terra del Fuoco. * Secondo il Cile appartiene al comune antartico della provincia cilena antartica nella regione di Magallanes e dell'Antartico cileno. * Secondo il Regno Unito fa parte del territorio antartico britannico. Per il Trattato Antartico tali rivendicazioni sono sospese. Sull'isola è presente il rifugio Guillochon, sito storico antartico. ")
>>> [{"generated_text": "L' '''isola di Rabot''' si trova in prossimità dell'isola di Renaud, a sud dell'Argentina."}]
```
or loaded using autoclasses:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("it5/it5-large-wiki-summarization")
model = AutoModelForSeq2SeqLM.from_pretrained("it5/it5-large-wiki-summarization")
```
If you use this model in your research, please cite our work as:
```bibtex
@article{sarti-nissim-2022-it5,
title={{IT5}: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation},
author={Sarti, Gabriele and Nissim, Malvina},
journal={ArXiv preprint 2203.03759},
url={https://arxiv.org/abs/2203.03759},
year={2022},
month={mar}
}
``` |
TheBloke/guanaco-13B-GGML | TheBloke | 2023-06-21T07:22:56Z | 0 | 31 | null | [
"arxiv:2305.14314",
"arxiv:2302.13971",
"arxiv:2304.07327",
"license:other",
"region:us"
] | null | 2023-05-25T14:43:00Z | ---
inference: false
license: other
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/Jq4vkcDakD">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# Tim Dettmers' Guanaco 13B GGML
These files are GGML format model files for [Tim Dettmers' Guanaco 13B](https://huggingface.co/timdettmers/guanaco-13b).
GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp) and libraries and UIs which support this format, such as:
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
* [KoboldCpp](https://github.com/LostRuins/koboldcpp)
* [ParisNeo/GPT4All-UI](https://github.com/ParisNeo/gpt4all-ui)
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python)
* [ctransformers](https://github.com/marella/ctransformers)
## Repositories available
* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/guanaco-13B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/guanaco-13B-GGML)
* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/guanaco-13B-HF)
## Prompt template
```
### Human: prompt
### Assistant:
```
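As a rough sketch of programmatic usage with [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) (the file name and generation settings below are illustrative; pick whichever quantised file suits your hardware, noting that k-quant files need a recent llama-cpp-python build):

```python
from llama_cpp import Llama

# Illustrative file choice; any of the .bin files listed below can be used
llm = Llama(model_path="guanaco-13B.ggmlv3.q4_K_M.bin", n_ctx=2048)

prompt = "### Human: Write a story about llamas\n### Assistant:"
output = llm(prompt, max_tokens=256, temperature=0.7, stop=["### Human:"])
print(output["choices"][0]["text"])
```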
<!-- compatibility_ggml start -->
## Compatibility
### Original llama.cpp quant methods: `q4_0, q4_1, q5_0, q5_1, q8_0`
I have quantized these 'original' quantisation methods using an older version of llama.cpp so that they remain compatible with llama.cpp as of May 19th, commit `2d5db48`.
They should be compatible with all current UIs and libraries that use llama.cpp, such as those listed at the top of this README.
### New k-quant methods: `q2_K, q3_K_S, q3_K_M, q3_K_L, q4_K_S, q4_K_M, q5_K_S, q6_K`
These new quantisation methods are only compatible with llama.cpp as of June 6th, commit `2d43387`.
They will NOT be compatible with koboldcpp, text-generation-webui, and other UIs and libraries yet. Support is expected to come over the next few days.
## Explanation of the new k-quant methods
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
* GGML_TYPE_Q8_K - "type-0" 8-bit quantization. Only used for quantizing intermediate results. The difference to the existing Q8_0 is that the block size is 256. All 2-6 bit dot products are implemented for this quantization type.
Refer to the Provided Files table below to see what files use which methods, and how.
<!-- compatibility_ggml end -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| guanaco-13B.ggmlv3.q2_K.bin | q2_K | 2 | 5.43 GB | 7.93 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
| guanaco-13B.ggmlv3.q3_K_L.bin | q3_K_L | 3 | 6.87 GB | 9.37 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| guanaco-13B.ggmlv3.q3_K_M.bin | q3_K_M | 3 | 6.25 GB | 8.75 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| guanaco-13B.ggmlv3.q3_K_S.bin | q3_K_S | 3 | 5.59 GB | 8.09 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors |
| guanaco-13B.ggmlv3.q4_0.bin | q4_0 | 4 | 7.32 GB | 9.82 GB | Original llama.cpp quant method, 4-bit. |
| guanaco-13B.ggmlv3.q4_1.bin | q4_1 | 4 | 8.14 GB | 10.64 GB | Original llama.cpp quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. |
| guanaco-13B.ggmlv3.q4_K_M.bin | q4_K_M | 4 | 7.82 GB | 10.32 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K |
| guanaco-13B.ggmlv3.q4_K_S.bin | q4_K_S | 4 | 7.32 GB | 9.82 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors |
| guanaco-13B.ggmlv3.q5_0.bin | q5_0 | 5 | 8.95 GB | 11.45 GB | Original llama.cpp quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
| guanaco-13B.ggmlv3.q5_1.bin | q5_1 | 5 | 9.76 GB | 12.26 GB | Original llama.cpp quant method, 5-bit. Even higher accuracy, resource usage and slower inference. |
| guanaco-13B.ggmlv3.q5_K_M.bin | q5_K_M | 5 | 9.21 GB | 11.71 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K |
| guanaco-13B.ggmlv3.q5_K_S.bin | q5_K_S | 5 | 8.95 GB | 11.45 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors |
| guanaco-13B.ggmlv3.q6_K.bin | q6_K | 6 | 10.68 GB | 13.18 GB | New k-quant method. Uses GGML_TYPE_Q8_K - 6-bit quantization - for all tensors |
| guanaco-13B.ggmlv3.q8_0.bin | q8_0 | 8 | 13.83 GB | 16.33 GB | Original llama.cpp quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
## How to run in `llama.cpp`
I use the following command line; adjust for your tastes and needs:
```
./main -t 10 -ngl 32 -m guanaco-13B.ggmlv3.q5_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### Human: Write a story about llamas\n### Assistant:"
```
Change `-t 10` to the number of physical CPU cores you have. For example if your system has 8 cores/16 threads, use `-t 8`.
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/Jq4vkcDakD)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov.
**Patreon special mentions**: Ajan Kanaga, Kalila, Derek Yates, Sean Connelly, Luke, Nathan LeClaire, Trenton Dambrowitz, Mano Prime, David Flickinger, vamX, Nikolai Manek, senxiiz, Khalefa Al-Ahmad, Illia Dulskyi, trip7s trip, Jonathan Leane, Talal Aujan, Artur Olbinski, Cory Kujawski, Joseph William Delisle, Pyrater, Oscar Rangel, Lone Striker, Luke Pendergrass, Eugene Pentland, Johann-Peter Hartmann.
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card: Tim Dettmers' Guanaco 13B
# Guanaco Models Based on LLaMA
| [Paper](https://arxiv.org/abs/2305.14314) | [Code](https://github.com/artidoro/qlora) | [Demo](https://huggingface.co/spaces/uwnlp/guanaco-playground-tgi) |
**The Guanaco models are open-source finetuned chatbots obtained through 4-bit QLoRA tuning of LLaMA base models on the OASST1 dataset. They are available in 7B, 13B, 33B, and 65B parameter sizes.**
⚠️Guanaco is a model purely intended for research purposes and could produce problematic outputs.
## Why use Guanaco?
- **Competitive with commercial chatbot systems on the Vicuna and OpenAssistant benchmarks** (ChatGPT and BARD) according to human and GPT-4 raters. We note that the relative performance on tasks not covered in these benchmarks could be very different. In addition, commercial systems evolve over time (we used outputs from the March 2023 version of the models).
- **Available open-source for research purposes**. Guanaco models allow *cheap* and *local* experimentation with high-quality chatbot systems.
- **Replicable and efficient training procedure** that can be extended to new use cases. Guanaco training scripts are available in the [QLoRA repo](https://github.com/artidoro/qlora).
- **Rigorous comparison to 16-bit methods** (both 16-bit full-finetuning and LoRA) in [our paper](https://arxiv.org/abs/2305.14314) demonstrates the effectiveness of 4-bit QLoRA finetuning.
- **Lightweight** checkpoints which only contain adapter weights.
## License and Intended Use
Guanaco adapter weights are available under Apache 2 license. Note that use of the Guanaco adapter weights requires access to the LLaMA model weights.
Guanaco is based on LLaMA and therefore should be used according to the LLaMA license.
## Usage
Here is an example of how you would load Guanaco 7B in 4-bits:
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
model_name = "huggyllama/llama-7b"
adapters_name = 'timdettmers/guanaco-7b'
model = AutoModelForCausalLM.from_pretrained(
model_name,
load_in_4bit=True,
torch_dtype=torch.bfloat16,
device_map="auto",
max_memory= {i: '24000MB' for i in range(torch.cuda.device_count())},
quantization_config=BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_compute_dtype=torch.bfloat16,
bnb_4bit_use_double_quant=True,
bnb_4bit_quant_type='nf4'
),
)
model = PeftModel.from_pretrained(model, adapters_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
Inference can then be performed as usual with HF models as follows:
```python
prompt = "Introduce yourself"
formatted_prompt = (
f"A chat between a curious human and an artificial intelligence assistant."
f"The assistant gives helpful, detailed, and polite answers to the user's questions.\n"
f"### Human: {prompt} ### Assistant:"
)
inputs = tokenizer(formatted_prompt, return_tensors="pt").to("cuda:0")
outputs = model.generate(inputs=inputs.input_ids, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
Expected output similar to the following:
```
A chat between a curious human and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.
### Human: Introduce yourself ### Assistant: I am an artificial intelligence assistant. I am here to help you with any questions you may have.
```
## Current Inference Limitations
Currently, 4-bit inference is slow. We recommend loading in 16 bits if inference speed is a concern. We are actively working on releasing efficient 4-bit inference kernels.
Below is how you would load the model in 16 bits:
```python
model_name = "huggyllama/llama-7b"
adapters_name = 'timdettmers/guanaco-7b'
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype=torch.bfloat16,
device_map="auto",
max_memory= {i: '24000MB' for i in range(torch.cuda.device_count())},
)
model = PeftModel.from_pretrained(model, adapters_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
## Model Card
**Architecture**: The Guanaco models are LoRA adapters to be used on top of LLaMA models. They are added to all layers. For all model sizes, we use $r=64$.
**Base Model**: Guanaco uses LLaMA as base model with sizes 7B, 13B, 33B, 65B. LLaMA is a causal language model pretrained on a large corpus of text. See [LLaMA paper](https://arxiv.org/abs/2302.13971) for more details. Note that Guanaco can inherit biases and limitations of the base model.
**Finetuning Data**: Guanaco is finetuned on OASST1. The exact dataset is available at [timdettmers/openassistant-guanaco](https://huggingface.co/datasets/timdettmers/openassistant-guanaco).
**Languages**: The OASST1 dataset is multilingual (see [the paper](https://arxiv.org/abs/2304.07327) for details) and as such Guanaco responds to user queries in different languages. We note, however, that OASST1 is heavy in high-resource languages. In addition, human evaluation of Guanaco was only performed in English and based on qualitative analysis we observed degradation in performance in other languages.
Next, we describe Training and Evaluation details.
### Training
Guanaco models are the result of 4-bit QLoRA supervised finetuning on the OASST1 dataset.
All models use NormalFloat4 datatype for the base model and LoRA adapters on all linear layers with BFloat16 as computation datatype. We set LoRA $r=64$, $\alpha=16$. We also use Adam beta2 of 0.999, max grad norm of 0.3 and LoRA dropout of 0.1 for models up to 13B and 0.05 for 33B and 65B models.
For the finetuning process, we use constant learning rate schedule and paged AdamW optimizer.
### Training hyperparameters
Size| Dataset | Batch Size | Learning Rate | Max Steps | Sequence length
---|---|---|---|---|---
7B | OASST1 | 16 | 2e-4 | 1875 | 512
13B | OASST1 | 16 | 2e-4 | 1875 | 512
33B | OASST1 | 16 | 1e-4 | 1875 | 512
65B | OASST1 | 16 | 1e-4 | 1875 | 512
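As an illustrative sketch of how this configuration maps onto the `peft` and `transformers` APIs (this is not the authors' training script, and the target-module list is a placeholder for "all linear layers" of the LLaMA blocks):

```python
from peft import LoraConfig
from transformers import TrainingArguments

# Placeholder for "all linear layers" of the LLaMA blocks
target_modules = ["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj"]

lora_config = LoraConfig(
    r=64, lora_alpha=16, lora_dropout=0.1,   # dropout 0.05 for the 33B/65B models
    target_modules=target_modules,
    bias="none", task_type="CAUSAL_LM",
)

training_args = TrainingArguments(
    output_dir="guanaco-qlora",
    per_device_train_batch_size=16,          # batch size from the table above
    learning_rate=2e-4,                      # 1e-4 for the 33B/65B models
    max_steps=1875,
    max_grad_norm=0.3,
    lr_scheduler_type="constant",
    optim="paged_adamw_32bit",
    bf16=True,
)
```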
### Evaluation
We test generative language capabilities through both automated and human evaluations. This second set of evaluations relies on queries curated by humans and aims at measuring the quality of model responses. We use the Vicuna and OpenAssistant datasets with 80 and 953 prompts respectively.
In both human and automated evaluations, for each prompt, raters compare all pairs of responses across the models considered. For human raters we randomize the order of the systems; for GPT-4 we evaluate with both orders.
Benchmark | Vicuna | | Vicuna | | OpenAssistant | | -
-----------|----|-----|--------|---|---------------|---|---
Prompts | 80 | | 80 | | 953 | |
Judge | Human | | GPT-4 | | GPT-4 | |
Model | Elo | Rank | Elo | Rank | Elo | Rank | **Median Rank**
GPT-4 | 1176 | 1 | 1348 | 1 | 1294 | 1 | 1
Guanaco-65B | 1023 | 2 | 1022 | 2 | 1008 | 3 | 2
Guanaco-33B | 1009 | 4 | 992 | 3 | 1002 | 4 | 4
ChatGPT-3.5 Turbo | 916 | 7 | 966 | 5 | 1015 | 2 | 5
Vicuna-13B | 984 | 5 | 974 | 4 | 936 | 5 | 5
Guanaco-13B | 975 | 6 | 913 | 6 | 885 | 6 | 6
Guanaco-7B | 1010 | 3 | 879 | 8 | 860 | 7 | 7
Bard | 909 | 8 | 902 | 7 | - | - | 8
We also use the MMLU benchmark to measure performance on a range of language understanding tasks. This is a multiple-choice benchmark covering 57 tasks including elementary mathematics, US history, computer science, law, and more. We report 5-shot test accuracy.
Dataset | 7B | 13B | 33B | 65B
---|---|---|---|---
LLaMA no tuning | 35.1 | 46.9 | 57.8 | 63.4
Self-Instruct | 36.4 | 33.3 | 53.0 | 56.7
Longform | 32.1 | 43.2 | 56.6 | 59.7
Chip2 | 34.5 | 41.6 | 53.6 | 59.8
HH-RLHF | 34.9 | 44.6 | 55.8 | 60.1
Unnatural Instruct | 41.9 | 48.1 | 57.3 | 61.3
OASST1 (Guanaco) | 36.6 | 46.4 | 57.0 | 62.2
Alpaca | 38.8 | 47.8 | 57.3 | 62.5
FLAN v2 | 44.5 | 51.4 | 59.2 | 63.9
## Risks and Biases
The model can produce factually incorrect output, and should not be relied on to produce factually accurate information. The model was trained on various public datasets; it is possible that this model could generate lewd, biased, or otherwise offensive outputs.
However, we note that finetuning on OASST1 seems to reduce biases as measured on the CrowS dataset. We report here the performance of Guanaco-65B compared to other baseline models on the CrowS dataset.
| | LLaMA-65B | GPT-3 | OPT-175B | Guanaco-65B |
|----------------------|-----------|-------|----------|---------------|
| Gender | 70.6 | 62.6 | 65.7 | **47.5** |
| Religion | 79.0 | 73.3 | 68.6 | **38.7** |
| Race/Color | 57.0 | 64.7 | 68.6 | **45.3** |
| Sexual orientation | 81.0 | 76.2 | 78.6 | **59.1** |
| Age | 70.1 | 64.4 | 67.8 | **36.3** |
| Nationality | 64.2 | 61.6 | 62.9 | **32.4** |
| Disability | 66.7 | 76.7 | 76.7 | **33.9** |
| Physical appearance | 77.8 | 74.6 | 76.2 | **43.1** |
| Socioeconomic status | 71.5 | 73.8 | 76.2 | **55.3** |
| Average | 66.6 | 67.2 | 69.5 | **43.5** |
## Citation
```bibtex
@article{dettmers2023qlora,
title={QLoRA: Efficient Finetuning of Quantized LLMs},
author={Dettmers, Tim and Pagnoni, Artidoro and Holtzman, Ari and Zettlemoyer, Luke},
journal={arXiv preprint arXiv:2305.14314},
year={2023}
}
```
|
TheBloke/guanaco-65B-GGML | TheBloke | 2023-06-21T07:19:00Z | 0 | 100 | null | [
"arxiv:2305.14314",
"arxiv:2302.13971",
"arxiv:2304.07327",
"license:other",
"region:us"
] | null | 2023-05-25T16:42:57Z | ---
inference: false
license: other
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/Jq4vkcDakD">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# Tim Dettmers' Guanaco 65B GGML
These files are GGML format model files for [Tim Dettmers' Guanaco 65B](https://huggingface.co/timdettmers/guanaco-65b).
GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp) and libraries and UIs which support this format, such as:
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
* [KoboldCpp](https://github.com/LostRuins/koboldcpp)
* [ParisNeo/GPT4All-UI](https://github.com/ParisNeo/gpt4all-ui)
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python)
* [ctransformers](https://github.com/marella/ctransformers)
## Repositories available
* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/guanaco-65B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/guanaco-65B-GGML)
* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/guanaco-65B-HF)
## Prompt template
```
### Human: prompt
### Assistant:
```
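As a rough sketch of programmatic usage with [ctransformers](https://github.com/marella/ctransformers) (the file name and settings below are illustrative; pick whichever quantised file fits your hardware):

```python
from ctransformers import AutoModelForCausalLM

# Illustrative file choice; point model_file at whichever .bin you downloaded
llm = AutoModelForCausalLM.from_pretrained(
    "TheBloke/guanaco-65B-GGML",
    model_file="guanaco-65B.ggmlv3.q4_K_M.bin",
    model_type="llama",
)

prompt = "### Human: Write a story about llamas\n### Assistant:"
print(llm(prompt, max_new_tokens=256, temperature=0.7))
```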
<!-- compatibility_ggml start -->
## Compatibility
### Original llama.cpp quant methods: `q4_0, q4_1, q5_0, q5_1, q8_0`
I have quantized these 'original' quantisation methods using an older version of llama.cpp so that they remain compatible with llama.cpp as of May 19th, commit `2d5db48`.
They should be compatible with all current UIs and libraries that use llama.cpp, such as those listed at the top of this README.
### New k-quant methods: `q2_K, q3_K_S, q3_K_M, q3_K_L, q4_K_S, q4_K_M, q5_K_S, q6_K`
These new quantisation methods are compatible with llama.cpp as of June 6th, commit `2d43387`.
They are now also compatible with recent releases of text-generation-webui, KoboldCpp, llama-cpp-python and ctransformers. Other tools and libraries may or may not be compatible - check their documentation if in doubt.
## Explanation of the new k-quant methods
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
* GGML_TYPE_Q8_K - "type-0" 8-bit quantization. Only used for quantizing intermediate results. The difference to the existing Q8_0 is that the block size is 256. All 2-6 bit dot products are implemented for this quantization type.
Refer to the Provided Files table below to see what files use which methods, and how.
<!-- compatibility_ggml end -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| guanaco-65B.ggmlv3.q2_K.bin | q2_K | 2 | 27.33 GB | 29.83 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
| guanaco-65B.ggmlv3.q3_K_L.bin | q3_K_L | 3 | 34.55 GB | 37.05 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| guanaco-65B.ggmlv3.q3_K_M.bin | q3_K_M | 3 | 31.40 GB | 33.90 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| guanaco-65B.ggmlv3.q3_K_S.bin | q3_K_S | 3 | 28.06 GB | 30.56 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors |
| guanaco-65B.ggmlv3.q4_0.bin | q4_0 | 4 | 36.73 GB | 39.23 GB | Original llama.cpp quant method, 4-bit. |
| guanaco-65B.ggmlv3.q4_1.bin | q4_1 | 4 | 40.81 GB | 43.31 GB | Original llama.cpp quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. |
| guanaco-65B.ggmlv3.q4_K_M.bin | q4_K_M | 4 | 39.28 GB | 41.78 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K |
| guanaco-65B.ggmlv3.q4_K_S.bin | q4_K_S | 4 | 36.73 GB | 39.23 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors |
| guanaco-65B.ggmlv3.q5_0.bin | q5_0 | 5 | 44.89 GB | 47.39 GB | Original llama.cpp quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
| guanaco-65B.ggmlv3.q5_1.bin | q5_1 | 5 | 48.97 GB | 51.47 GB | Original llama.cpp quant method, 5-bit. Even higher accuracy, resource usage and slower inference. |
| guanaco-65B.ggmlv3.q5_K_M.bin | q5_K_M | 5 | 46.20 GB | 48.70 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K |
| guanaco-65B.ggmlv3.q5_K_S.bin | q5_K_S | 5 | 44.89 GB | 47.39 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors |
| guanaco-65B.ggmlv3.q6_K.bin | q6_K | 6 | 53.56 GB | 56.06 GB | New k-quant method. Uses GGML_TYPE_Q8_K - 6-bit quantization - for all tensors |
| guanaco-65B.ggmlv3.q8_0.bin | q8_0 | 8 | 69.37 GB | 71.87 GB | Original llama.cpp quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
### q6_K and q8_0 files require expansion from archive
**Note:** HF does not support uploading files larger than 50GB. Therefore I have uploaded the q6_K and q8_0 files as multi-part ZIP files. They are not compressed; each ZIP simply stores the .bin file split into two parts.
### q6_K
Please download:
* `guanaco-65B.ggmlv3.q6_K.zip`
* `guanaco-65B.ggmlv3.q6_K.z01`
### q8_0
Please download:
* `guanaco-65B.ggmlv3.q8_0.zip`
* `guanaco-65B.ggmlv3.q8_0.z01`
Then extract the .zip archive. This will expand both parts automatically. On Linux I found I had to use `7zip` - the basic `unzip` tool did not work. Example:
```
sudo apt update -y && sudo apt install 7zip
7zz x guanaco-65B.ggmlv3.q6_K.zip
```
Once the `.bin` is extracted, you can delete the `.zip` and `.z01` files.
## How to run in `llama.cpp`
I use the following command line; adjust for your tastes and needs:
```
./main -t 10 -ngl 32 -m guanaco-65B.ggmlv3.q5_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### Human: Write a story about llamas\n### Assistant:"
```
Change `-t 10` to the number of physical CPU cores you have. For example if your system has 8 cores/16 threads, use `-t 8`.
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/Jq4vkcDakD)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov.
**Patreon special mentions**: Ajan Kanaga, Kalila, Derek Yates, Sean Connelly, Luke, Nathan LeClaire, Trenton Dambrowitz, Mano Prime, David Flickinger, vamX, Nikolai Manek, senxiiz, Khalefa Al-Ahmad, Illia Dulskyi, trip7s trip, Jonathan Leane, Talal Aujan, Artur Olbinski, Cory Kujawski, Joseph William Delisle, Pyrater, Oscar Rangel, Lone Striker, Luke Pendergrass, Eugene Pentland, Johann-Peter Hartmann.
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card: Tim Dettmers' Guanaco 65B
# Guanaco Models Based on LLaMA
| [Paper](https://arxiv.org/abs/2305.14314) | [Code](https://github.com/artidoro/qlora) | [Demo](https://huggingface.co/spaces/uwnlp/guanaco-playground-tgi) |
**The Guanaco models are open-source finetuned chatbots obtained through 4-bit QLoRA tuning of LLaMA base models on the OASST1 dataset. They are available in 7B, 13B, 33B, and 65B parameter sizes.**
⚠️Guanaco is a model purely intended for research purposes and could produce problematic outputs.
## Why use Guanaco?
- **Competitive with commercial chatbot systems on the Vicuna and OpenAssistant benchmarks** (ChatGPT and BARD) according to human and GPT-4 raters. We note that the relative performance on tasks not covered in these benchmarks could be very different. In addition, commercial systems evolve over time (we used outputs from the March 2023 version of the models).
- **Available open-source for research purposes**. Guanaco models allow *cheap* and *local* experimentation with high-quality chatbot systems.
- **Replicable and efficient training procedure** that can be extended to new use cases. Guanaco training scripts are available in the [QLoRA repo](https://github.com/artidoro/qlora).
- **Rigorous comparison to 16-bit methods** (both 16-bit full-finetuning and LoRA) in [our paper](https://arxiv.org/abs/2305.14314) demonstrates the effectiveness of 4-bit QLoRA finetuning.
- **Lightweight** checkpoints which only contain adapter weights.
## License and Intended Use
Guanaco adapter weights are available under Apache 2 license. Note that use of the Guanaco adapter weights requires access to the LLaMA model weights.
Guanaco is based on LLaMA and therefore should be used according to the LLaMA license.
## Usage
Here is an example of how you would load Guanaco 7B in 4-bits:
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
model_name = "huggyllama/llama-7b"
adapters_name = 'timdettmers/guanaco-7b'
model = AutoModelForCausalLM.from_pretrained(
model_name,
load_in_4bit=True,
torch_dtype=torch.bfloat16,
device_map="auto",
max_memory= {i: '24000MB' for i in range(torch.cuda.device_count())},
quantization_config=BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_compute_dtype=torch.bfloat16,
bnb_4bit_use_double_quant=True,
bnb_4bit_quant_type='nf4'
),
)
model = PeftModel.from_pretrained(model, adapters_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
Inference can then be performed as usual with HF models as follows:
```python
prompt = "Introduce yourself"
formatted_prompt = (
f"A chat between a curious human and an artificial intelligence assistant."
f"The assistant gives helpful, detailed, and polite answers to the user's questions.\n"
f"### Human: {prompt} ### Assistant:"
)
inputs = tokenizer(formatted_prompt, return_tensors="pt").to("cuda:0")
outputs = model.generate(inputs=inputs.input_ids, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
Expected output similar to the following:
```
A chat between a curious human and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.
### Human: Introduce yourself ### Assistant: I am an artificial intelligence assistant. I am here to help you with any questions you may have.
```
## Current Inference Limitations
Currently, 4-bit inference is slow. We recommend loading in 16 bits if inference speed is a concern. We are actively working on releasing efficient 4-bit inference kernels.
Below is how you would load the model in 16 bits:
```python
model_name = "huggyllama/llama-7b"
adapters_name = 'timdettmers/guanaco-7b'
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype=torch.bfloat16,
device_map="auto",
max_memory= {i: '24000MB' for i in range(torch.cuda.device_count())},
)
model = PeftModel.from_pretrained(model, adapters_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
## Model Card
**Architecture**: The Guanaco models are LoRA adapters to be used on top of LLaMA models. They are added to all layers. For all model sizes, we use $r=64$.
**Base Model**: Guanaco uses LLaMA as base model with sizes 7B, 13B, 33B, 65B. LLaMA is a causal language model pretrained on a large corpus of text. See [LLaMA paper](https://arxiv.org/abs/2302.13971) for more details. Note that Guanaco can inherit biases and limitations of the base model.
**Finetuning Data**: Guanaco is finetuned on OASST1. The exact dataset is available at [timdettmers/openassistant-guanaco](https://huggingface.co/datasets/timdettmers/openassistant-guanaco).
**Languages**: The OASST1 dataset is multilingual (see [the paper](https://arxiv.org/abs/2304.07327) for details) and as such Guanaco responds to user queries in different languages. We note, however, that OASST1 is heavy in high-resource languages. In addition, human evaluation of Guanaco was only performed in English and based on qualitative analysis we observed degradation in performance in other languages.
Next, we describe Training and Evaluation details.
### Training
Guanaco models are the result of 4-bit QLoRA supervised finetuning on the OASST1 dataset.
All models use NormalFloat4 datatype for the base model and LoRA adapters on all linear layers with BFloat16 as computation datatype. We set LoRA $r=64$, $\alpha=16$. We also use Adam beta2 of 0.999, max grad norm of 0.3 and LoRA dropout of 0.1 for models up to 13B and 0.05 for 33B and 65B models.
For the finetuning process, we use constant learning rate schedule and paged AdamW optimizer.
### Training hyperparameters
Size| Dataset | Batch Size | Learning Rate | Max Steps | Sequence length
---|---|---|---|---|---
7B | OASST1 | 16 | 2e-4 | 1875 | 512
13B | OASST1 | 16 | 2e-4 | 1875 | 512
33B | OASST1 | 16 | 1e-4 | 1875 | 512
65B | OASST1 | 16 | 1e-4 | 1875 | 512
### Evaluation
We test generative language capabilities through both automated and human evaluations. This second set of evaluations relies on queries curated by humans and aims at measuring the quality of model responses. We use the Vicuna and OpenAssistant datasets with 80 and 953 prompts respectively.
In both human and automated evaluations, for each prompt, raters compare all pairs of responses across the models considered. For human raters we randomize the order of the systems; for GPT-4 we evaluate with both orders.
Benchmark | Vicuna | | Vicuna | | OpenAssistant | | -
-----------|----|-----|--------|---|---------------|---|---
Prompts | 80 | | 80 | | 953 | |
Judge | Human | | GPT-4 | | GPT-4 | |
Model | Elo | Rank | Elo | Rank | Elo | Rank | **Median Rank**
GPT-4 | 1176 | 1 | 1348 | 1 | 1294 | 1 | 1
Guanaco-65B | 1023 | 2 | 1022 | 2 | 1008 | 3 | 2
Guanaco-33B | 1009 | 4 | 992 | 3 | 1002 | 4 | 4
ChatGPT-3.5 Turbo | 916 | 7 | 966 | 5 | 1015 | 2 | 5
Vicuna-13B | 984 | 5 | 974 | 4 | 936 | 5 | 5
Guanaco-13B | 975 | 6 | 913 | 6 | 885 | 6 | 6
Guanaco-7B | 1010 | 3 | 879 | 8 | 860 | 7 | 7
Bard | 909 | 8 | 902 | 7 | - | - | 8
We also use the MMLU benchmark to measure performance on a range of language understanding tasks. This is a multiple-choice benchmark covering 57 tasks including elementary mathematics, US history, computer science, law, and more. We report 5-shot test accuracy.
Dataset | 7B | 13B | 33B | 65B
---|---|---|---|---
LLaMA no tuning | 35.1 | 46.9 | 57.8 | 63.4
Self-Instruct | 36.4 | 33.3 | 53.0 | 56.7
Longform | 32.1 | 43.2 | 56.6 | 59.7
Chip2 | 34.5 | 41.6 | 53.6 | 59.8
HH-RLHF | 34.9 | 44.6 | 55.8 | 60.1
Unnatural Instruct | 41.9 | 48.1 | 57.3 | 61.3
OASST1 (Guanaco) | 36.6 | 46.4 | 57.0 | 62.2
Alpaca | 38.8 | 47.8 | 57.3 | 62.5
FLAN v2 | 44.5 | 51.4 | 59.2 | 63.9
## Risks and Biases
The model can produce factually incorrect output, and should not be relied on to produce factually accurate information. The model was trained on various public datasets; it is possible that this model could generate lewd, biased, or otherwise offensive outputs.
However, we note that finetuning on OASST1 seems to reduce biases as measured on the CrowS dataset. We report here the performance of Guanaco-65B compared to other baseline models on the CrowS dataset.
| | LLaMA-65B | GPT-3 | OPT-175B | Guanaco-65B |
|----------------------|-----------|-------|----------|---------------|
| Gender | 70.6 | 62.6 | 65.7 | **47.5** |
| Religion | 79.0 | 73.3 | 68.6 | **38.7** |
| Race/Color | 57.0 | 64.7 | 68.6 | **45.3** |
| Sexual orientation | 81.0 | 76.2 | 78.6 | **59.1** |
| Age | 70.1 | 64.4 | 67.8 | **36.3** |
| Nationality | 64.2 | 61.6 | 62.9 | **32.4** |
| Disability | 66.7 | 76.7 | 76.7 | **33.9** |
| Physical appearance | 77.8 | 74.6 | 76.2 | **43.1** |
| Socioeconomic status | 71.5 | 73.8 | 76.2 | **55.3** |
| Average | 66.6 | 67.2 | 69.5 | **43.5** |
## Citation
```bibtex
@article{dettmers2023qlora,
title={QLoRA: Efficient Finetuning of Quantized LLMs},
author={Dettmers, Tim and Pagnoni, Artidoro and Holtzman, Ari and Zettlemoyer, Luke},
journal={arXiv preprint arXiv:2305.14314},
year={2023}
}
```
|
gokuls/hbertv2-emotion_48_KD | gokuls | 2023-06-21T07:17:08Z | 47 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-06-21T07:04:46Z | ---
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
model-index:
- name: hbertv2-emotion_48_KD
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.581
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hbertv2-emotion_48_KD
This model is a fine-tuned version of [gokuls/bert_12_layer_model_v2_complete_training_new_48_KD](https://huggingface.co/gokuls/bert_12_layer_model_v2_complete_training_new_48_KD) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1141
- Accuracy: 0.581
## Model description
More information needed
## Intended uses & limitations
More information needed
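As a minimal sketch (assuming the custom `hybridbert` architecture can be loaded through the standard Auto classes), inference could look like:

```python
from transformers import pipeline

# Assumes the custom hybridbert architecture is loadable via the Auto classes
classifier = pipeline("text-classification", model="gokuls/hbertv2-emotion_48_KD")
print(classifier("I am so happy today!"))
```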
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 33
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.5998 | 1.0 | 250 | 1.5899 | 0.352 |
| 1.5872 | 2.0 | 500 | 1.5967 | 0.275 |
| 1.6045 | 3.0 | 750 | 1.6220 | 0.275 |
| 1.5911 | 4.0 | 1000 | 1.4863 | 0.502 |
| 1.3919 | 5.0 | 1250 | 1.3375 | 0.5325 |
| 1.2936 | 6.0 | 1500 | 1.3029 | 0.546 |
| 1.2151 | 7.0 | 1750 | 1.2297 | 0.559 |
| 1.1985 | 8.0 | 2000 | 1.2658 | 0.551 |
| 1.1637 | 9.0 | 2250 | 1.1574 | 0.577 |
| 1.0928 | 10.0 | 2500 | 1.1141 | 0.581 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.13.0
- Tokenizers 0.13.3
|
s3nh/DialoGPT-large-Morty | s3nh | 2023-06-21T07:11:34Z | 7 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"conversational",
"en",
"license:openrail",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-01-31T22:22:09Z | ---
license: openrail
language:
- en
pipeline_tag: conversational
---
<a href="https://www.buymeacoffee.com/s3nh"><img src="https://www.buymeacoffee.com/assets/img/guidelines/download-assets-sm-1.svg" alt=""></a>
<img src = 'https://images.unsplash.com/photo-1592564630984-7410f94db184?ixlib=rb-4.0.3&ixid=MnwxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8&auto=format&fit=crop&w=1146&q=80'>
### Description
DialoGPT is a variant of OpenAI's GPT (Generative Pretrained Transformer) language model, adapted for dialogue. It is a deep neural network-based language model trained on massive amounts of text data to generate human-like text.
DialoGPT uses the transformer architecture, a type of neural network designed for processing sequential data such as language. During training, the model is exposed to a large corpus of text and learns to predict the next word in a sequence given the previous words.
In the context of dialogue, DialoGPT is trained to predict the response in a conversation, given the context of the conversation. This context can include one or more turns of the conversation, along with additional information such as the topic of the conversation or the speaker's personality.
At inference time, the model takes the current context of the conversation as input and generates a response by sampling from the model's predicted distribution over the vocabulary.
Overall, DialoGPT provides a flexible and powerful solution for generating human-like text in a conversational context, enabling applications such as chatbots, conversational agents, and virtual assistants.
## Parameters
Model was trained for 40 epochs, using params as follows.
```
self.per_gpu_train_batch_size: int = 2
self.per_gpu_eval_batch_size: int = 2
self.gradient_accumulation_steps: int = 1
self.learning_rate: float = 5e-5
self.weight_decay: float = 0.0
self.adam_epsilon: float = 1e-8
self.max_grad_norm: int = 1.0
self.num_train_epochs: int = 40
self.max_steps: int = -1
self.warmup_steps: int = 0
self.logging_steps: int = 1000
self.save_steps: int = 3500
self.save_total_limit = None
self.eval_all_checkpoints: bool = False
self.no_cuda: bool = False
self.overwrite_output_dir: bool = True
self.overwrite_cache: bool = True
self.should_continue: bool = False
self.seed: int = 42
self.local_rank: int = -1
self.fp16: bool = False
self.fp16_opt_level: str = 'O1'
```
## Usage
DialoGPT large version, fine-tuned on Morty's lines (a character from the Rick and Morty cartoon).
A simple snippet showing how to run inference with this model:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('s3nh/DialoGPT-large-Morty')
model = AutoModelForCausalLM.from_pretrained('s3nh/DialoGPT-large-Morty')

for step in range(4):
    # Encode the new user input, appending the end-of-string token
    new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')
    # Append the new input to the chat history (if there is any)
    bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids
    # Generate a response, sampling with top-k/top-p filtering
    chat_history_ids = model.generate(
        bot_input_ids, max_length=200,
        pad_token_id=tokenizer.eos_token_id,
        no_repeat_ngram_size=3,
        do_sample=True,
        top_k=100,
        top_p=0.7,
        temperature=0.8
    )
    print("MortyBot: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
```
|
mahmabuk/Gradios | mahmabuk | 2023-06-21T07:02:34Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-21T07:02:34Z | ---
license: creativeml-openrail-m
---
|
gokuls/hbertv2-emotion_w_in | gokuls | 2023-06-21T07:02:07Z | 47 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-06-21T06:49:13Z | ---
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
model-index:
- name: hbertv2-emotion_w_in
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9305
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hbertv2-emotion_w_in
This model is a fine-tuned version of [gokuls/bert_12_layer_model_v2_complete_training_new_wt_init](https://huggingface.co/gokuls/bert_12_layer_model_v2_complete_training_new_wt_init) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2778
- Accuracy: 0.9305
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 33
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.647 | 1.0 | 250 | 0.3054 | 0.889 |
| 0.2593 | 2.0 | 500 | 0.2384 | 0.913 |
| 0.1844 | 3.0 | 750 | 0.2164 | 0.923 |
| 0.1493 | 4.0 | 1000 | 0.1990 | 0.9235 |
| 0.1244 | 5.0 | 1250 | 0.1908 | 0.9265 |
| 0.1006 | 6.0 | 1500 | 0.2328 | 0.923 |
| 0.0857 | 7.0 | 1750 | 0.2379 | 0.93 |
| 0.0724 | 8.0 | 2000 | 0.2638 | 0.9235 |
| 0.0591 | 9.0 | 2250 | 0.2778 | 0.9305 |
| 0.0407 | 10.0 | 2500 | 0.3189 | 0.927 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.13.0
- Tokenizers 0.13.3
|
kchen621/Taxi-v3 | kchen621 | 2023-06-21T06:57:44Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-06-21T06:57:42Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.73
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
import gym  # or: import gymnasium as gym, depending on your setup

# load_from_hub is the pickle-loading helper from the Hugging Face Deep RL course notebooks
model = load_from_hub(repo_id="kchen621/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
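A rough evaluation sketch is shown below, reusing `model` and `env` from the snippet above. It assumes the loaded dictionary stores the Q-table under a `qtable` key (as in the course notebooks; key names may differ) and the newer gymnasium-style `reset`/`step` signatures:

```python
import numpy as np

qtable = model["qtable"]  # assumed key name in the pickled dict

rewards = []
for _ in range(100):
    state, _ = env.reset()                      # older gym: state = env.reset()
    done, total = False, 0
    while not done:
        action = int(np.argmax(qtable[state]))  # act greedily w.r.t. the Q-table
        state, reward, terminated, truncated, _ = env.step(action)
        done = terminated or truncated
        total += reward
    rewards.append(total)
print(f"Mean reward over 100 episodes: {np.mean(rewards):.2f}")
```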
|
gokuls/hbertv2-emotion_48 | gokuls | 2023-06-21T06:47:24Z | 47 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-06-21T06:34:15Z | ---
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
model-index:
- name: hbertv2-emotion_48
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.8955
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hbertv2-emotion_48
This model is a fine-tuned version of [gokuls/bert_12_layer_model_v2_complete_training_new_48](https://huggingface.co/gokuls/bert_12_layer_model_v2_complete_training_new_48) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3617
- Accuracy: 0.8955
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 33
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.3078 | 1.0 | 250 | 1.0127 | 0.613 |
| 0.6832 | 2.0 | 500 | 0.5913 | 0.81 |
| 0.4602 | 3.0 | 750 | 0.4422 | 0.876 |
| 0.3479 | 4.0 | 1000 | 0.5042 | 0.8575 |
| 0.2844 | 5.0 | 1250 | 0.3738 | 0.8825 |
| 0.2439 | 6.0 | 1500 | 0.3575 | 0.886 |
| 0.2029 | 7.0 | 1750 | 0.3617 | 0.8955 |
| 0.1675 | 8.0 | 2000 | 0.3826 | 0.887 |
| 0.1387 | 9.0 | 2250 | 0.3930 | 0.8875 |
| 0.1134 | 10.0 | 2500 | 0.4199 | 0.894 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.13.0
- Tokenizers 0.13.3
|
alcanteria/zlfa | alcanteria | 2023-06-21T06:37:11Z | 0 | 0 | null | [
"safetensors",
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-18T20:25:33Z | ---
license: creativeml-openrail-m
---
|
gokuls/hbertv2-emotion_48_w_in | gokuls | 2023-06-21T06:32:12Z | 47 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-06-21T06:19:02Z | ---
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
model-index:
- name: hbertv2-emotion_48_w_in
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.931
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hbertv2-emotion_48_w_in
This model is a fine-tuned version of [gokuls/bert_12_layer_model_v2_complete_training_new_wt_init_48](https://huggingface.co/gokuls/bert_12_layer_model_v2_complete_training_new_wt_init_48) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1958
- Accuracy: 0.931
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 33
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6416 | 1.0 | 250 | 0.2852 | 0.8945 |
| 0.2565 | 2.0 | 500 | 0.1956 | 0.9205 |
| 0.1769 | 3.0 | 750 | 0.2171 | 0.9225 |
| 0.1447 | 4.0 | 1000 | 0.1815 | 0.9225 |
| 0.1192 | 5.0 | 1250 | 0.1960 | 0.9235 |
| 0.1023 | 6.0 | 1500 | 0.1958 | 0.931 |
| 0.0811 | 7.0 | 1750 | 0.2533 | 0.928 |
| 0.0698 | 8.0 | 2000 | 0.3094 | 0.9305 |
| 0.0547 | 9.0 | 2250 | 0.2719 | 0.93 |
| 0.0442 | 10.0 | 2500 | 0.3100 | 0.926 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.13.0
- Tokenizers 0.13.3
|